Title: PyHySCO: GPU-enabled susceptibility artifact distortion correction in seconds

Over the past decade, reversed gradient polarity (RGP) methods have become a popular approach for correcting susceptibility artifacts in echo-planar imaging (EPI). Although several post-processing tools for RGP are available, their implementations do not fully leverage recent hardware, algorithmic, and computational advances, leading to correction times of several minutes per image volume. To enable 3D RGP correction in seconds, we introduce PyTorch Hyperelastic Susceptibility Correction (PyHySCO), a user-friendly EPI distortion correction tool implemented in PyTorch that enables multi-threading and efficient use of graphics processing units (GPUs). PyHySCO uses a time-tested physical distortion model and mathematical formulation and is, therefore, reliable without training. An algorithmic improvement in PyHySCO is its use of the one-dimensional distortion correction method by Chang and Fitzpatrick to initialize the non-linear optimization. PyHySCO is published under the GNU General Public License and can be used from the command line or via its Python interface. Our extensive numerical validation using 3T and 7T data from the Human Connectome Project suggests that PyHySCO can achieve accuracy comparable to that of leading RGP tools at a fraction of the cost. We also validate the new initialization scheme, compare different optimization algorithms, and test the algorithm on different hardware and arithmetic precisions.
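The Chang and Fitzpatrick initialization is worth unpacking: with reversed gradient polarity, the two acquisitions displace tissue in opposite directions along the phase-encode axis while (approximately) preserving total intensity, so matching the cumulative intensity profiles of the two images yields a closed-form 1D displacement estimate. Below is a minimal PyTorch sketch of that idea for a single image line; the function name and interface are illustrative assumptions, not PyHySCO's actual API.

```python
import torch

def init_field_map_1d(i_plus, i_minus, eps=1e-6):
    """1D Chang-Fitzpatrick-style displacement estimate (illustrative).

    i_plus, i_minus: intensities along the phase-encode direction from the
    two reversed-polarity acquisitions. Returns displacements b on a grid
    of matched quantile positions.
    """
    n = i_plus.shape[0]
    # Distortion (approximately) conserves intensity along the line, so the
    # two cumulative profiles parameterize the same underlying anatomy.
    c_plus = torch.cumsum(i_plus.clamp(min=0) + eps, dim=0)
    c_minus = torch.cumsum(i_minus.clamp(min=0) + eps, dim=0)
    c_plus, c_minus = c_plus / c_plus[-1], c_minus / c_minus[-1]

    levels = torch.linspace(eps, 1 - eps, n, dtype=i_plus.dtype)

    def inv_cdf(c, t):
        # Invert a monotone cumulative profile by linear interpolation.
        idx = torch.searchsorted(c, t).clamp(1, n - 1)
        lo, hi = c[idx - 1], c[idx]
        return (idx - 1).to(t.dtype) + (t - lo) / (hi - lo + eps)

    x_plus = inv_cdf(c_plus, levels)    # quantile positions in image "+"
    x_minus = inv_cdf(c_minus, levels)  # ... and in image "-"
    # The two polarities shift anatomy by +b and -b, so b is half the gap.
    return 0.5 * (x_plus - x_minus)
```

Per the abstract, this one-dimensional estimate serves only as a starting point; PyHySCO refines it with its full non-linear (hyperelastic) optimization.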

Award ID(s):
1751636 2038118
NSF-PAR ID:
10512666
Author(s) / Creator(s):
Publisher / Repository:
Frontiers in Neuroscience
Date Published:
Journal Name:
Frontiers in Neuroscience
Volume:
18
ISSN:
1662-453X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Purpose

    We propose and evaluate a new structured low‐rank method for echo‐planar imaging (EPI) ghost correction called Robust Autocalibrated LORAKS (RAC‐LORAKS). The method can be used to suppress EPI ghosts arising from the differences between different readout gradient polarities and/or the differences between different shots. It does not require conventional EPI navigator signals, and is robust to imperfect autocalibration data.

    Methods

    Autocalibrated LORAKS is a previous structured low‐rank method for EPI ghost correction that uses GRAPPA‐type autocalibration data to enable high‐quality ghost correction. This method works well when the autocalibration data are pristine, but performance degrades substantially when the autocalibration information is imperfect. RAC‐LORAKS generalizes Autocalibrated LORAKS in two ways. First, it does not completely trust the information from autocalibration data, and instead considers the autocalibration and EPI data simultaneously when estimating low‐rank matrix structure. Second, it uses complementary information from the autocalibration data to improve EPI reconstruction in a multi‐contrast joint reconstruction framework. RAC‐LORAKS is evaluated using simulations and in vivo data, including comparisons to state‐of‐the‐art methods.

    Results

    RAC‐LORAKS is demonstrated to have good ghost elimination performance compared to state‐of‐the‐art methods in several complicated EPI acquisition scenarios (including gradient‐echo brain imaging, diffusion‐encoded brain imaging, and cardiac imaging).

    Conclusions

    RAC‐LORAKS provides effective suppression of EPI ghosts and is robust to imperfect autocalibration data.
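    For readers new to structured low-rank EPI methods, the core building block is a matrix of overlapping k-space neighborhoods that is approximately low-rank for ghost-free data; reconstruction alternates between enforcing that structure and data consistency. The sketch below shows a generic low-rank patch projection in PyTorch. It is our simplified illustration; RAC-LORAKS's actual formulation, with autocalibration terms and joint multi-contrast reconstruction, is considerably richer.

    ```python
    import torch

    def lowrank_project(kspace, patch=5, rank=20):
        """Project 2D k-space onto a low-rank patch structure (generic sketch).

        Overlapping patch neighborhoods become rows of a structured matrix,
        the matrix is truncated to `rank` by SVD, and the modified patches
        are averaged back onto the k-space grid.
        """
        ny, nx = kspace.shape
        patches = kspace.unfold(0, patch, 1).unfold(1, patch, 1)
        npy, npx = patches.shape[:2]
        mat = patches.reshape(npy * npx, patch * patch)
        u, s, vh = torch.linalg.svd(mat, full_matrices=False)
        s[rank:] = 0                      # enforce (approximate) low rank
        patches_lr = ((u * s) @ vh).reshape(npy, npx, patch, patch)
        out = torch.zeros_like(kspace)
        count = torch.zeros(ny, nx)
        for dy in range(patch):           # scatter-average overlapping patches
            for dx in range(patch):
                out[dy:dy + npy, dx:dx + npx] += patches_lr[:, :, dy, dx]
                count[dy:dy + npy, dx:dx + npx] += 1
        return out / count

    # In an iterative reconstruction this projection would alternate with a
    # data-consistency step on the acquired samples.
    k = torch.fft.fft2(torch.randn(32, 32) + 0j)
    k_denoised = lowrank_project(k)
    ```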

  2. Most modern commodity imaging systems we use directly for photography—or indirectly rely on for downstream applications—employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today's compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss' linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting—together with hardware ISP post-processing and detection—outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
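    To make the end-to-end idea concrete, the toy sketch below jointly trains a differentiable "optics" stage (a learnable Gaussian blur standing in for a real lens model) and a small neural ISP with alternating updates, loosely echoing the paper's alternating proximal scheme. All module names, shapes, and the reconstruction loss here are our illustrative assumptions; the actual method couples to physical lens designs through tools such as Zemax.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyOptics(nn.Module):
        """Differentiable stand-in for a lens: Gaussian PSF, learnable width."""
        def __init__(self):
            super().__init__()
            self.log_sigma = nn.Parameter(torch.zeros(()))  # "lens design" knob

        def forward(self, img):
            sigma = self.log_sigma.exp()
            r = torch.arange(-4, 5, dtype=img.dtype, device=img.device)
            k1d = torch.exp(-0.5 * (r / sigma) ** 2)
            k1d = k1d / k1d.sum()
            c = img.shape[1]
            psf = (k1d[:, None] * k1d[None, :]).view(1, 1, 9, 9).repeat(c, 1, 1, 1)
            return F.conv2d(img, psf, padding=4, groups=c)  # depthwise blur

    optics = ToyOptics()
    isp = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
    opt_optics = torch.optim.Adam(optics.parameters(), lr=1e-2)
    opt_isp = torch.optim.Adam(isp.parameters(), lr=1e-3)

    scene = torch.rand(4, 3, 32, 32)  # stand-in for a batch of training images
    for step in range(200):
        # Alternate: update the ISP with the optics fixed, then vice versa.
        for opt in (opt_isp, opt_optics):
            opt.zero_grad()
            loss = F.mse_loss(isp(optics(scene)), scene)  # task-loss placeholder
            loss.backward()
            opt.step()
    ```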
  3. Over the past decade, machine learning model complexity has grown at an extraordinary rate, as has the scale of the systems training such large models. However, hardware utilization in large-scale AI systems is alarmingly low (5-20%). This low system utilization is the cumulative effect of minor losses across different layers of the stack, exacerbated by the disconnect between engineers designing different layers, often spanning different industries. To address this challenge, in this work we designed a cross-stack performance modeling and design space exploration framework. First, we introduce CrossFlow, a novel framework that enables cross-layer analysis all the way from the technology layer to the algorithmic layer. Next, we introduce DeepFlow (built on top of CrossFlow using machine learning techniques) to automate design space exploration and co-optimization across different layers of the stack. We have validated CrossFlow's accuracy with distributed training on real commercial hardware and showcase several DeepFlow case studies demonstrating the pitfalls of not optimizing across the technology-hardware-software stack for what is likely the most important workload driving large development investments across the computing stack.
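    To give a flavor of what cross-layer modeling involves, the sketch below pairs a roofline-style step-time estimate (the technology/hardware layers) with a brute-force sweep over hardware design points (a minimal design space exploration). The model and all numbers are illustrative assumptions, orders of magnitude simpler than CrossFlow and DeepFlow.

    ```python
    from itertools import product

    def step_time(flops, bytes_moved, peak_tflops, bw_gbs):
        """Roofline estimate: a kernel is either compute- or bandwidth-bound."""
        compute_s = flops / (peak_tflops * 1e12)
        memory_s = bytes_moved / (bw_gbs * 1e9)
        return max(compute_s, memory_s)

    # Illustrative workload: one training step's compute and memory traffic
    flops, bytes_moved = 5e12, 8e10

    best = None
    for peak, bw in product([100, 200, 400], [1000, 2000, 3000]):  # TFLOP/s, GB/s
        t = step_time(flops, bytes_moved, peak, bw)
        util = flops / (peak * 1e12) / t  # achieved fraction of peak compute
        if best is None or t < best[0]:
            best = (t, peak, bw, util)

    t, peak, bw, util = best
    # With these numbers the 400 TFLOP/s part is no faster (memory-bound),
    # so the balanced 200 TFLOP/s design wins on utilization: co-design matters.
    print(f"best point: {peak} TFLOP/s, {bw} GB/s -> "
          f"{t * 1e3:.1f} ms/step at {util:.0%} compute utilization")
    ```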
  4. Year after year, computing systems continue to grow in complexity at an exponential rate. While this can have far-ranging positive impacts on society, it has become extremely difficult to ensure the security of these systems in the field. Hardware security, in conjunction with more traditional cybersecurity topics like software and network security, is critical for designing secure systems. Moving forward, hardware security education must ensure the next generation of engineers have the knowledge and tools to address this growing challenge. A good foundation in hardware security draws on concepts from several different fields, including fundamental hardware design principles, signal processing and statistics, and even machine learning for modeling complex physical processes. It can be difficult to convey this material in a manageable way, even to advanced undergraduate students. In this paper, we describe how we have leveraged Python, its rich ecosystem of open-source libraries, and scaffolding with Jupyter notebooks to bridge the gap between theory and implementation of hardware security topics, helping students learn through experience.
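    As an example of the kind of notebook exercise this approach enables (our illustration, not necessarily one from the paper's curriculum), the snippet below uses only NumPy to simulate Hamming-weight power leakage from a device XORing plaintexts with a secret key byte, then recovers the key by correlation power analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    hw = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

    secret_key = 0x5A
    plaintexts = rng.integers(0, 256, size=2000)
    # Simulated traces: leakage proportional to HW(plaintext XOR key), plus noise
    traces = hw[plaintexts ^ secret_key] + rng.normal(0.0, 1.0, size=2000)

    # Correlation power analysis: the key guess whose predicted leakage best
    # correlates with the measured traces is almost surely the secret key.
    scores = [np.corrcoef(hw[plaintexts ^ k], traces)[0, 1] for k in range(256)]
    print(f"recovered key byte: {int(np.argmax(scores)):#04x}")  # expect 0x5a
    ```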
  5. Graph neural networks (GNNs) are the primary tool for processing graph-structured data. Unfortunately, the most commonly used GNNs, called Message Passing Neural Networks (MPNNs), suffer from several fundamental limitations. To overcome these limitations, recent works have adapted the idea of positional encodings to graph data. This paper draws inspiration from the recent success of Laplacian-based positional encodings and defines a novel family of positional encoding schemes for graphs. We accomplish this by generalizing the optimization problem that defines the Laplace embedding to more general dissimilarity functions rather than the 2-norm used in the original formulation. This family of positional encodings is then instantiated by considering p-norms. We discuss a method for calculating these positional encoding schemes, implement it in PyTorch, and demonstrate how the resulting positional encodings capture different properties of the graph. Furthermore, we demonstrate that this novel family of positional encodings can improve the expressive power of MPNNs. Lastly, we present preliminary experimental results.
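    For context, the classical Laplacian positional encoding that this family generalizes (the p = 2 case) takes only a few lines of PyTorch: embed each node using the eigenvectors of the smallest non-trivial eigenvalues of the normalized graph Laplacian. The helper below is our sketch; the paper's p-norm variants replace the quadratic objective that these eigenvectors minimize.

    ```python
    import torch

    def laplacian_pe(adj, k=2):
        """k-dim positional encodings: low-frequency eigenvectors of the
        symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
        deg = adj.sum(dim=1)
        d_inv_sqrt = torch.where(deg > 0, deg.rsqrt(), torch.zeros_like(deg))
        lap = torch.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        _, vecs = torch.linalg.eigh(lap)  # eigenvalues in ascending order
        return vecs[:, 1:k + 1]           # drop the trivial constant mode

    # Usage: a 6-cycle; the 2D encoding places the nodes on a circle
    adj = torch.zeros(6, 6)
    for i in range(6):
        adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1.0
    print(laplacian_pe(adj, k=2))
    ```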