

Title: Context-aware deep learning enables high-efficacy localization of high concentration microbubbles for super-resolution ultrasound localization microscopy
Abstract

Ultrasound localization microscopy (ULM) enables deep tissue microvascular imaging by localizing and tracking intravenously injected microbubbles circulating in the bloodstream. However, conventional localization techniques require spatially isolated microbubbles, resulting in prolonged imaging time to obtain detailed microvascular maps. Here, we introduce LOcalization with Context Awareness (LOCA)-ULM, a deep learning-based microbubble simulation and localization pipeline designed to enhance localization performance at high microbubble concentrations. In silico, LOCA-ULM enhanced microbubble detection accuracy to 97.8% and reduced the missing rate to 23.8%, outperforming conventional and deep learning-based localization methods by up to 17.4% in accuracy and 37.6% in missing rate reduction. In in vivo rat brain imaging, LOCA-ULM revealed dense cerebrovascular networks and spatially adjacent microvessels undetected by conventional ULM. We further demonstrate the superior localization performance of LOCA-ULM in functional ULM (fULM), where LOCA-ULM significantly increased the sensitivity of fULM to hemodynamic responses evoked by whisker stimulation in the rat brain.
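The conventional localization baseline that LOCA-ULM improves upon, which requires spatially isolated microbubbles, can be sketched as local-maximum detection followed by intensity-weighted centroid refinement. The snippet below is an illustrative, simplified version of that baseline (not the LOCA-ULM network); the window size and threshold are arbitrary assumptions:

```python
import numpy as np

def localize_peaks(frame, threshold, win=2):
    """Naive microbubble localization: find local maxima above a
    threshold, then refine each to sub-pixel precision with an
    intensity-weighted centroid over a small window. This mimics
    conventional isolated-bubble localization, not LOCA-ULM."""
    h, w = frame.shape
    peaks = []
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = frame[y - win:y + win + 1, x - win:x + win + 1]
            if frame[y, x] >= threshold and frame[y, x] == patch.max():
                ys, xs = np.mgrid[y - win:y + win + 1, x - win:x + win + 1]
                wsum = patch.sum()
                peaks.append((float((ys * patch).sum() / wsum),
                              float((xs * patch).sum() / wsum)))
    return peaks
```

Because the scheme keeps only isolated local maxima, overlapping point spread functions at high concentrations merge into a single (mislocalized) detection, which is exactly the failure mode the paper targets.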

 
NSF-PAR ID: 10499014
Publisher / Repository: Nature Publishing Group
Journal Name: Nature Communications
Volume: 15
Issue: 1
ISSN: 2041-1723
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose

    To develop an improved k‐space reconstruction method using scan‐specific deep learning that is trained on autocalibration signal (ACS) data.

    Theory

    Robust artificial‐neural‐networks for k‐space interpolation (RAKI) reconstruction trains convolutional neural networks on ACS data. This enables nonlinear estimation of missing k‐space lines from acquired k‐space data with improved noise resilience, in contrast to conventional linear interpolation methods such as GRAPPA, which rely on linear convolutional kernels.

    Methods

    The training algorithm minimizes a mean square error loss over the target points in the ACS region via gradient descent. The neural network contains 3 layers of convolutional operators, 2 of which include nonlinear activation functions. The noise performance and reconstruction quality of the RAKI method were compared with GRAPPA in phantom imaging, as well as in neurological and cardiac in vivo data sets.
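The training recipe described above (MSE loss over ACS target points, gradient descent, three layers with nonlinear activations on the first two) can be illustrated with a toy stand-in. Actual RAKI uses convolutional operators on complex-valued k-space, so everything below — the cosine "signal", the fully connected layers, the widths and learning rate — is an assumption for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ACS": learn to estimate a missing sample from four acquired
# neighbors (a real-valued 1D stand-in for k-space interpolation).
signal = np.cos(np.linspace(0, 8 * np.pi, 400))
idx = np.arange(2, 398)
X = np.stack([signal[idx - 2], signal[idx - 1],
              signal[idx + 1], signal[idx + 2]], axis=1)
y = signal[idx]

# Three layers, ReLU on the first two, trained with plain gradient
# descent on an MSE loss -- the recipe the RAKI description gives,
# minus the convolutional structure and complex-valued channels.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 16)); b2 = np.zeros(16)
W3 = rng.normal(0, 0.5, (16, 1)); b3 = np.zeros(1)

def forward(X):
    h1 = np.maximum(X @ W1 + b1, 0.0)
    h2 = np.maximum(h1 @ W2 + b2, 0.0)
    return h1, h2, (h2 @ W3 + b3).ravel()

lr, N = 0.1, len(y)
_, _, p0 = forward(X)
loss0 = np.mean((p0 - y) ** 2)            # loss before training
for _ in range(2000):
    h1, h2, pred = forward(X)
    d = ((pred - y) / N)[:, None]         # grad of 0.5*MSE w.r.t. pred
    gW3, gb3 = h2.T @ d, d.sum(0)
    dh2 = (d @ W3.T) * (h2 > 0)
    gW2, gb2 = h1.T @ dh2, dh2.sum(0)
    dh1 = (dh2 @ W2.T) * (h1 > 0)
    gW1, gb1 = X.T @ dh1, dh1.sum(0)
    W3 -= lr * gW3; b3 -= lr * gb3
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, _, p1 = forward(X)
loss1 = np.mean((p1 - y) ** 2)            # loss after training
```

The key property this preserves is scan-specificity: the network is fit only to the "ACS" of the scan at hand, so no external training database is needed.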

    Results

    Phantom imaging shows that the proposed RAKI method outperforms GRAPPA at high (≥4) acceleration rates, both visually and quantitatively. Quantitative cardiac imaging shows improved noise resilience over GRAPPA at high acceleration rates (23% at rate 4 and 48% at rate 5). The same trend of improved noise resilience is also observed in high‐resolution brain imaging at high acceleration rates.

    Conclusion

    The RAKI method offers a training database‐free deep learning approach for MRI reconstruction, with the potential to improve many existing reconstruction approaches, and is compatible with conventional data acquisition protocols.

     
  2. We report the potential of nanometer-sized contrast agents called gas vesicles (GVs) for super-resolution ultrasound (SRUS) imaging of vasculature deep inside tissue. To this end, we developed the GVs and an ultrasound localization microscopy (ULM) pipeline based on singular value decomposition and 2D cross-correlation techniques, and performed SRUS imaging of a vessel-mimicking phantom with the GVs. These results demonstrate that GVs could serve as novel nanoscale contrast agents for SRUS imaging, indicating that ULM with GVs could enable better visualization of microvasculature in vivo. 
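The singular-value-decomposition step mentioned above is commonly implemented as a clutter filter: the frame stack is reshaped into a Casorati matrix and the largest singular components (slow-moving tissue) are discarded, leaving the moving contrast agents. The sketch below is a minimal version of that idea, with the cutoff `n_cut` chosen by hand rather than by any criterion from the paper:

```python
import numpy as np

def svd_clutter_filter(frames, n_cut=1):
    """SVD-based clutter filtering: reshape the frame stack (time x
    height x width) into a Casorati matrix (pixels x time), zero out
    the largest singular components (static/slow tissue), and keep
    the rest (moving scatterers such as GVs or microbubbles)."""
    nt, h, w = frames.shape
    C = frames.reshape(nt, h * w).T                 # pixels x time
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    s_f = s.copy()
    s_f[:n_cut] = 0.0                               # drop clutter subspace
    return (U @ np.diag(s_f) @ Vt).T.reshape(nt, h, w)
```

In practice the cutoff is often selected adaptively from the singular-value spectrum; a fixed `n_cut` is the simplest possible choice.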
  3. Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. On constrained aerial edge robots, high-speed processing of camera frames with convolutional neural networks (CNNs) (the frame pipeline) becomes resource-limited, and adding more compute resources eventually caps throughput at the camera's frame rate, since frame-only systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras paired with spiking neural networks (SNNs) form an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal details of the scene at high speed, but lags in accuracy. In this work, we propose a target localization system that combines event-camera and SNN-based high-speed target estimation with frame-based camera and CNN-driven reliable object detection, fusing the complementary spatio-temporal strengths of the event and frame pipelines. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses the vestibular sensors with vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments, and is then demonstrated in a real-world multi-drone set-up with emulated event data. Subsequently, we use recorded sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show the desired behavior while tolerating realistic noise in the vision and IMU sensors.
We analyze the design space to identify optimal parameters for the spiking neurons and CNN models, and check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities inspired by discoveries in neuroscience to break fundamental trade-offs in frame-based computer vision. 
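The ego-motion cancellation idea above — an inhibitory self-motion signal subtracted from excitatory visual input inside a spiking neuron — can be sketched with a single leaky integrate-and-fire unit. The time constant, threshold, and scalar input encodings below are illustrative assumptions, not the paper's SNN filter:

```python
def lif_egomotion_filter(vision_events, ego_events, tau=0.9, v_th=1.0):
    """Toy leaky integrate-and-fire (LIF) unit: visual event activity
    is excitatory, the IMU-predicted ego-motion signal is inhibitory.
    Activity explained by self-motion cancels out, so only
    target-driven input accumulates enough to cross threshold."""
    v = 0.0
    spikes = []
    for exc, inh in zip(vision_events, ego_events):
        v = tau * v + exc - inh          # leak + excitation - inhibition
        if v >= v_th:
            spikes.append(1)
            v = 0.0                      # spike and reset
        else:
            spikes.append(0)
    return spikes
```

When the visual input is fully explained by self-motion (excitation equals inhibition), the membrane potential never rises and the unit stays silent; a sustained excess (a moving target) produces spikes.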
  4. Localization is one form of cooperative spectrum sensing that lets multiple sensors work together to estimate the location of a target transmitter. However, the requisite exchange of spectrum measurements leads to exposure of the physical location of participating sensors. Furthermore, in some cases, a compromised participant can reveal the sensitive characteristics of all participants. Accordingly, a lack of sufficient guarantees about data handling discourages such devices from working together. In this paper, we provide the missing data protections by processing spectrum measurements within attestable containers or enclaves. Enclaves provide runtime memory integrity and confidentiality using hardware extensions and have been used to secure various applications [1]–[8]. We use these enclave features as building blocks for new privacy-preserving particle filter protocols that minimize disruption of the spectrum sensing ecosystem. We then instantiate this enclave using ARM TrustZone and Intel SGX, and we show that enclave-based particle filter protocols incur minimal overhead (adding 16 milliseconds of processing to the measurement processing function when using SGX versus unprotected computation) and can be deployed on resource-constrained platforms that support TrustZone (incurring only a 1.01x increase in processing time when doubling particle count from 10,000 to 20,000), whereas cryptographically-based approaches suffer from multiple orders of magnitude higher costs. We effectively deploy enclaves in a distributed environment, dramatically improving current data handling techniques. To the best of our knowledge, this is the first work to demonstrate privacy-preserving localization in a multi-party environment with reasonable overhead. 
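The computation the enclaves protect — a particle filter localizing a transmitter from spectrum measurements — can be sketched as a weight-resample loop. The log-distance path-loss model, noise level, and resampling jitter below are assumptions for illustration; the abstract does not specify the measurement model:

```python
import numpy as np

rng = np.random.default_rng(1)

def rss(tx, sensor, p0=-30.0, n=2.0):
    """Assumed log-distance path-loss model (received power in dBm)."""
    d = np.linalg.norm(tx - sensor, axis=-1) + 1e-9
    return p0 - 10.0 * n * np.log10(d)

def pf_update(particles, sensor, meas, sigma=2.0):
    """One particle-filter measurement update: weight each candidate
    transmitter location by the Gaussian likelihood of the observed
    RSS at this sensor, resample, and add jitter for diversity."""
    w = np.exp(-0.5 * ((rss(particles, sensor) - meas) / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0, 0.5, particles.shape)

# Hypothetical deployment: 5 sensors on a 100 m x 100 m field.
true_tx = np.array([30.0, 40.0])
sensors = np.array([[0, 0], [0, 100], [100, 0], [100, 100], [50, 50]], float)
particles = rng.uniform(0, 100, (5000, 2))
for _ in range(3):                      # a few passes over all sensors
    for s in sensors:
        particles = pf_update(particles, s, rss(true_tx, s))
estimate = particles.mean(axis=0)
```

In the paper's setting this loop runs inside the enclave, so raw sensor measurements and positions never leave protected memory.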
  5. Abstract

    Blind source separation (BSS) is commonly used in functional magnetic resonance imaging (fMRI) data analysis. Recently, BSS models based on the restricted Boltzmann machine (RBM), one of the building blocks of deep learning models, have been shown to improve brain network identification compared to conventional single matrix factorization models such as independent component analysis (ICA). These models, however, trained the RBM on fMRI volumes, and are hence challenged by model complexity and a limited training set. In this article, we propose to apply the RBM to fMRI time courses instead of volumes for BSS. The proposed method not only interprets fMRI time courses explicitly to take advantage of deep learning models in latent feature learning but also substantially reduces model complexity and increases the scale of the training set to improve training efficiency. Our experimental results based on Human Connectome Project (HCP) datasets demonstrated the superiority of the proposed method over ICA and over applying the RBM to fMRI volumes in identifying task‐related components, resulting in more accurate and specific representations of task‐related activations. Moreover, our method separated out components representing intermixed effects between task events, which could reflect inherent interactions among functionally connected brain regions. Our study demonstrates the value of the RBM in mining complex structures embedded in large‐scale fMRI data and its potential as a building block for deeper models in fMRI data analysis.
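The building block referenced above, a restricted Boltzmann machine, is typically trained with one-step contrastive divergence (CD-1). The sketch below trains a Bernoulli RBM on toy binary "time courses"; the two synthetic patterns, layer sizes, and learning rate are illustrative assumptions, not the study's HCP setup:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, a, b, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update of a Bernoulli RBM:
    positive phase on the data, one Gibbs step for the negative phase."""
    ph0 = sigmoid(v0 @ W + b)                        # hidden given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sample hidden
    pv1 = sigmoid(h0 @ W.T + a)                      # reconstruction
    ph1 = sigmoid(pv1 @ W + b)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

def recon_error(W, a, b, V):
    """Mean squared error of a one-pass reconstruction."""
    ph = sigmoid(V @ W + b)
    pv = sigmoid(ph @ W.T + a)
    return float(np.mean((V - pv) ** 2))

# Toy "time courses": two repeating binary activation patterns.
V = np.vstack([np.tile([1, 1, 0, 0], (100, 1)),
               np.tile([0, 0, 1, 1], (100, 1))]).astype(float)
W = rng.normal(0, 0.1, (4, 8)); a = np.zeros(4); b = np.zeros(8)
e0 = recon_error(W, a, b, V)
for _ in range(500):
    cd1_step(W, a, b, V)
e1 = recon_error(W, a, b, V)
```

Applying the RBM to time courses rather than volumes, as the article proposes, shrinks the visible layer from voxel count to time-point count, which is what reduces model complexity and enlarges the effective training set.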

     