Search for: All records

Creators/Authors contains: "An, Hongyu"

  1. Deformable image registration (DIR) is an active research topic in biomedical imaging. There is growing interest in developing DIR methods based on deep learning (DL). A traditional DL approach to DIR trains a convolutional neural network (CNN) to estimate the registration field between two input images. While conceptually simple, this approach has the limitation that it relies exclusively on a pre-trained CNN without explicitly enforcing fidelity between the registered image and the reference. We present the plug-and-play image registration network (PIRATE), a new DIR method that addresses this issue by integrating an explicit data-fidelity penalty with a CNN prior. PIRATE pre-trains a CNN denoiser on the registration field and "plugs" it into an iterative method as a regularizer. We additionally present PIRATE+, which fine-tunes the CNN prior in PIRATE using deep equilibrium models (DEQ). PIRATE+ interprets the fixed-point iteration of PIRATE as a network with effectively infinite layers and trains the resulting network end-to-end, enabling it to learn more task-specific information and boosting its performance. Our numerical results on the OASIS and CANDI datasets show that our methods achieve state-of-the-art performance in DIR.
    Free, publicly-accessible full text available June 7, 2025
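    The PIRATE method described above alternates a gradient step on an explicit data-fidelity term with a learned denoiser applied to the registration field. Below is a minimal sketch of such a PnP fixed-point iteration; the callables `data_fidelity_grad` and `denoiser`, the step size `gamma`, and the relaxation weight `tau` are placeholders, not the paper's actual components.

    ```python
    import numpy as np

    def pirate_fixed_point(moving, fixed, data_fidelity_grad, denoiser,
                           gamma=0.1, tau=0.5, num_iters=200, tol=1e-4):
        """Sketch of a PnP fixed-point iteration over a 2-D registration field.

        data_fidelity_grad(field, moving, fixed): gradient of an image-similarity
            term with respect to the field (placeholder).
        denoiser(field): CNN denoiser pre-trained on registration fields (placeholder).
        """
        h, w = fixed.shape
        field = np.zeros((2, h, w))                          # displacement field (dy, dx)
        for _ in range(num_iters):
            grad = data_fidelity_grad(field, moving, fixed)
            z = field - gamma * grad                         # gradient step on data fidelity
            new_field = (1.0 - tau) * z + tau * denoiser(z)  # relaxed denoiser step
            if np.linalg.norm(new_field - field) < tol * (1.0 + np.linalg.norm(field)):
                return new_field                             # approximate fixed point reached
            field = new_field
        return field
    ```

    PIRATE+ would additionally differentiate through this fixed point in a DEQ-style end-to-end training loop, which the sketch does not show.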
  2. Introduction

    Parkinson’s disease (PD) is a neurodegenerative disorder affecting millions of patients. Closed-Loop Deep Brain Stimulation (CL-DBS) is a therapy that can alleviate the symptoms of PD. A CL-DBS system consists of an electrode that delivers electrical stimulation to a specific region of the brain and a battery-powered stimulator implanted in the chest. The electrical stimuli in CL-DBS systems need to be adjusted in real time according to the state of PD symptoms. Fast and precise monitoring of PD symptoms is therefore a critical function of CL-DBS systems. However, current CL-DBS techniques impose high computational demands for real-time PD symptom monitoring, which are not feasible for implanted and wearable medical devices.

    Methods

    In this paper, we present an energy-efficient neuromorphic PD symptom detector using memristive three-dimensional integrated circuits (3D-ICs). The excessive oscillation at beta frequencies (13–35 Hz) at the subthalamic nucleus (STN) is used as a biomarker of PD symptoms.

    Results

    Simulation results demonstrate that our neuromorphic PD detector, implemented with an 8-layer spiking Long Short-Term Memory (S-LSTM) network, excels at recognizing PD symptoms, achieving a training accuracy of 99.74% and a validation accuracy of 99.52% for a 75%–25% data split. Furthermore, we evaluated the improvements of our neuromorphic CL-DBS detector using NeuroSIM. For monolithic 3D-ICs, the chip area, latency, energy, and power consumption of our CL-DBS detector were reduced by 47.4%, 66.63%, 65.6%, and 67.5%, respectively. Similarly, for heterogeneous 3D-ICs, employing memristive synapses in place of traditional Static Random Access Memory (SRAM) reduced chip area, latency, energy, and power consumption by 44.8%, 64.75%, 65.28%, and 67.7%, respectively.

    Discussion

    This study introduces a novel approach for PD symptom evaluation by directly utilizing spiking signals from neural activities in the time domain. This method significantly reduces the time and energy required for signal conversion compared to traditional frequency domain approaches. The study pioneers the use of neuromorphic computing and memristors in designing CL-DBS systems, surpassing SRAM-based designs in chip design area, latency, and energy efficiency. Lastly, the proposed neuromorphic PD detector demonstrates high resilience to timing variations in brain neural signals, as confirmed by robustness analysis.

     
    Free, publicly-accessible full text available December 15, 2024
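    The detector above operates directly on spiking signals in the time domain; the conventional alternative it is compared against extracts the beta-band biomarker in the frequency domain. A minimal sketch of that conventional biomarker computation is given below, assuming SciPy is available; the sampling rate, window length, and synthetic test signal are illustrative.

    ```python
    import numpy as np
    from scipy.signal import welch

    def beta_band_power(stn_signal, fs, band=(13.0, 35.0)):
        """Estimate beta-band (13-35 Hz) power of an STN recording via Welch's method."""
        nperseg = min(len(stn_signal), 2 * int(fs))
        freqs, psd = welch(stn_signal, fs=fs, nperseg=nperseg)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])          # integrated beta-band power

    # Illustrative use: a noisy 1-second signal with a 20 Hz oscillation.
    fs = 1000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 20.0 * t) + 0.5 * np.random.randn(t.size)
    print(beta_band_power(signal, fs))
    ```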
  3. Plug-and-play (PnP) priors are a well-known class of methods for solving imaging inverse problems by computing fixed points of operators that combine physical measurement models and learned image denoisers. While PnP methods have been extensively used for image recovery with known measurement operators, there is little work on PnP for solving blind inverse problems. We address this gap by presenting a new block-coordinate PnP (BC-PnP) method that efficiently solves this joint estimation problem by introducing learned denoisers as priors on both the unknown image and the unknown measurement operator. We present a new convergence theory for BC-PnP that is compatible with blind inverse problems by considering nonconvex data-fidelity terms and expansive denoisers. Our theory analyzes the convergence of BC-PnP to a stationary point of an implicit function associated with an approximate minimum mean-squared error (MMSE) denoiser. We numerically validate our method on two blind inverse problems: automatic coil sensitivity estimation in magnetic resonance imaging (MRI) and blind image deblurring. Our results show that BC-PnP provides an efficient and principled framework for using denoisers as PnP priors for jointly estimating measurement operators and images.
    Free, publicly-accessible full text available December 10, 2024
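    As a rough illustration of the block-coordinate structure described above, the sketch below alternates denoiser-regularized gradient steps on the image block and the measurement-operator block. All callables, step sizes, and the fixed block ordering are assumptions for illustration rather than the paper's exact algorithm.

    ```python
    import numpy as np

    def bc_pnp(y, x0, theta0, grad_x, grad_theta, denoiser_x, denoiser_theta,
               gamma_x=0.1, gamma_theta=0.1, num_iters=100):
        """Sketch of block-coordinate PnP updates for a blind inverse problem.

        grad_x(x, theta, y), grad_theta(x, theta, y): gradients of a (possibly
            nonconvex) data-fidelity term with respect to each block (placeholders).
        denoiser_x, denoiser_theta: learned priors on the image and the operator.
        """
        x, theta = x0.copy(), theta0.copy()
        for _ in range(num_iters):
            # Block 1: update the unknown image under its denoiser prior.
            x = denoiser_x(x - gamma_x * grad_x(x, theta, y))
            # Block 2: update the unknown measurement operator under its prior.
            theta = denoiser_theta(theta - gamma_theta * grad_theta(x, theta, y))
        return x, theta
    ```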
  4. Deep learning achieves remarkable success through training on massive labeled datasets. However, this heavy reliance on labeled data limits the feasibility of deep learning in edge computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between concurrent events. This learning paradigm is referred to as associative memory. A successful implementation of associative memory could enable self-learning schemes analogous to those of animals and thereby address these challenges of deep learning. State-of-the-art implementations of associative memory are limited to small-scale, offline paradigms. In this work, we therefore implement associative memory learning with an Unmanned Ground Vehicle (UGV) and neuromorphic chips (Intel Loihi) in an online learning scenario. Our system reproduces classic associative memory experiments in rats. Specifically, it reproduces fear conditioning with no pretraining procedure and no labeled datasets. In our experiments, the UGV serves as a substitute for the rats: it autonomously memorizes the cause-and-effect relationship between the light stimulus and the vibration stimulus, and then exhibits a movement response. During associative memory learning, the synaptic weights are updated by Hebbian learning. The Intel Loihi chip is integrated into our online learning system to process visual signals. Its average power usage for computing logic and memory is 30 mW and 29 mW, respectively.
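    The associative update in the entry above is Hebbian: a synapse is strengthened when its pre- and post-synaptic neurons fire together. The toy sketch below pairs a light-driven neuron with a vibration-driven response neuron until the light alone can exceed the firing threshold; the learning rate, threshold, and trial count are invented for illustration.

    ```python
    import numpy as np

    def hebbian_update(w, pre, post, lr=0.05, w_max=1.0):
        """Basic Hebbian rule: strengthen the weight when the pre- and post-synaptic
        neurons are active together (pre and post are 0/1 spike indicators)."""
        return float(np.clip(w + lr * pre * post, 0.0, w_max))

    w_light_to_response = 0.0     # synapse from the light (neutral stimulus) neuron
    threshold = 0.5               # firing threshold of the response neuron

    # Pairing phase: the light repeatedly co-occurs with the vibration-driven response.
    for trial in range(20):
        light_active = 1
        response_active = 1       # unconditioned response triggered by the vibration
        w_light_to_response = hebbian_update(w_light_to_response,
                                             light_active, response_active)

    # After enough pairings the light alone can drive the movement response.
    print(w_light_to_response > threshold)   # True
    ```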
  5. Fear conditioning is a behavioral paradigm of learning to predict aversive events. It is a form of associative learning in which an aversive stimulus (e.g., an electrical shock) is associated with a neutral stimulus (e.g., a tone), resulting in a fear response (such as running away) to the originally neutral stimulus. The association of concurrent events is implemented by strengthening the synaptic connection between the neurons involved. In this paper, using an analogous methodology, we reproduce the classic fear conditioning experiment of rats with a mobile robot and a neuromorphic system. In our design, the acceleration from a vibration platform substitutes for the aversive stimulus in rats, while the brightness of light (dark vs. light) serves as the neutral stimulus, analogous to the neutral tone in fear conditioning experiments with rats. The brightness of the light is processed with sparse coding on the Intel Loihi chip. The simulation and experimental results demonstrate that, for the first time, a neuromorphic mobile robot successfully reproduces the fear conditioning experiment of rats. The work exhibits a potential online learning paradigm that requires no labeled data: the mobile robot memorizes events directly by interacting with its surroundings, which is fundamentally different from data-driven methods.
  6. Abstract INTRODUCTION

    Studies of vascular damage in Alzheimer's disease (AD) have yielded conflicting findings, particularly when analyzing longitudinal data. We introduce white matter hyperintensity (WMH) longitudinal morphometric analysis (WLMA), which quantifies WMH expansion as the distance from lesion voxels to a region-of-interest boundary.

    METHODS

    WMH segmentation maps were derived from 270 longitudinal fluid‐attenuated inversion recovery (FLAIR) ADNI images. WLMA was performed on five data‐driven WMH patterns with distinct spatial distributions. Amyloid accumulation was evaluated with WMH expansion across the five WMH patterns.

    RESULTS

    The preclinical group had significantly greater expansion in the posterior ventricular WM compared with controls. Amyloid was significantly associated with frontal WMH expansion, primarily within AD individuals. WLMA outperformed WMH volume change for classifying AD versus controls, primarily in periventricular and posterior WMH.

    DISCUSSION

    These data support the concept that localized WMH expansion continues to proliferate with amyloid accumulation throughout the entirety of the disease in distinct spatial locations.

     
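    The core geometric measure in WLMA is the distance from each lesion voxel to a region-of-interest boundary. A minimal sketch of that measure using a Euclidean distance transform is shown below; the mask names, voxel spacing, and the suggestion to compare baseline and follow-up distance distributions are assumptions, and the data-driven WMH patterns are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def wmh_expansion_distances(lesion_mask, roi_mask, voxel_spacing=(1.0, 1.0, 1.0)):
        """Distances (in mm) from WMH lesion voxels to the boundary of a region of
        interest (e.g., the ventricles). Both masks are boolean 3-D arrays on the
        same grid (assumed inputs)."""
        # For voxels outside the ROI, this is the distance to the nearest ROI voxel,
        # i.e., to the ROI boundary; voxels inside the ROI get a distance of zero.
        dist_to_roi = distance_transform_edt(~roi_mask, sampling=voxel_spacing)
        return dist_to_roi[lesion_mask]

    # Longitudinal use (illustrative): compare the distance distributions of the
    # baseline and follow-up lesion masks to summarize outward WMH expansion.
    ```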
  7. Parallel magnetic resonance imaging (MRI) is a widely used technique that accelerates data collection by exploiting the spatial encoding provided by multiple receiver coils. A key issue in parallel MRI is the estimation of the coil sensitivity maps (CSMs) used to reconstruct a single high-quality image. This paper addresses this issue by developing SS-JIRCS, a new self-supervised model-based deep-learning (DL) method for image reconstruction that is equipped with automated CSM calibration. Our deep network consists of three types of modules: data consistency, regularization, and CSM calibration. Unlike traditional supervised DL methods, these modules are trained directly on undersampled and noisy k-space data rather than on fully sampled high-quality ground truth. We present empirical results on simulated data that show the potential of the proposed method to achieve better performance than several baseline methods.
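    The entry above names three module types: data consistency, regularization, and CSM calibration. The sketch below shows how one unrolled iteration with those three modules might be organized for Cartesian multi-coil MRI; the learned modules are placeholder callables, the step size is illustrative, and the self-supervised training loss is not shown.

    ```python
    import numpy as np

    def unrolled_iteration(x, csm, y, mask, regularizer, csm_calibrator, step=0.5):
        """One unrolled step with data-consistency, regularization, and
        CSM-calibration modules (placeholders for the learned parts).

        x: image estimate (H, W); csm: coil sensitivity maps (coils, H, W);
        y: measured multi-coil k-space (coils, H, W); mask: sampling mask (H, W).
        """
        # Data-consistency module: gradient step on || mask * F(csm * x) - y ||^2.
        kspace = np.fft.fft2(csm * x, norm="ortho")
        residual = mask * kspace - y
        grad = np.sum(np.conj(csm) * np.fft.ifft2(mask * residual, norm="ortho"), axis=0)
        x = x - step * grad
        # Regularization module: learned prior applied to the image (placeholder).
        x = regularizer(x)
        # CSM-calibration module: refine the sensitivity maps from the data (placeholder).
        csm = csm_calibrator(csm, y, mask)
        return x, csm
    ```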
  8. Deep Neural Networks (DNNs), a brain-inspired learning methodology, require tremendous amounts of data for training before performing inference tasks. Recent studies demonstrate a strong positive correlation between inference accuracy and the size of DNNs and datasets, which leads to an inevitable demand for large DNNs. However, conventional memory technologies are not adequate to deal with the drastic growth of dataset and neural network sizes. Recently, the resistive memristor has been widely considered as a next-generation memory device owing to its high density and low power consumption. Nevertheless, its high cycle-to-cycle switching resistance variation restricts its feasibility in deep learning. In this work, a novel memristor configuration with an enhanced heat dissipation feature is fabricated and evaluated to address this challenge. Our experimental results demonstrate that our memristor reduces the resistance variation by 30%, and the inference accuracy increases correspondingly by a similar amount. The accuracy improvement is evaluated with our Deep Delay-feed-back (Deep-DFR) reservoir computing model. The design area, power consumption, and latency are reduced by 48%, 42%, and 67%, respectively, compared to the conventional 6T SRAM memory technology. The performance of our memristor is improved to various degrees (13%-73%) compared to state-of-the-art memristors.
  9. Associative memory is a widespread self-learning mechanism in living organisms that enables the nervous system to remember the relationship between two concurrent events. The significance of rebuilding associative memory at the behavioral level is not only to reveal a way of designing a brain-like, self-learning neuromorphic system but also to explore a way of understanding the learning mechanism of the nervous system. In this paper, associative memory learning at the behavioral level is realized that successfully associates concurrent visual and auditory information (the pronunciation and image of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) rather than relating multiple analog signals, so that the information carried and preprocessed by these ANNs can be associated. A neuron, named the signal intensity encoding neuron (SIEN), is designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. The spiking signals are then correlated by an associative neural network implemented with a three-dimensional (3-D) memristor array. Furthermore, our novel memristor weight-updating scheme avoids the selector devices that limit the design area in traditional memristor cells. With the novel SIENs, the 3-D memristive synapses, and the proposed memristor weight-updating scheme, simulation results demonstrate that our associative memory learning method and the corresponding circuit implementation successfully associate the pronunciation and image of digits, mimicking human-like associative memory learning behavior.
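    The SIENs described above encode an ANN output into both the magnitude and the frequency of an analog spike train. The sketch below illustrates that idea with a regular spike train whose rate and amplitude scale with a normalized input value; all parameter values and the regular spacing are assumptions, not the circuit's actual behavior.

    ```python
    import numpy as np

    def sien_encode(value, duration=1.0, dt=1e-3,
                    base_rate=5.0, max_rate=100.0, max_amplitude=1.0):
        """Map a normalized ANN output in [0, 1] to a spike train whose firing rate
        and spike amplitude both grow with the input (illustrative parameters)."""
        value = float(np.clip(value, 0.0, 1.0))
        rate = base_rate + value * (max_rate - base_rate)   # spikes per second
        amplitude = value * max_amplitude                   # spike magnitude
        n_steps = int(round(duration / dt))
        train = np.zeros(n_steps)
        spike_times = np.arange(0.0, duration, 1.0 / rate)  # regular spike train
        idx = np.minimum((spike_times / dt).astype(int), n_steps - 1)
        train[idx] = amplitude
        return train

    # Example: a confident digit output (0.9) yields a denser, larger-amplitude
    # spike train than a weak one (0.1).
    print(sien_encode(0.9).sum(), sien_encode(0.1).sum())
    ```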