

Search for: All records

Creators/Authors contains: "An, Hongyu"


  1. Abstract

    INTRODUCTION

    Studies of vascular damage in Alzheimer's disease (AD) have reported conflicting findings, particularly when analyzing longitudinal data. We introduce white matter hyperintensity (WMH) longitudinal morphometric analysis (WLMA), which quantifies WMH expansion as the distance from lesion voxels to a region-of-interest boundary.

    METHODS

    WMH segmentation maps were derived from 270 longitudinal fluid-attenuated inversion recovery (FLAIR) ADNI images. WLMA was performed on five data-driven WMH patterns with distinct spatial distributions. Amyloid accumulation was evaluated in relation to WMH expansion across the five WMH patterns.

    RESULTS

    The preclinical group had significantly greater expansion in the posterior ventricular WM compared to controls. Amyloid was significantly associated with frontal WMH expansion, primarily within AD individuals. WLMA outperformed WMH volume change for classifying AD from controls, primarily in periventricular and posterior WMH.

    DISCUSSION

    These data support the concept that localized WMH expansion continues alongside amyloid accumulation throughout the course of the disease, in distinct spatial locations.

     
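    As a rough illustration of the WLMA measurement described above (distance from lesion voxels to a region-of-interest boundary), the following Python sketch uses a Euclidean distance transform. The function name, array layout, and 1 mm voxel spacing are assumptions made for illustration, not details taken from the paper.

import numpy as np
from scipy import ndimage

def wmh_distances_to_roi_boundary(wmh_mask, roi_mask, voxel_size_mm=1.0):
    """Distance (mm) from every WMH lesion voxel to the boundary of an ROI."""
    roi = roi_mask.astype(bool)
    # Inside the ROI: distance to the nearest non-ROI voxel (i.e., the boundary).
    dist_inside = ndimage.distance_transform_edt(roi) * voxel_size_mm
    # Outside the ROI: distance to the nearest ROI voxel (also the boundary).
    dist_outside = ndimage.distance_transform_edt(~roi) * voxel_size_mm
    dist_to_boundary = np.where(roi, dist_inside, dist_outside)
    return dist_to_boundary[wmh_mask.astype(bool)]

# Longitudinal use: compare the distance distributions of baseline and follow-up
# WMH maps to quantify how far lesions have expanded relative to the ROI boundary.
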
  2. Parallel magnetic resonance imaging (MRI) is a widely used technique that accelerates data collection by exploiting the spatial encoding provided by multiple receiver coils. A key issue in parallel MRI is the estimation of the coil sensitivity maps (CSMs) used to reconstruct a single high-quality image. This paper addresses the issue by developing SS-JIRCS, a new self-supervised model-based deep-learning (DL) method for image reconstruction that is equipped with automated CSM calibration. Our deep network consists of three types of modules: data consistency, regularization, and CSM calibration. Unlike traditional supervised DL methods, these modules are trained directly on undersampled and noisy k-space data rather than on fully sampled, high-quality ground truth. We present empirical results on simulated data showing that the proposed method can achieve better performance than several baseline methods.
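
    The following Python sketch illustrates the model-based structure the abstract describes: a SENSE-style forward operator built from coil sensitivity maps, a data-consistency gradient step, and comments marking where the learned regularization and CSM-calibration modules would sit. It is a generic formulation assumed for illustration, not the authors' SS-JIRCS code.

import numpy as np

def forward(image, csms, mask):
    """A x: image -> coil-weighted, undersampled k-space (one 2D slice)."""
    coil_images = csms * image[None, ...]               # (C, H, W)
    kspace = np.fft.fft2(coil_images, norm="ortho")
    return kspace * mask[None, ...]

def adjoint(kspace, csms, mask):
    """A^H y: undersampled k-space -> coil-combined image."""
    coil_images = np.fft.ifft2(kspace * mask[None, ...], norm="ortho")
    return np.sum(np.conj(csms) * coil_images, axis=0)

def data_consistency_step(image, measured_kspace, csms, mask, step=1.0):
    """One gradient step on ||A x - y||^2: the data-consistency module."""
    residual = forward(image, csms, mask) - measured_kspace
    return image - step * adjoint(residual, csms, mask)

# In the paper's network, steps like this would alternate with a learned
# regularization module (e.g., a CNN denoiser) and a CSM-calibration module that
# refines `csms`, all trained self-supervised on undersampled, noisy k-space.
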
  3. Deep Neural Networks (DNNs), a brain-inspired learning methodology, require tremendous amounts of data for training before performing inference tasks. Recent studies demonstrate a strong positive correlation between inference accuracy and the size of DNNs and datasets, which leads to an inevitable demand for large DNNs. However, conventional memory techniques are not adequate to deal with the drastic growth of dataset and neural network sizes. Recently, the resistive memristor has been widely considered a next-generation memory device owing to its high density and low power consumption. Nevertheless, its high cycle-to-cycle switching resistance variation restricts its feasibility in deep learning. In this work, a novel memristor configuration with an enhanced heat dissipation feature is fabricated and evaluated to address this challenge. Our experimental results demonstrate that the proposed memristor reduces the resistance variation by 30%, and the inference accuracy increases correspondingly by a similar amount. The accuracy increment is evaluated with our Deep Delay-feed-back (Deep-DFR) reservoir computing model. The design area, power consumption, and latency are reduced by 48%, 42%, and 67%, respectively, compared to the conventional SRAM memory technique (6T). The performance of our memristor is improved to various degrees (13%-73%) compared to state-of-the-art memristors.
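
    To make the role of cycle-to-cycle resistance variation concrete, here is a small, self-contained Python experiment that perturbs the weights of a toy classifier with multiplicative noise and reports the accuracy drop. The synthetic data, the nearest-mean classifier, and the noise model are assumptions made for this sketch, not the Deep-DFR evaluation from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-class data and a simple "trained" linear (nearest-mean) classifier.
X = rng.normal(size=(3000, 16)) + np.repeat(np.eye(3), 1000, axis=0) @ rng.normal(size=(3, 16)) * 3
y = np.repeat(np.arange(3), 1000)
W = np.stack([X[y == c].mean(axis=0) for c in range(3)])   # class-mean weight vectors

def accuracy(weights):
    return np.mean(np.argmax(X @ weights.T, axis=1) == y)

for rel_std in [0.0, 0.1, 0.3, 0.5]:
    # Cycle-to-cycle variation modeled as multiplicative Gaussian noise on each
    # stored conductance/weight; averaged over several noise draws.
    accs = [accuracy(W * (1 + rel_std * rng.normal(size=W.shape))) for _ in range(20)]
    print(f"relative variation {rel_std:.1f}: accuracy {np.mean(accs):.3f}")
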
  4. Associative memory is a widespread self-learning mechanism in biological organisms that enables the nervous system to remember the relationship between two concurrent events. Rebuilding associative memory at the behavioral level not only reveals a way of designing a brain-like self-learning neuromorphic system but also offers a means of understanding the learning mechanism of the nervous system. In this paper, associative memory learning at the behavioral level is realized that successfully associates concurrent visual and auditory information (the pronunciation and image of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) together rather than relating multiple analog signals, so that the information carried and preprocessed by these ANNs can be associated. A neuron, named the signal intensity encoding neuron (SIEN), has been designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. The spiking signals are then correlated with an associative neural network implemented on a three-dimensional (3-D) memristor array. Furthermore, the selector devices of traditional memristor cells, which limit the design area, are avoided by our novel memristor weight-updating scheme. With the novel SIENs, the 3-D memristive synapse, and the proposed memristor weight-updating scheme, simulation results demonstrate that the proposed associative memory learning method and its circuit implementation successfully associate the pronunciation and image of digits, mimicking human-like associative memory learning behavior.
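
    A minimal Python sketch of the signal-intensity-encoding idea described above: a scalar ANN output is mapped to a spike train whose amplitude and firing rate both scale with the value. The value range, time base, and linear mappings are illustrative assumptions, not the SIEN circuit from the paper.

import numpy as np

def encode_spikes(value, duration_ms=100.0, max_rate_hz=200.0, max_amplitude=1.0):
    """Map a value in [0, 1] to (spike times in ms, spike amplitudes)."""
    value = float(np.clip(value, 0.0, 1.0))
    rate_hz = value * max_rate_hz        # firing frequency encodes intensity
    amplitude = value * max_amplitude    # spike magnitude encodes intensity
    n_spikes = int(rate_hz * duration_ms / 1000.0)
    times_ms = np.linspace(0.0, duration_ms, n_spikes, endpoint=False)
    return times_ms, np.full(times_ms.shape, amplitude)

# Two such spike trains (e.g., from the image ANN and the audio ANN) arriving
# together at a memristive synapse would then drive the weight update that
# implements the association step described in the abstract.
times, amps = encode_spikes(0.8)
print(len(times), amps[:3])
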
  5. To accelerate the training efficiency of neural-network-based machine learning, a memristor-based nonlinear computing module is designed and analyzed. Nonlinear computing operations are widely needed in neuromorphic computing and deep learning. The proposed module can realize a monotonic nonlinear function by placing memristors successively in series, combined with a simple amplifier. The module is evaluated and optimized with a Long Short-Term Memory (LSTM) network on a digit-recognition application. The proposed nonlinear computing module can reduce the chip area from microscale to nanoscale and potentially enhance computing efficiency to O(1) while preserving accuracy. Furthermore, the impact of memristor switching resistance variation on training accuracy is simulated and analyzed using the LSTM as a benchmark.
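
    One way to read the series-memristor idea in software: several clipped linear segments, each standing in for one resistive stage, sum into a monotonic piecewise-linear approximation of a smooth activation. The Python sketch below uses a sigmoid target and evenly spaced breakpoints; both are assumptions made for illustration, not circuit parameters from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def piecewise_linear_approx(x, breakpoints):
    """Sum of clipped linear segments; each segment plays the role of one stage."""
    y = np.full_like(x, sigmoid(breakpoints[0]))
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        slope = (sigmoid(hi) - sigmoid(lo)) / (hi - lo)
        # Contributes 0 below lo, grows linearly on [lo, hi], saturates above hi.
        y += slope * np.clip(x, lo, hi) - slope * lo
    return y

x = np.linspace(-6, 6, 1001)
bp = np.linspace(-6, 6, 7)                                  # 6 segments
err = np.max(np.abs(piecewise_linear_approx(x, bp) - sigmoid(x)))
print(f"max |error| with {len(bp) - 1} segments: {err:.4f}")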