


Search for: All records

Award ID contains: 1718310

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1.
  2.
    Neuroimaging data typically undergoes several preprocessing steps before further analysis and mining can be done. Affine image registration is one of the important tasks during preprocessing. Recently, several image registration methods based on Convolutional Neural Networks (CNNs) have been proposed. However, due to the high computational and memory requirements of CNNs, these methods cannot be used in real time on large neuroimaging data such as fMRI. In this paper, we propose a Dual-Attention Recurrent Network (DRN) which uses a hard attention mechanism to allow the model to focus on small, but task-relevant, parts of the input image, thus reducing computational and memory costs. Furthermore, DRN naturally supports inhomogeneity between the raw input image (e.g., functional MRI) and the image we want to align it to (e.g., anatomical MRI), so it can be applied to harder registration tasks such as fMRI coregistration and normalization. Extensive experiments on two different datasets demonstrate that DRN significantly reduces computational and memory costs compared with other neural-network-based methods without sacrificing the quality of image registration.
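The savings from hard attention in the DRN abstract above come from processing only a small cropped "glimpse" of the volume rather than every voxel. The following is a minimal numpy sketch of that idea under assumed toy dimensions; `extract_glimpse`, the volume size, and the glimpse size are illustrative, not the paper's implementation.

```python
import numpy as np

def extract_glimpse(volume, center, size):
    """Crop a small, task-relevant patch ("glimpse") around `center`.

    Hard attention feeds only this patch to the network instead of the
    full volume, which is where the memory/compute savings come from.
    (Illustrative sketch only; boundary handling is simplified.)
    """
    starts = [int(c - s // 2) for c, s in zip(center, size)]
    slices = tuple(slice(max(0, st), st + s) for st, s in zip(starts, size))
    return volume[slices]

# Toy fMRI-like volume: 64^3 voxels.
vol = np.random.rand(64, 64, 64).astype(np.float32)
glimpse = extract_glimpse(vol, center=(32, 32, 32), size=(8, 8, 8))
print(glimpse.shape)            # (8, 8, 8)
print(glimpse.size / vol.size)  # fraction of voxels actually processed
```

Here the model would touch only 512 of the 262,144 voxels per step, about 0.2% of the volume, at the cost of having to learn where to look.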
  3.
  4.
    Attention-based image classification has gained increasing popularity in recent years. State-of-the-art methods for attention-based classification typically require a large training set and operate under the assumption that the label of an image depends solely on a single object (i.e., region of interest) in the image. However, in many real-world applications (e.g., medical imaging), it is very expensive to collect a large training set. Moreover, the label of each image is usually determined jointly by multiple regions of interest (ROIs). Fortunately, for such applications, it is often possible to collect the locations of the ROIs in each training image. In this paper, we study the problem of guided multi-attention classification, the goal of which is to achieve high accuracy under the dual constraints of (1) small sample size and (2) multiple ROIs for each image. We propose a model, called the Guided Attention Recurrent Network (GARN), for multi-attention classification. Unlike existing attention-based methods, GARN utilizes guidance information regarding multiple ROIs, allowing it to work well even when the sample size is small. Empirical studies on three different visual tasks show that our guided attention approach can effectively boost model performance for multi-attention image classification.
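One simple way to use the ROI annotations described in the GARN abstract above is an auxiliary guidance term that penalizes attention locations far from the labeled ROI centers. This is an illustrative sketch of that general idea, not the paper's exact objective; the function name and coordinate convention are assumptions.

```python
import numpy as np

def guided_attention_loss(pred_locs, roi_locs):
    """Mean squared distance between predicted attention locations and
    annotated ROI centers (one predicted location per ROI).

    Added to the classification loss, a term like this supervises the
    attention mechanism directly, which is helpful when the training
    set is too small to learn where to look from labels alone.
    """
    pred = np.asarray(pred_locs, dtype=float)
    roi = np.asarray(roi_locs, dtype=float)
    return float(np.mean(np.sum((pred - roi) ** 2, axis=-1)))

# Two ROIs per image, (row, col) coordinates normalized to [0, 1].
pred = [[0.2, 0.3], [0.7, 0.8]]
rois = [[0.25, 0.3], [0.7, 0.75]]
loss = guided_attention_loss(pred, rois)
print(loss)
```

In practice the guidance term would be weighted against the classification loss, with the weight tuned on held-out data.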
  5.
    One of the primary tasks in neuroimaging is to simplify spatiotemporal scans of the brain (i.e., fMRI scans) by partitioning the voxels into a set of functional brain regions. An emerging line of research utilizes multiple fMRI scans, from a group of subjects, to calculate a single group consensus functional partition. This consensus-based approach is promising as it allows the model to improve the signal-to-noise ratio in the data. However, existing approaches are primarily non-parametric, which poses problems when new samples are introduced. Furthermore, most existing approaches calculate a single partition for multiple subjects, which fails to account for the functional and anatomical variability between different subjects. In this work, we study the problem of group-cohesive functional brain region discovery, where the goal is to use information from a group of subjects to learn “group-cohesive” but individualized brain partitions for multiple fMRI scans. This problem is challenging since neuroimaging datasets are usually quite small and noisy. We introduce a novel deep parametric model based upon graph convolution, called the Brain Region Extraction Network (BREN). By treating the fMRI data as a graph, we are able to integrate information from neighboring voxels during brain region discovery, which helps reduce noise for each subject. Our model is trained with a Siamese architecture to encourage partitions that are group-cohesive. Experiments on both synthetic and real-world data show the effectiveness of our proposed approach.
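The neighborhood aggregation behind the graph-convolution idea in the BREN abstract above can be seen in one propagation step: each voxel's time series is replaced by the degree-normalized average over itself and its graph neighbors, smoothing noise before region assignment. A minimal numpy sketch of that step (not BREN itself; the toy graph and signals are assumptions):

```python
import numpy as np

def graph_smooth(X, A):
    """One graph-convolution-style propagation step.

    X: (n_voxels, n_timepoints) signal matrix.
    A: (n_voxels, n_voxels) binary adjacency matrix.
    Returns the row-normalized aggregation D^{-1} (A + I) X, i.e. each
    voxel averaged with its neighbors, which damps voxel-level noise.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # inverse degree matrix
    return D_inv @ A_hat @ X

# 4 voxels on a path graph (0-1-2-3), 3 time points each.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1., 1., 1.],   # voxels 0-1: one "region"
              [1., 1., 1.],
              [5., 5., 5.],   # voxels 2-3: another "region"
              [5., 5., 5.]])
S = graph_smooth(X, A)
print(S)
```

Interior voxels that border the other region (1 and 2) get pulled toward the boundary average, while voxels 0 and 3 keep their region's value; stacking such layers with learned weights is the usual graph-convolution extension of this step.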
  6.