
Search for: All records

Creators/Authors contains: "Moore, C"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available July 12, 2023
  2. Phytoplankton photosynthetic physiology can be investigated through single-turnover variable chlorophyll fluorescence (ST-ChlF) approaches, which carry unique potential to autonomously collect data at high spatial and temporal resolution. Over the past decades, significant progress has been made in the development and application of ST-ChlF methods in aquatic ecosystems, and in the interpretation of the resulting observations. At the same time, however, an increasing number of sensor types, sampling protocols, and data processing algorithms have created confusion and uncertainty among potential users, with a growing divergence of practice among different research groups. In this review, we assist the existing and upcoming user community by providing an overview of current approaches and consensus recommendations for the use of ST-ChlF measurements to examine in-situ phytoplankton productivity and photo-physiology. We argue that consistency of practice and adherence to basic operational and quality control standards are critical to ensuring data inter-comparability. Large datasets of inter-comparable and globally coherent ST-ChlF observations hold the potential to reveal large-scale patterns and trends in phytoplankton photo-physiology, photosynthetic rates, and bottom-up controls on primary productivity. As such, they could provide invaluable physiological observations on the scales relevant for the development and validation of ecosystem models and remote sensing algorithms.
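Many of the photo-physiological parameters derived from ST-ChlF induction curves build on one simple quantity: the maximum quantum yield of PSII photochemistry, Fv/Fm, computed from the minimal (F0) and maximal (Fm) fluorescence. As a minimal sketch of that calculation, independent of any particular sensor or protocol:

```python
def fv_over_fm(f0, fm):
    """Maximum PSII photochemical quantum yield, Fv/Fm = (Fm - F0) / Fm,
    from minimal (F0) and maximal (Fm) single-turnover fluorescence."""
    if fm <= 0 or f0 < 0 or f0 > fm:
        raise ValueError("require 0 <= F0 <= Fm and Fm > 0")
    return (fm - f0) / fm

# Example: F0 = 150, Fm = 400 gives Fv/Fm = 0.625
print(fv_over_fm(150.0, 400.0))
```

Even this simplest quantity depends on instrument calibration and blank corrections applied to F0 and Fm, which is exactly the kind of inter-comparability concern the review's consensus recommendations address.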
  3. One of the primary tasks in neuroimaging is to simplify spatiotemporal scans of the brain (i.e., fMRI scans) by partitioning the voxels into a set of functional brain regions. An emerging line of research utilizes multiple fMRI scans, from a group of subjects, to calculate a single group consensus functional partition. This consensus-based approach is promising as it allows the model to improve the signal-to-noise ratio in the data. However, existing approaches are primarily non-parametric which poses problems when new samples are introduced. Furthermore, most existing approaches calculate a single partition for multiple subjects which fails to account for the functional and anatomical variability between different subjects. In this work, we study the problem of group-cohesive functional brain region discovery where the goal is to use information from a group of subjects to learn “group-cohesive” but individualized brain partitions for multiple fMRI scans. This problem is challenging since neuroimaging datasets are usually quite small and noisy. We introduce a novel deep parametric model based upon graph convolution, called the Brain Region Extraction Network (BREN). By treating the fMRI data as a graph, we are able to integrate information from neighboring voxels during brain region discovery which helps reduce noise for each subject. Our model is trained with a Siamese architecture to encourage partitions that are group-cohesive. Experiments on both synthetic and real-world data show the effectiveness of our proposed approach.
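The abstract names two ingredients: a parametric model producing voxel-to-region assignments, and a Siamese objective encouraging agreement across subjects. A hypothetical sketch of the Siamese cohesion idea follows; BREN's actual graph-convolutional architecture and loss are not reproduced here, and `soft_partition`, the centroid parameterization, and the squared-difference loss are all illustrative assumptions:

```python
import numpy as np

def soft_partition(features, centroids):
    """Soft voxel-to-region assignment via a softmax over negative squared
    distances — a stand-in for BREN's graph-convolutional encoder."""
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    d = d - d.min(axis=1, keepdims=True)   # stabilize the softmax
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def siamese_cohesion_loss(assign_a, assign_b):
    """Penalize disagreement between two subjects' soft partitions,
    encouraging group-cohesive (but still individualized) regions."""
    return float(((assign_a - assign_b) ** 2).mean())

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(100, 8))                   # 100 voxels, subject A
feats_b = feats_a + 0.1 * rng.normal(size=(100, 8))   # similar subject B
centroids = rng.normal(size=(5, 8))                   # 5 candidate regions
loss = siamese_cohesion_loss(soft_partition(feats_a, centroids),
                             soft_partition(feats_b, centroids))
```

Because each subject keeps its own assignment matrix, the loss pulls partitions toward each other without forcing the single shared partition the abstract argues against.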
  4. Neuroimaging data typically undergoes several preprocessing steps before further analysis and mining can be done. Affine image registration is one of the important tasks during preprocessing. Recently, several image registration methods which are based on Convolutional Neural Networks have been proposed. However, due to the high computational and memory requirements of CNNs, these methods cannot be used in real time for large neuroimaging data like fMRI. In this paper, we propose a Dual-Attention Recurrent Network (DRN) which uses a hard attention mechanism to allow the model to focus on small, but task-relevant, parts of the input image – thus reducing computational and memory costs. Furthermore, DRN naturally supports inhomogeneity between the raw input image (e.g., functional MRI) and the image we want to align it to (e.g., anatomical MRI), so it can be applied to harder registration tasks such as fMRI coregistration and normalization. Extensive experiments on two different datasets demonstrate that DRN significantly reduces the computational and memory costs compared with other neural network-based methods without sacrificing the quality of image registration.
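The cost-saving mechanism named here — hard attention that extracts a small crop instead of processing the whole volume — can be illustrated with a toy glimpse operator. This is a hypothetical sketch; DRN's learned recurrent attention policy is not reproduced, and `glimpse` is an assumed helper name:

```python
import numpy as np

def glimpse(image, center, size):
    """Hard attention: extract a small, task-relevant crop so downstream
    layers see `size`x`size` values instead of the full image
    (the crop may be truncated at the image border)."""
    r, c = center
    h = size // 2
    r0, c0 = max(r - h, 0), max(c - h, 0)
    return image[r0:r0 + size, c0:c0 + size]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in fMRI slice
patch = glimpse(img, center=(32, 32), size=8)
print(patch.shape)  # (8, 8) — 64 values instead of 4096 in the full slice
```

A recurrent model would emit a new `center` at each step, so total compute scales with the number of glimpses rather than with image size — the source of the memory and compute savings the abstract claims.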
  5. Attention-based image classification has gained increasing popularity in recent years. State-of-the-art methods for attention-based classification typically require a large training set and operate under the assumption that the label of an image depends solely on a single object (i.e., region of interest) in the image. However, in many real-world applications (e.g., medical imaging), it is very expensive to collect a large training set. Moreover, the label of each image is usually determined jointly by multiple regions of interest (ROIs). Fortunately, for such applications, it is often possible to collect the locations of the ROIs in each training image. In this paper, we study the problem of guided multi-attention classification, the goal of which is to achieve high accuracy under the dual constraints of (1) small sample size, and (2) multiple ROIs for each image. We propose a model, called Guided Attention Recurrent Network (GARN), for multi-attention classification. Unlike existing attention-based methods, GARN utilizes guidance information regarding multiple ROIs, thus allowing it to work well even when the sample size is small. Empirical studies on three different visual tasks show that our guided attention approach can effectively boost model performance for multi-attention image classification.
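The guidance idea — using known ROI locations to steer attention when training data are scarce — can be sketched as an auxiliary loss that penalizes attention mass falling outside the labeled ROIs. This is a hypothetical formulation (GARN's actual objective is not given in the abstract, and `guidance_loss` is an assumed name):

```python
import numpy as np

def guidance_loss(attention_maps, roi_masks):
    """For each (attention map, binary ROI mask) pair, measure the fraction
    of attention mass outside the labeled ROI; averaging over maps gives
    one guidance term per image with multiple ROIs."""
    loss = 0.0
    for att, mask in zip(attention_maps, roi_masks):
        p = att / att.sum()                      # normalize to a distribution
        loss += float((p * (1.0 - mask)).sum())  # mass outside the ROI
    return loss / len(attention_maps)

att = np.zeros((4, 4)); att[1, 1] = 1.0    # attention peaked at (1, 1)
mask = np.zeros((4, 4)); mask[1, 1] = 1.0  # labeled ROI at (1, 1)
print(guidance_loss([att], [mask]))        # 0.0 — attention inside the ROI
```

Added to an ordinary classification loss, such a term injects supervision from the ROI annotations, which is one plausible reading of how guidance lets the model cope with a small sample size.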
  7. Free, publicly-accessible full text available June 1, 2023
  8. Free, publicly-accessible full text available June 1, 2023
  9. Free, publicly-accessible full text available June 1, 2023