This paper focuses on decoding the process of face verification in the human brain using fMRI responses. A total of 2400 fMRI responses are collected from different participants while they perform face verification on genuine and imposter stimulus face pairs. The first part of the paper analyzes these responses, covering both cognitive and fMRI neuroimaging results. With an average verification accuracy of 64.79% across human participants, the cognitive analysis shows that female participants perform significantly better than male participants on imposter pairs. The neuroimaging analysis identifies brain regions such as the left fusiform gyrus, caudate nucleus, and superior frontal gyrus that are activated when participants perform the face verification task. The second part of the paper proposes a novel two-level fMRI dictionary learning approach to predict whether the observed stimulus is genuine or an imposter using the brain activation data for selected regions. A comparative analysis with existing machine learning techniques shows that the proposed approach yields at least 4.5% higher classification accuracy than other algorithms. It is envisioned that the results of this study are a first step toward designing brain-inspired automatic face verification algorithms.
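The two-level dictionary learning classifier is only described at a high level in the abstract. The sketch below illustrates the general idea it alludes to, per-class dictionaries learned from region-of-interest activations, with a test response assigned to the class whose dictionary reconstructs it with the smallest error. The feature sizes, hyper-parameters, and use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Illustrative only: X_genuine / X_imposter would hold ROI activation
# vectors (one row per fMRI response); 200 features is an assumed size.
rng = np.random.default_rng(0)
X_genuine = rng.standard_normal((100, 200))
X_imposter = rng.standard_normal((100, 200))

def fit_dictionary(X, n_atoms=32, alpha=1.0):
    """Learn a sparse dictionary for one stimulus class."""
    return MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=alpha, random_state=0
    ).fit(X)

def reconstruction_error(model, x):
    """Sparse-code x with the class dictionary and measure the residual."""
    code = model.transform(x[None, :])
    recon = code @ model.components_
    return np.linalg.norm(x - recon)

dl_genuine = fit_dictionary(X_genuine)
dl_imposter = fit_dictionary(X_imposter)

def predict(x):
    # Assign the class whose dictionary reconstructs the response best.
    return ("genuine"
            if reconstruction_error(dl_genuine, x)
            < reconstruction_error(dl_imposter, x)
            else "imposter")

print(predict(X_genuine[0]))
```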
Joint fMRI analysis and subject clustering using sparse dictionary learning
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting their sparsity, clusters of subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests on synthetic fMRI data show promising results and provide insight into the proposed technique.
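The sparse dictionary learning building block of this formulation can be sketched as follows; the subspace-clustering constraint and entry-wise cross-validation are not shown, and the data sizes, hyper-parameters, and use of scikit-learn are assumptions rather than the paper's algorithm.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Synthetic stand-in for one subject's data: rows = voxels, columns = time
# points (sizes are assumed), so the sparse codes play the role of spatial maps.
rng = np.random.default_rng(1)
Y = rng.standard_normal((500, 120))

model = DictionaryLearning(
    n_components=10,                 # number of components (assumed)
    alpha=1.0,                       # sparsity weight (assumed)
    transform_algorithm="lasso_lars",
    random_state=0,
)
spatial_maps = model.fit_transform(Y)   # sparse spatial maps, shape (500, 10)
time_courses = model.components_        # component time courses, shape (10, 120)

print(spatial_maps.shape, time_courses.shape)
```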
- PAR ID: 10073490
- Date Published:
- Journal Name: Proc. SPIE 10394, Wavelets and Sparsity XVII
- Page Range / eLocation ID: 12
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The decomposition of multi-subject fMRI data using rank-(L,L,1,1) block term decomposition (BTD) can preserve the higher-way data structure and is more robust to noise, as it decomposes shared spatial maps (SMs) into a product of two rank-L loading matrices. However, since the number of whole-brain voxels is very large and rank L is greater than 1, the rank-(L,L,1,1) BTD requires substantial computation and memory. We therefore propose an accelerated rank-(L,L,1,1) BTD algorithm based on the method of alternating least squares (ALS). We speed up the updates of the loading matrices by reducing the fMRI data to subspaces, and add an orthonormality constraint on the shared SMs to improve performance. Moreover, we evaluate the effect of the rank L on the proposed method for actual task-related fMRI data. The proposed method performs best when L=35. Experimental comparisons also verify that the proposed method reduces computation time by a factor of 17.36 relative to ALS while providing satisfactory separation performance.
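For reference, a generic rank-(L,L,1,1) BTD of a fourth-order tensor can be written as below; the assignment of modes (e.g., two spatial modes, time, and subject) is an assumption, since the abstract does not spell out the tensor arrangement.

```latex
% Rank-(L,L,1,1) block term decomposition: each shared spatial map is the
% rank-L matrix A_r B_r^{\top} (product of two rank-L loading matrices),
% while c_r and d_r are rank-1 factors along the remaining two modes.
\mathcal{X} \;\approx\; \sum_{r=1}^{R} \left( A_r B_r^{\top} \right) \circ c_r \circ d_r,
\qquad A_r \in \mathbb{R}^{I \times L},\;
B_r \in \mathbb{R}^{J \times L},\;
c_r \in \mathbb{R}^{K},\;
d_r \in \mathbb{R}^{M}.
```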
Data-driven methods have been widely used in functional magnetic resonance imaging (fMRI) data analysis. They extract latent factors, generally through the use of a simple generative model. Independent component analysis (ICA) and dictionary learning (DL) are two popular data-driven methods based on two different forms of diversity, i.e., statistical properties of the data: statistical independence for ICA and sparsity for DL. Despite their popularity, the comparative advantage of emphasizing one property over the other in the decomposition of fMRI data is not well understood. Such a comparison is made harder by the differences in the modeling assumptions between ICA and DL, as well as among different ICA algorithms, each of which exploits a different form of diversity. In this paper, we propose the use of objective global measures, such as the time course frequency power ratio, network connection summary, and graph-theoretical metrics, to gain insight into the role that different types of diversity play in the analysis of fMRI data. Four ICA algorithms that account for different types of diversity and one DL algorithm are studied. We apply these algorithms to real fMRI data collected from patients with schizophrenia and healthy controls. Our results suggest that no single method performs best on all metrics, implying that the optimal method will change depending on the goal of the analysis. However, we note that in none of the scenarios we test does the highly popular Infomax algorithm provide the best performance, demonstrating the cost of exploiting a limited form of diversity.
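Of the global measures mentioned, the time course frequency power ratio is the most self-contained; a minimal sketch of one common way to compute it (power below a low-frequency cut-off divided by power above it, for an estimated component time course) is given below. The cut-off, repetition time, and exact ratio definition are assumptions and may differ from the paper's.

```python
import numpy as np

def frequency_power_ratio(tc, tr=2.0, cutoff_hz=0.1):
    """Ratio of low- to high-frequency power in a component time course.

    tc        : 1-D array, one estimated time course
    tr        : repetition time in seconds (assumed value)
    cutoff_hz : boundary between "low" and "high" frequencies (assumed)
    """
    tc = tc - tc.mean()
    power = np.abs(np.fft.rfft(tc)) ** 2
    freqs = np.fft.rfftfreq(len(tc), d=tr)
    low = power[freqs <= cutoff_hz].sum()
    high = power[freqs > cutoff_hz].sum()
    return low / high

# Example with a synthetic time course (100 volumes, TR = 2 s).
rng = np.random.default_rng(2)
tc = np.sin(2 * np.pi * 0.03 * np.arange(100) * 2.0) + 0.3 * rng.standard_normal(100)
print(frequency_power_ratio(tc))
```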
This work examines a combined supervised-unsupervised framework involving dictionary-based blind learning and deep supervised learning for MR image reconstruction from under-sampled k-space data. A major focus of the work is to investigate the possible synergy of learned features in traditional shallow reconstruction using sparsity-based priors and deep prior-based reconstruction. Specifically, we propose a framework that uses an unrolled network to refine a blind dictionary learning based reconstruction. We compare the proposed method with strictly supervised deep learning-based reconstruction approaches on several datasets of varying sizes and anatomies.
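As a rough illustration of the "unrolled refinement" idea, the toy PyTorch module below alternates a small learned denoiser with a k-space data-consistency step, starting from an initial image that stands in for the dictionary-learning reconstruction. The architecture, number of iterations, and sampling mask are assumptions; this is not the paper's network.

```python
import torch
import torch.nn as nn

class UnrolledRefiner(nn.Module):
    """Toy unrolled refinement: CNN update + k-space data consistency."""

    def __init__(self, n_iters=3):
        super().__init__()
        self.denoisers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 2, 3, padding=1),
            )
            for _ in range(n_iters)
        )

    @staticmethod
    def data_consistency(x, kspace, mask):
        # Replace sampled k-space locations with the measured values.
        img = torch.view_as_complex(x.permute(0, 2, 3, 1).contiguous())
        k = torch.fft.fft2(img)
        k = torch.where(mask, kspace, k)
        out = torch.fft.ifft2(k)
        return torch.view_as_real(out).permute(0, 3, 1, 2).contiguous()

    def forward(self, x0, kspace, mask):
        x = x0                                # e.g., DL-based initial recon
        for denoiser in self.denoisers:
            x = x + denoiser(x)               # learned refinement step
            x = self.data_consistency(x, kspace, mask)
        return x

# Minimal usage with synthetic data (real/imaginary parts as channels).
x0 = torch.randn(1, 2, 64, 64)
kspace = torch.fft.fft2(torch.randn(1, 64, 64, dtype=torch.complex64))
mask = torch.rand(1, 64, 64) < 0.3            # assumed undersampling pattern
out = UnrolledRefiner()(x0, kspace, mask)
print(out.shape)
```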
X-ray fluorescence (XRF) spectroscopy is a common technique in the field of heritage science. However, data processing and interpretation remain a challenge, as they are time consuming and often require a priori knowledge of the composition of the materials present in the analyzed objects. For this reason, we developed an open-source, unsupervised dictionary learning algorithm that reduces the complexity of large datasets containing tens of thousands of spectra and identifies patterns. The algorithm runs in Julia, a programming language that allows for faster data processing than Python and R. This approach quickly reduces the number of variables and creates correlated elemental maps, characteristic of pigments containing various elements or of pigment mixtures. It builds an overcomplete dictionary that is learned from the input data itself, thereby reducing the a priori knowledge required of the user. The feasibility of the method was first confirmed by applying it to a mock-up board containing various known pigment mixtures. The algorithm was then applied to a macro-XRF (MA-XRF) dataset obtained on an 18th-century Mexican painting, and it positively identified smalt (a pigment characterized by the co-occurrence of cobalt, arsenic, bismuth, nickel, and potassium), mixtures of vermilion and lead white, and two complex conservation materials/interventions. Moreover, the algorithm identified correlated elements that were not found using the traditional elemental-maps approach without image processing. The approach proved very useful, as it yielded the same conclusions as the traditional elemental-maps approach followed by map comparison but with much faster data processing, and no image processing or user manipulation was required to understand elemental correlations. This open-source, freely available code, running on a platform that allows faster processing of larger datasets, represents a useful resource for better understanding the pigments and mixtures used in historical paintings and their possibly multiple conservation campaigns.
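The released tool is written in Julia; purely as an illustration of the underlying technique, the sketch below applies non-negative sparse dictionary learning to a spectra matrix in Python with scikit-learn (spectra and elemental contributions are non-negative). The array sizes, number of atoms, and sparsity weight are assumptions and do not reproduce the authors' code.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# spectra: one row per measurement point, one column per energy channel
# (sizes are assumed; real MA-XRF scans would be reshaped from a 2-D grid).
rng = np.random.default_rng(3)
spectra = rng.random((5000, 1024))

model = MiniBatchDictionaryLearning(
    n_components=20,          # number of dictionary spectra (assumed)
    alpha=1.0,                # sparsity of per-pixel abundances (assumed)
    positive_dict=True,       # XRF spectra are non-negative
    positive_code=True,
    random_state=0,
)
abundances = model.fit_transform(spectra)   # (n_points, n_components)
atoms = model.components_                    # (n_components, n_channels)

# Reshaping the abundance columns back to the scan grid yields correlated
# maps analogous to the pigment/mixture maps described above.
print(abundances.shape, atoms.shape)
```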