Title: Reliability of energy landscape analysis of resting‐state functional MRI data
Abstract Energy landscape analysis is a data‐driven method to analyse multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test–retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e. within‐participant reliability) than across different sets of sessions from different participants (i.e. between‐participant reliability). We show that the energy landscape analysis has significantly higher within‐participant than between‐participant test–retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test–retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual‐level energy landscape analysis for given data sets with a statistically controlled reliability.
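As an illustrative sketch only (not the authors' code), the pairwise maximum entropy (Ising) model underlying this analysis assigns an energy to each binarized activity pattern; the local fields h and couplings J below are random placeholders standing in for parameters that would in practice be fitted, e.g., by likelihood maximization or a variational Bayesian method.

```python
# Minimal sketch: Ising energy of binarized fMRI activity patterns.
import numpy as np

def ising_energy(sigma, h, J):
    """Energy of a +/-1 activity pattern under an Ising model.

    sigma : (N,) array of +1/-1 states (one per region of interest)
    h     : (N,) array of local fields
    J     : (N, N) symmetric coupling matrix with zero diagonal
    """
    return -h @ sigma - 0.5 * sigma @ J @ sigma

# Toy example: binarize an ROI time series at each region's mean signal,
# then evaluate the energy of every observed pattern.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 7))        # 200 volumes, 7 ROIs (toy data)
patterns = np.where(ts >= ts.mean(axis=0), 1, -1)

h = rng.standard_normal(7) * 0.1          # placeholder parameters; in the
J = rng.standard_normal((7, 7)) * 0.1     # actual analysis these are fitted
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

energies = np.array([ising_energy(p, h, J) for p in patterns])
print(energies[:5])
```

Lower-energy patterns are visited more often, and the landscape formed by these energies is what the reliability indices summarize.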
Award ID(s):
2204936
PAR ID:
10512445
Author(s) / Creator(s):
 ;  ;  ;  ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
European Journal of Neuroscience
Volume:
60
Issue:
3
ISSN:
0953-816X
Format(s):
Medium: X
Size(s):
p. 4265-4290
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Electroencephalogram (EEG) microstate analysis entails finding dynamics of quasi-stable and generally recurrent discrete states in multichannel EEG time series data and relating properties of the estimated state-transition dynamics to observables such as cognition and behavior. While microstate analysis has been widely employed to analyze EEG data, its use remains less prevalent in functional magnetic resonance imaging (fMRI) data, largely due to the slower timescale of such data. In the present study, we extend various data clustering methods used in EEG microstate analysis to resting-state fMRI data from healthy humans to extract their state-transition dynamics. We show that the quality of clustering is on par with that for various microstate analyses of EEG data. We then develop a method for examining test–retest reliability of the discrete-state transition dynamics between fMRI sessions and show that the within-participant test–retest reliability is higher than between-participant test–retest reliability for different indices of state-transition dynamics, different networks, and different data sets. This result suggests that state-transition dynamics analysis of fMRI data could discriminate between different individuals and is a promising tool for performing fingerprinting analysis of individuals. 
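A minimal sketch of the kind of pipeline described above, assuming k-means as the clustering method and toy data in place of fMRI volumes; the number of states and all variable names are illustrative, not the paper's.

```python
# Illustrative sketch: cluster volumes into discrete states with k-means
# and estimate the empirical state-transition matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frames = rng.standard_normal((300, 30))   # toy data: 300 volumes, 30 ROIs

k = 4                                     # number of states (assumed here)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(frames)

# Transition matrix: P[i, j] = probability of moving from state i to state j.
P = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)
print(np.round(P, 2))
```

Indices of the state-transition dynamics (for example, transition probabilities such as P or dwell times) can then be compared within and between participants, as in the reliability analysis described above.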
  2. Cherifi, Hocine (Ed.)
    We review a class of energy landscape analysis methods that use the Ising model and take multivariate time series data as input. The method allows one to capture the dynamics of the data as trajectories of a ball that moves from one basin to another, constrained on the energy landscape specified by the estimated Ising model. While this energy landscape analysis has mostly been applied to functional magnetic resonance imaging (fMRI) data from the brain for historical reasons, there are emerging applications outside fMRI data and neuroscience. To inform such applications in various research fields, this review paper provides a detailed tutorial on each step of the analysis, the terminology and concepts underlying the method, and validation, as well as recent developments of extended and related methods.
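To make the basin picture concrete, here is a compact, hypothetical sketch of one common step: enumerating all binary patterns for a small number of nodes, locating local minima under single-spin flips, and assigning each pattern to a basin by greedy descent. The parameter values are random placeholders rather than fitted ones.

```python
# Sketch: basin identification on a small energy landscape.
import itertools
import numpy as np

def energy(sigma, h, J):
    return -h @ sigma - 0.5 * sigma @ J @ sigma

def basin_of(sigma, h, J):
    """Follow steepest single-spin-flip descent until a local minimum."""
    sigma = np.array(sigma, dtype=float)
    while True:
        best, best_e = None, energy(sigma, h, J)
        for i in range(len(sigma)):
            cand = sigma.copy()
            cand[i] *= -1
            e = energy(cand, h, J)
            if e < best_e:
                best, best_e = cand, e
        if best is None:
            return tuple(sigma.astype(int))   # local minimum = basin label
        sigma = best

rng = np.random.default_rng(2)
N = 5                                  # small N so exhaustive search is cheap
h = rng.standard_normal(N) * 0.2       # placeholder fitted parameters
J = rng.standard_normal((N, N)) * 0.2
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

basins = {}
for bits in itertools.product([-1, 1], repeat=N):
    basins.setdefault(basin_of(bits, h, J), []).append(bits)
print(f"{len(basins)} basins found over {2**N} patterns")
```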
  3.
    Abstract Background: Ecological momentary assessment (EMA) is a methodology involving repeated surveys to collect in situ data that describe respondents' current or recent experiences and related contexts in their natural environments. Audiology literature investigating the test-retest reliability of EMA is scarce. Purpose: This article examines the test-retest reliability of EMA in measuring the characteristics of listening contexts and listening experiences. Research Design: An observational study. Study Sample: Fifty-one older adults with hearing loss. Data Collection and Analysis: The study was part of a larger study that examined the effect of hearing aid technologies. The larger study had four trial conditions, and outcomes were measured using a smartphone-based EMA system. After completing the four trial conditions, participants repeated one of the conditions to examine the EMA test-retest reliability. The EMA surveys contained questions that assessed listening context characteristics including talker familiarity, talker location, and noise location, as well as listening experiences including speech understanding, listening effort, loudness satisfaction, and hearing aid satisfaction. The data from multiple EMA surveys collected by each participant were aggregated in each of the test and retest conditions. Test-retest correlation on the aggregated data was then calculated for each EMA survey question to determine the reliability of EMA. Results: At the group level, listening context characteristics and listening experience did not change between the test and retest conditions. The test-retest correlation varied across the EMA questions, with the highest being the questions that assessed talker location (median r = 1.0), reverberation (r = 0.89), and speech understanding (r = 0.85), and the lowest being the items that quantified noise location (median r = 0.63), talker familiarity (r = 0.46), listening effort (r = 0.61), loudness satisfaction (r = 0.60), and hearing aid satisfaction (r = 0.61). Conclusion: Several EMA questions yielded appropriate test-retest reliability results. The lower test-retest correlations for some EMA survey questions were likely due to fewer surveys completed by participants and poorly designed questions. Therefore, the present study stresses the importance of using validated questions in EMA. With sufficient numbers of surveys completed by respondents and with appropriately designed survey questions, EMA could have reasonable test-retest reliability in audiology research.
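A hypothetical sketch of the aggregate-then-correlate step described above: each participant's repeated EMA ratings are averaged within the test and retest conditions, and the aggregated values are correlated across participants. The column names and toy data are assumptions, not the study's materials.

```python
# Sketch: aggregate repeated EMA ratings per participant and condition,
# then compute the test-retest correlation across participants.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
rows = []
for pid in range(20):
    base = rng.uniform(3, 9)                     # participant's typical rating
    for cond in ("test", "retest"):
        for _ in range(rng.integers(5, 15)):     # surveys completed per condition
            rows.append((pid, cond, base + rng.normal(0, 1)))
ema = pd.DataFrame(rows, columns=["participant", "condition", "listening_effort"])

agg = ema.groupby(["participant", "condition"])["listening_effort"].mean().unstack()
r, p = pearsonr(agg["test"], agg["retest"])
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```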
  4. Abstract The approach-avoidance task (AAT) is an implicit task that measures people's behavioral tendencies to approach or avoid stimuli in the environment. In recent years, it has been used successfully to help explain a variety of health problems (e.g., addictions and phobias). Unfortunately, more recent AAT studies have failed to replicate earlier promising findings. One explanation for these replication failures could be that the AAT does not reliably measure approach-avoidance tendencies. Here, we first review existing literature on the reliability of various versions of the AAT. Next, we examine the AAT's reliability in a large and diverse sample (N = 1077; 248 of whom completed all sessions). Using a smartphone-based, mobile AAT, we measured participants' approach-avoidance tendencies eight times over a period of seven months (one measurement per month) in two distinct stimulus sets (happy/sad expressions and disgusting/neutral stimuli). The mobile AAT's split-half reliability was adequate for face stimuli (r = .85), but low for disgust stimuli (r = .72). Its test–retest reliability based on a single measurement was poor for either stimulus set (all ICC1s < .3). Its test–retest reliability based on the average of all eight measurements was moderately good for face stimuli (ICCk = .73), but low for disgust stimuli (ICCk = .5). Results suggest that single-measurement AATs could be influenced by unexplained temporal fluctuations of approach-avoidance tendencies. These fluctuations could be examined in future studies. Until then, this work suggests that future research using the AAT should rely on multiple rather than single measurements.
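For readers unfamiliar with the reliability indices quoted above, the following is a minimal sketch (not the study's code) of one-way random-effects intraclass correlations: ICC(1) for a single measurement and ICC(k) for the average of k sessions, computed here on simulated bias scores.

```python
# Sketch: one-way random-effects ICC(1) and ICC(k) from repeated measurements.
import numpy as np

def one_way_icc(scores):
    """scores: (n_subjects, k_sessions) array of bias scores."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    ms_between = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icck = (ms_between - ms_within) / ms_between
    return icc1, icck

rng = np.random.default_rng(4)
true_bias = rng.normal(0, 1, size=100)                      # stable individual bias
sessions = true_bias[:, None] + rng.normal(0, 2, (100, 8))  # 8 noisy monthly sessions
icc1, icck = one_way_icc(sessions)
print(f"ICC1 = {icc1:.2f}, ICCk = {icck:.2f}")
```

Averaging over more sessions raises ICC(k) relative to ICC(1), which is why the abstract recommends multiple rather than single measurements.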
  5. Abstract Objective: Neuropsychological testing is essential for both clinical and basic stroke research; however, the in-person nature of this testing is a limitation. Virtual testing overcomes the hurdles of geographic location, mobility issues and permits social distancing, yet its validity has received relatively little investigation, particularly in comparison with in-person testing. Method: We expand on our prior findings of virtual testing feasibility by assessing virtual versus in-person administration of language and communication tasks with 48 left-hemisphere stroke patients (21 F, 27 M; mean age = 63.4 ± 12; mean years of education = 15.3 ± 3.5) in a quasi-test–retest paradigm. Each participant completed two testing sessions: one in their home and one in the research lab. Participants were assigned to one of the eight groups, with the testing condition (fully in-person, partially virtual), order of home session (first, second) and technology (iPad, Windows tablet) varied across groups. Results: Across six speech-language tasks that utilized varying response modalities and interfaces, we found no significant difference in performance between virtual and in-person testing. However, our results reveal key considerations for successful virtual administration of neuropsychological tests, including technology complications and disparities in internet access. Conclusions: Virtual administration of neuropsychological assessments demonstrates comparable reliability with in-person data collection involving stroke survivors, though technology issues must be taken into account.