Title: James–Stein Estimator Improves Accuracy and Sample Efficiency in Human Kinematic and Metabolic Data
Abstract Human biomechanical data are often accompanied by measurement noise and behavioral variability. Errors due to such noise and variability are usually exaggerated by fewer trials or shorter trial durations and could be reduced using more trials or longer trial durations. Speeding up data collection by lowering the number of trials or the trial duration, while improving the accuracy of statistical estimates, would be of particular interest in wearable robotics applications and when the human population studied is vulnerable (e.g., the elderly). Here, we propose the use of the James–Stein estimator (JSE) to improve statistical estimates with a given amount of data or to reduce the amount of data needed for a given accuracy. The JSE is a shrinkage estimator that produces a uniform reduction in the summed squared error when compared with the more familiar maximum likelihood estimator (MLE), simple averages, or other least squares regressions. When data from multiple human participants are available, an individual participant’s JSE can improve upon the MLE by incorporating information from all participants, improving overall estimation accuracy on average. Here, we apply the JSE to multiple time series of kinematic and metabolic data from the following parameter estimation problems: foot placement control during level walking, energy expenditure during circle walking, and energy expenditure during resting. We show that the resulting estimates improve accuracy—that is, the James–Stein estimates have lower summed squared error from the ‘true’ value compared with more conventional estimates.
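The shrinkage described in the abstract can be sketched in a few lines. The function below is a generic, positive-part James–Stein estimator that shrinks each participant's sample mean toward the grand mean, assuming a known, common variance for each mean; the variable names and the p − 3 form are standard textbook choices, not the paper's specific implementation.

```python
import numpy as np

def james_stein(means, sigma2):
    """Positive-part James-Stein estimator.

    Shrinks each participant's sample mean (the per-participant MLE)
    toward the grand mean across participants. `sigma2` is the assumed
    known, common variance of each mean; the p - 3 form requires at
    least 4 participants. Illustrative sketch, not the paper's code.
    """
    means = np.asarray(means, dtype=float)
    p = means.size
    grand = means.mean()
    resid = means - grand
    ss = np.sum(resid ** 2)
    # Shrinkage factor, clipped at 0 (the "positive-part" variant):
    # noisier data (large sigma2 relative to spread) shrinks harder.
    c = max(0.0, 1.0 - (p - 3) * sigma2 / ss)
    return grand + c * resid
```

Because the factor c lies in [0, 1], every James–Stein estimate sits between the participant's own mean and the grand mean, which is how information from all participants is pooled into each individual estimate.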
Award ID(s):
2014506
PAR ID:
10583088
Author(s) / Creator(s):
;
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
Annals of Biomedical Engineering
Volume:
53
Issue:
7
ISSN:
0090-6964
Format(s):
Medium: X Size: p. 1604-1614
Sponsoring Org:
National Science Foundation
More Like this
  1. Advances in artificial intelligence have inspired a paradigm shift in human neuroscience, yielding large-scale functional magnetic resonance imaging (fMRI) datasets that provide high-resolution brain responses to thousands of naturalistic visual stimuli. Because such experiments necessarily involve brief stimulus durations and few repetitions of each stimulus, achieving sufficient signal-to-noise ratio can be a major challenge. We address this challenge by introducing GLMsingle, a scalable, user-friendly toolbox available in MATLAB and Python that enables accurate estimation of single-trial fMRI responses (glmsingle.org). Requiring only fMRI time-series data and a design matrix as inputs, GLMsingle integrates three techniques for improving the accuracy of trial-wise general linear model (GLM) beta estimates. First, for each voxel, a custom hemodynamic response function (HRF) is identified from a library of candidate functions. Second, cross-validation is used to derive a set of noise regressors from voxels unrelated to the experiment. Third, to improve the stability of beta estimates for closely spaced trials, betas are regularized on a voxel-wise basis using ridge regression. Applying GLMsingle to the Natural Scenes Dataset and BOLD5000, we find that GLMsingle substantially improves the reliability of beta estimates across visually responsive cortex in all subjects. Comparable improvements in reliability are also observed in a smaller-scale auditory dataset from the StudyForrest experiment. These improvements translate into tangible benefits for higher-level analyses relevant to systems and cognitive neuroscience. We demonstrate that GLMsingle: (i) helps decorrelate response estimates between trials nearby in time; (ii) enhances representational similarity between subjects within and across datasets; and (iii) boosts one-versus-many decoding of visual stimuli.
GLMsingle is a publicly available tool that can significantly improve the quality of past, present, and future neuroimaging datasets sampling brain activity across many experimental conditions. 
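The third technique named in the abstract, voxel-wise ridge regression, can be illustrated generically. This is a minimal sketch of the closed-form ridge solution for one voxel's betas, not GLMsingle's actual API; the function name and the scalar penalty `lam` are assumptions.

```python
import numpy as np

def ridge_betas(X, y, lam):
    """Ridge-regularized GLM betas for a single voxel.

    Solves (X^T X + lam * I) beta = X^T y, where X is the design
    matrix (time points x regressors) and y is the voxel's time
    series. lam = 0 recovers ordinary least squares; lam > 0 shrinks
    the betas, stabilizing estimates for closely spaced trials.
    Generic sketch, not GLMsingle's implementation.
    """
    n_reg = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_reg), X.T @ y)
```

In GLMsingle the penalty is tuned per voxel via cross-validation; here `lam` is simply a caller-supplied constant for illustration.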
  2. Abstract In virtual reality (VR), established perception–action relationships break down because of conflicting and ambiguous sensorimotor inputs, inducing walking velocity underestimations. Here, we explore the effects of realigning perceptual sensory experiences with physical movements via augmented feedback on the estimation of virtual speed. We hypothesized that providing feedback about speed would lead to concurrent perceptual improvements and that these alterations would persist once the speedometer was removed. Ten young adults used immersive VR to view a virtual hallway translating at a series of fixed speeds. Participants were tasked with matching their walking speed on a self-paced treadmill to the optic flow in the environment. Information regarding walking speed accuracy was provided during augmented feedback trials via a real-time speedometer. We measured resulting walking velocity errors, as well as kinematic gait parameters. We found that the concordance between the virtual environment and gait speeds was higher when augmented feedback was provided during the trial. Furthermore, we observed retention effects beyond the intervention period via demonstrated smaller errors in speed perception accuracy and stronger concordance between perceived and actual speeds. Together, these results highlight a potential role for augmented feedback in guiding gait strategies that deviate away from predefined internal models of locomotion. 
  3. Adaptive experimental designs can dramatically improve efficiency in randomized trials. But with adaptively collected data, common estimators based on sample means and inverse propensity-weighted means can be biased or heavy-tailed. This poses statistical challenges, in particular when the experimenter would like to test hypotheses about parameters that were not targeted by the data-collection mechanism. In this paper, we present a class of test statistics that can handle these challenges. Our approach is to adaptively reweight the terms of an augmented inverse propensity-weighting estimator to control the contribution of each term to the estimator’s variance. This scheme reduces overall variance and yields an asymptotically normal test statistic. We validate the accuracy of the resulting estimates and their CIs in numerical experiments and show that our methods compare favorably to existing alternatives in terms of mean squared error, coverage, and CI size. 
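The reweighting idea in this abstract can be sketched generically: average augmented inverse propensity-weighting (AIPW) scores with weights that damp high-variance terms. The function below is a simplified illustration (one arm, caller-supplied weights such as h_t = sqrt(e_t)); the paper's actual weighting scheme differs in detail, and all names here are assumptions.

```python
import numpy as np

def adaptively_weighted_aipw(y, a, prop, mu_hat, arm, h):
    """Weighted average of AIPW scores for one treatment arm.

    Each score is Gamma_t = 1{A_t = arm} / e_t * (Y_t - mu_hat_t)
    + mu_hat_t, where e_t is the propensity of assigning `arm` at
    time t and mu_hat_t is a model estimate of the arm's mean
    outcome. The weights h_t down-weight terms with small
    propensities, whose inverse-propensity factor would otherwise
    dominate the variance. Simplified sketch only.
    """
    scores = (a == arm) / prop * (y - mu_hat) + mu_hat
    return np.sum(h * scores) / np.sum(h)
```

With uniform weights this reduces to the ordinary AIPW estimator; non-uniform, data-adaptive weights trade a little efficiency for the stability and asymptotic normality the abstract describes.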
  4. Error monitoring is an essential human ability underlying learning and metacognition. In the time domain, humans possess a remarkable ability to learn and adapt to temporal intervals, yet the neural mechanisms underlying this are not well understood. Recently, we demonstrated that humans exhibit improvements in sensorimotor time estimates when given the chance to incorporate feedback from a previous trial (Bader and Wiener, 2021), suggesting that humans are metacognitively aware of their own timing errors. To test the neural basis of this metacognitive ability, human participants of both sexes underwent fMRI while they performed a visual temporal reproduction task with randomized suprasecond intervals (1.5–6 s). Crucially, each trial was repeated following feedback, allowing a “re-do” to learn from the successes or errors in the initial trial. Behaviorally, we replicated our previous finding that subjects improve their performance on re-do trials despite the feedback being temporally uninformative (i.e., early or late). For neuroimaging, we observed a dissociation between estimating and reproducing time intervals, with the former more likely to engage regions associated with the default mode network (DMN), including the superior frontal gyri, precuneus, and posterior cingulate, whereas the latter activated regions traditionally associated with the “Timing Network” (TN), including the supplementary motor area (SMA), precentral gyrus, and right supramarginal gyrus. Notably, the extent of the DMN was greater on re-do trials, whereas the TN was more constrained. Finally, task-based connectivity between these networks showed higher inter-network correlation on initial trials, primarily during estimation, whereas on re-do trials communication was higher during reproduction. 
Overall, these results suggest the DMN and TN work in concert to mediate subjective awareness of one’s sense of time for the purpose of improving timing performance. Significance Statement: A finely tuned sense of time perception is imperative for everyday motor actions (e.g., hitting a baseball). Timing self-regulation requires correctly assessing and, if necessary, updating duration estimates. Using a modified version of a classical time-measurement task, we explored the neural regions involved in error detection, time awareness, and learning to time. Reinforcing the role of the SMA in measuring temporal information and providing evidence of co-activation with the DMN, this study demonstrates that the brain overlays sensorimotor timing with a metacognitive awareness of its passage. 
  5. Purpose: To improve the performance of neural networks for parameter estimation in quantitative MRI, in particular when the noise propagation varies throughout the space of biophysical parameters. Theory and Methods: A theoretically well‐founded loss function is proposed that normalizes the squared error of each estimate by the respective Cramér–Rao bound (CRB)—a theoretical lower bound for the variance of an unbiased estimator. This avoids a dominance of hard‐to‐estimate parameters and areas in parameter space, which are often of little interest. The normalization with the corresponding CRB balances the large errors of fundamentally more noisy estimates and the small errors of fundamentally less noisy estimates, allowing the network to better learn to estimate the latter. Further, the proposed loss function provides an absolute evaluation metric for performance: a network has an average loss of 1 if it is a maximally efficient unbiased estimator, which can be considered the ideal performance. The performance gain with the proposed loss function is demonstrated using the example of an eight‐parameter magnetization transfer model that is fitted to phantom and in vivo data. Results: Networks trained with the proposed loss function perform close to optimal, that is, their loss converges to approximately 1, and their performance is superior to networks trained with the standard mean‐squared error (MSE). The proposed loss function reduces the bias of the estimates compared to the MSE loss and improves the match of the noise variance to the CRB. This performance gain translates to in vivo maps that align better with the literature. Conclusion: Normalizing the squared error with the CRB during the training of neural networks improves their performance in estimating biophysical parameters. 
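The CRB-normalized loss in this abstract is simple to state in code: divide each squared error by its Cramér–Rao bound before averaging, so a maximally efficient unbiased estimator scores 1. The sketch below is a generic illustration (names and array shapes are assumptions), not the paper's training code.

```python
import numpy as np

def crb_normalized_loss(estimates, targets, crb):
    """Mean squared error normalized per-parameter by its CRB.

    Dividing each squared error by the parameter's Cramér-Rao
    bound keeps intrinsically noisy parameters from dominating the
    loss; an average of 1 corresponds to a maximally efficient
    unbiased estimator. `crb` must be strictly positive.
    """
    estimates = np.asarray(estimates, dtype=float)
    targets = np.asarray(targets, dtype=float)
    crb = np.asarray(crb, dtype=float)
    return np.mean((estimates - targets) ** 2 / crb)
```

In a training loop the same expression would be written in the framework's tensor operations; the normalization itself is the entire idea.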