Title: A Case Study of the Validity of Web-based Visuomotor Rotation Experiments
Abstract: Web-based experiments are gaining momentum in motor learning research because of the desire to increase statistical power, decrease overhead for human participant experiments, and utilize a more demographically inclusive sample population. However, there is a vital need to understand the general feasibility and considerations necessary to shift tightly controlled human participant experiments to an online setting. We developed and deployed an online experimental platform modeled after established in-laboratory visuomotor rotation experiments to serve as a case study examining remotely collected data quality for an 80-min experiment. Current online motor learning experiments have thus far not exceeded 60 min, and current online crowdsourced studies have a median duration of approximately 10 min. Thus, the impact of a longer-duration, web-based experiment is unknown. We used our online platform to evaluate perturbation-driven motor adaptation behavior under three rotation sizes (±10°, ±35°, and ±65°) and two sensory uncertainty conditions. We hypothesized that our results would follow predictions by the relevance estimation hypothesis. Remote execution allowed us to double (n = 49) the typical participant population size from similar studies. Subsequently, we performed an in-depth examination of data quality by analyzing single-trial data quality, participant variability, and potential temporal effects across trials. Results replicated in-laboratory findings and provided insight into the effect of induced sensory uncertainty on the relevance estimation hypothesis. Our experiment also highlighted several specific challenges associated with online data collection including potentially smaller effect sizes, higher data variability, and lower recommended experiment duration thresholds. Overall, online paradigms present both opportunities and challenges for future motor learning research.
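For readers unfamiliar with the paradigm, the core manipulation in a visuomotor rotation experiment is rotating the cursor feedback about the movement origin by a fixed angle while the participant reaches toward a target. Below is a minimal sketch of that transform in Python; the function and variable names are illustrative, not taken from the authors' platform.

```python
import numpy as np

def rotate_cursor(hand_xy, origin_xy, rotation_deg):
    """Apply a visuomotor rotation: rotate the cursor feedback about the
    movement origin by a fixed angle (hypothetical helper for illustration)."""
    theta = np.radians(rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return origin_xy + rot @ (hand_xy - origin_xy)

# Example: a 65 deg counterclockwise rotation applied to a rightward reach
origin = np.array([0.0, 0.0])
hand = np.array([10.0, 0.0])
print(rotate_cursor(hand, origin, 65.0))  # cursor lands up and to the right
```

With rotations of ±10°, ±35°, and ±65° as in the study, this angular mismatch between the hand and the displayed cursor is what drives the adaptation behavior being measured.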
Award ID(s):
1934792
PAR ID:
10475135
Author(s) / Creator(s):
Publisher / Repository:
DOI PREFIX: 10.1162
Date Published:
Journal Name:
Journal of Cognitive Neuroscience
Volume:
36
Issue:
1
ISSN:
0898-929X
Format(s):
Medium: X
Size(s):
p. 71-94
Sponsoring Org:
National Science Foundation
More Like this
  1. Human cortex is patterned by a complex and interdigitated web of large-scale functional networks. Recent methodological breakthroughs reveal variation in the size, shape, and spatial topography of cortical networks across individuals. While spatial network organization emerges across development, is stable over time, and is predictive of behavior, it is not yet clear to what extent genetic factors underlie interindividual differences in network topography. Here, leveraging a nonlinear multidimensional estimation of heritability, we provide evidence that individual variability in the size and topographic organization of cortical networks is under genetic control. Using twin and family data from the Human Connectome Project (n = 1,023), we find increased variability and reduced heritability in the size of heteromodal association networks (h²: M = 0.34, SD = 0.070), relative to unimodal sensory/motor cortex (h²: M = 0.40, SD = 0.097). We then demonstrate that the spatial layout of cortical networks is influenced by genetics, using our multidimensional estimation of heritability (h²-multi; M = 0.14, SD = 0.015). However, topographic heritability did not differ between heteromodal and unimodal networks. Genetic factors had a regionally variable influence on brain organization, such that the heritability of network topography was greatest in prefrontal, precuneus, and posterior parietal cortex. Taken together, these data are consistent with relaxed genetic control of association cortices relative to primary sensory/motor regions and have implications for understanding population-level variability in brain functioning, guiding both individualized prediction and the interpretation of analyses that integrate genetics and neuroimaging.
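The study's h²-multi is a nonlinear multidimensional estimator; for intuition only, here is a sketch of the classical univariate twin estimate (Falconer's formula, h² = 2(r_MZ − r_DZ)) applied to a scalar phenotype such as network size, on simulated data. All values and names are illustrative, not drawn from the Human Connectome Project sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_twins(n_pairs, r):
    """Draw phenotype pairs with cross-twin correlation r (simulated data)."""
    cov = np.array([[1.0, r], [r, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)

def falconer_h2(mz, dz):
    """Classical univariate twin estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
    r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)

# r_MZ = 0.5 and r_DZ = 0.3 imply h^2 = 0.4, on the scale of the
# unimodal-network estimate reported above
mz = simulate_twins(5000, 0.5)
dz = simulate_twins(5000, 0.3)
print(f"h2 estimate: {falconer_h2(mz, dz):.2f}")
```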
  2. Behavioral experiments with infants are generally costly, and developmental scientists often struggle with recruiting participants. Online experiments are an effective approach to address these issues, offering alternative routes to expanding sample sizes and accessing more diverse populations. However, data collection procedures for online experiments are not yet well established. Differences in procedures between laboratory and online experiments can lead to further issues, such as decreased data quality and the need for additional preprocessing. Moreover, data collection platforms for non-English-speaking participants remain scarce. This article introduces the Japanese version of Lookit, a platform dedicated to online looking-time experiments with infants. Lookit is integrated into Children Helping Science, a broader platform for online developmental studies operated by the Massachusetts Institute of Technology (Cambridge, MA, USA). In addition, we review the state of the art in automated gaze coding algorithms for infant studies and outline methodological considerations for researchers conducting online experiments. We hope this article will serve as a starting point for promoting online experiments with young children in Japan and contribute to a more robust developmental science.
  3. Randomized A/B tests within online learning platforms represent an exciting direction in learning sciences. With minimal assumptions, they allow causal effect estimation without confounding bias and exact statistical inference even in small samples. However, experimental samples and/or treatment effects are often small, A/B tests are underpowered, and effect estimates are overly imprecise. Recent methodological advances have shown that power and statistical precision can be substantially boosted by coupling design-based causal estimation to machine-learning models of rich log data from historical users who were not in the experiment. Estimates using these techniques remain unbiased, and inference remains exact, without any additional assumptions. This paper reviews those methods and applies them to a new dataset including over 250 randomized A/B comparisons conducted within ASSISTments, an online learning platform. We compare results across experiments using four novel deep-learning models of auxiliary data and show that incorporating auxiliary data into causal estimates is roughly equivalent to increasing the sample size by 20% on average, or as much as 50-80% in some cases, relative to t-tests, and by about 10% on average, or as much as 30-50%, compared to cutting-edge machine-learning unbiased estimates that use only data from the experiments. We show that the gains can be even larger for estimating subgroup effects, that they hold even when the remnant is unrepresentative of the A/B test sample, and that they extend to post-stratification population-effects estimators.
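The remnant-based adjustment described above can be sketched in a few lines: fit an outcome model only on historical users outside the experiment, then take the difference in mean residuals between arms. Because the predictions are independent of treatment assignment, the estimator remains unbiased while the model absorbs outcome variance. The linear model and simulated data below are hypothetical stand-ins for the paper's deep-learning models and ASSISTments logs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression  # stand-in for the deep models

rng = np.random.default_rng(0)
beta = np.array([1.0, -0.5, 0.25])

# Remnant: historical users with log features X and outcomes y (all simulated)
X_rem = rng.normal(size=(5000, 3))
y_rem = X_rem @ beta + rng.normal(size=5000)

# Small A/B sample with a true treatment effect of 0.3
n = 200
X = rng.normal(size=(n, 3))
z = rng.integers(0, 2, size=n)                 # random assignment
y = X @ beta + 0.3 * z + rng.normal(size=n)

# Fit on the remnant only, so predictions cannot depend on assignment
model = LinearRegression().fit(X_rem, y_rem)
resid = y - model.predict(X)

# Difference in mean residuals: still unbiased, but less noisy than the
# naive difference in means because the model explains outcome variance
naive = y[z == 1].mean() - y[z == 0].mean()
adjusted = resid[z == 1].mean() - resid[z == 0].mean()
print(f"naive: {naive:.3f}   remnant-adjusted: {adjusted:.3f}")
```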