Behavioral experiments with infants are generally costly, and developmental scientists often struggle to recruit participants. Online experiments are an effective way to address these issues, offering alternative routes to expand sample sizes and reach more diverse populations. However, data collection procedures for online experiments have not been sufficiently established. Differences in procedures between laboratory and online experiments can introduce further problems, such as decreased data quality and the need for additional preprocessing. Moreover, data collection platforms for non-English-speaking participants remain scarce. This article introduces the Japanese version of Lookit, a platform dedicated to online looking-time experiments for infants. Lookit is integrated into Children Helping Science, a broader platform for online developmental studies operated by the Massachusetts Institute of Technology (Cambridge, MA, USA). In addition, we review state-of-the-art automated gaze coding algorithms for infant studies and discuss methodological issues that researchers should consider when conducting online experiments. We hope this article will serve as a starting point for promoting online experiments with young children in Japan and contribute to a more robust developmental science.
-
Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months–3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos on distinguishing "LEFT" versus "RIGHT" and "ON" versus "OFF" looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.
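To make the evaluation setup concrete, the sketch below shows how per-frame agreement between human and automated gaze annotations might be scored, and how frame labels could be collapsed to a trial-level label. This is a minimal illustration assuming a four-class label set ("LEFT", "RIGHT", "ON", "OFF") and simple majority voting; the function names and data are hypothetical and are not taken from the iCatcher+ codebase.

```python
# Illustrative sketch (not the authors' code): scoring agreement between a
# human annotator's per-frame gaze labels and an automated system's labels.
from collections import Counter

# Assumed per-frame gaze classes, following the abstract's label names.
LABELS = {"LEFT", "RIGHT", "ON", "OFF"}

def frame_accuracy(human, model):
    """Fraction of frames where the automated label matches the human label."""
    if len(human) != len(model):
        raise ValueError("annotation streams must cover the same frames")
    matches = sum(h == m for h, m in zip(human, model))
    return matches / len(human)

def trial_majority(labels):
    """Collapse a trial's frame labels to the most frequent label."""
    return Counter(labels).most_common(1)[0][0]

# Toy example: one short trial annotated frame by frame.
human = ["LEFT", "LEFT", "RIGHT", "OFF", "LEFT", "LEFT"]
model = ["LEFT", "LEFT", "RIGHT", "ON", "LEFT", "LEFT"]

print(round(frame_accuracy(human, model), 3))  # frame-level agreement
print(trial_majority(model))                   # trial-level label
```

In practice, evaluation for this kind of system also stratifies agreement by participant demographics and video characteristics, as the abstract describes, rather than reporting a single overall accuracy.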