Title: Toward Predicting Infant Developmental Outcomes from Day-Long Inertial Motion Recordings
As improvements in medicine lower infant mortality rates, more infants with neuromotor challenges survive past birth. The motor, social, and cognitive development of these infants is closely interrelated, and challenges in any of these areas can lead to developmental differences. Thus, analyzing one of these domains, the motion of young infants, can yield insights into developmental progress and help identify individuals who would benefit most from early interventions. In the presented data collection, we gathered day-long inertial motion recordings from N = 12 typically developing (TD) infants and N = 24 infants who were classified as at risk for developmental delays (AR) due to complications at or before birth. As a first research step, we used simple machine learning methods (decision trees, k-nearest neighbors, and support vector machines) to classify infants as TD or AR based on their movement recordings and demographic data. Our next aim was to predict future outcomes for the AR infants using the same simple classifiers trained on the same movement recordings and demographic data. We achieved 94.4% overall accuracy in classifying infants as TD or AR, and 89.5% overall accuracy in predicting future outcomes for the AR infants. The addition of inertial data was much more important for accurately predicting future outcomes than for identifying current status. This work is an important step toward helping stakeholders monitor the developmental progress of AR infants and identify the infants who may be at the greatest risk for ongoing developmental challenges.
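
The abstract above describes training simple classifiers on day-long inertial recordings plus demographic data. As a hedged illustration only (not the authors' released code), the sketch below shows how such a TD/AR classification could be set up in scikit-learn with the same classifier families named in the abstract; the feature matrix, hyperparameters, and cross-validation scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): TD vs. AR classification from
# summary features of day-long inertial recordings plus demographics.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder design matrix: one row per infant. Columns might be per-limb
# movement counts, mean/variance of acceleration magnitude, and demographic
# variables such as age at recording (all hypothetical features).
X = rng.normal(size=(36, 10))       # 12 TD + 24 AR infants, as in the study
y = np.array([0] * 12 + [1] * 24)   # 0 = TD, 1 = AR

classifiers = {
    "decision tree": DecisionTreeClassifier(max_depth=3),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=6)  # small-sample cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```
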
Award ID(s):
1706964
NSF-PAR ID:
10162125
Journal Name:
IEEE Transactions on Neural Systems and Rehabilitation Engineering
ISSN:
1558-0210
Sponsoring Org:
National Science Foundation
More Like this
  1. There have been significant advances in technologies for robot-assisted lower-limb rehabilitation in the last decade. However, the development of similar systems for children has been slow, despite the fact that children with conditions such as cerebral palsy (CP), spina bifida (SB), and spinal cord injury (SCI) can benefit greatly from these technologies. Robot-assisted gait therapy (RAGT) has emerged as a way to increase gait training duration and intensity while decreasing the risk of injury to therapists. Robotic walking devices can be coupled with motion sensing, electromyography (EMG), scalp electroencephalography (EEG), or other noninvasive methods of acquiring information about the user’s intent in order to design Brain-Computer Interfaces (BCIs) for neuromuscular rehabilitation and control of powered exoskeletons. For users with SCI, BCIs could provide a method of overground mobility closer to the natural process of the brain controlling the body’s movement during walking than mobility by wheelchair. For adults, there are currently four FDA-approved lower-limb exoskeletons that could be incorporated into such a BCI system, but there are no similar devices specifically designed for children, who present additional physical, neurological, and cognitive developmental challenges. The current state of the art for pediatric RAGT relies on large clinical devices whose high costs limit accessibility. This can reduce the amount of therapy a child receives and slow rehabilitation progress. In many cases, lack of gait training can result in reduced mobility, independence, and overall quality of life for children with lower-limb disabilities. Thus, it is imperative to facilitate and accelerate the development of pediatric technologies for gait rehabilitation, including their regulatory path. In this paper, an overview of the U.S. Food and Drug Administration (FDA) clearance/approval process is presented, and an example device is used to navigate important questions facing device developers focused on providing lower-limb rehabilitation to children in home-based or other settings beyond the clinic.
  2. Objective. Maternal stress is a psychological response to the demands of motherhood. A high level of maternal stress is a risk factor for maternal mental health problems, including depression and anxiety, as well as adverse infant socioemotional and cognitive outcomes. Yet, levels of maternal stress (i.e., levels of stress related to parenting) among low-risk samples are rarely studied longitudinally, particularly in the first year after birth. Design. We measured maternal stress in an ethnically diverse sample of low-risk, healthy U.S. mothers of healthy infants (N = 143) living in South Florida across six time points between 2 weeks and 14 months postpartum using the Parenting Stress Index-Short Form, capturing stress related to the mother, mother-infant interactions, and the infant. Results. Maternal distress increased as infants aged for mothers with more than one child, but not for first-time mothers whose distress levels remained low and stable across this period. Stress related to mother-infant dysfunctional interactions lessened over the first 8 months. Mothers’ stress about their infants’ difficulties decreased from 2 weeks to 6 months, and subsequently increased from 6 to 14 months. Conclusions. Our findings suggest that maternal stress is dynamic across the first year after birth. The current study adds to our understanding of typical developmental patterns in early motherhood and identifies potential domains and time points as targets for future interventions. 
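
As a rough illustration of the kind of longitudinal analysis such a design implies (not the study's actual analysis code), the sketch below fits a linear mixed-effects model with a parity-by-time interaction to repeated Parenting Stress Index scores; the data file and column names are hypothetical.

```python
# Illustrative sketch only: repeated-measures model of maternal stress across
# postpartum time points. Columns and file name are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Expected long-format columns: mother_id, months_postpartum, firstborn (0/1),
# psi_distress (Parental Distress subscale score).
df = pd.read_csv("maternal_stress_long.csv")  # hypothetical file

model = smf.mixedlm(
    "psi_distress ~ months_postpartum * firstborn",  # fixed effects + interaction
    data=df,
    groups=df["mother_id"],                          # random intercept per mother
)
result = model.fit()
print(result.summary())
```
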
  3. The PoseASL dataset consists of color and depth videos collected from ASL signers at the Linguistic and Assistive Technologies Laboratory under the direction of Matt Huenerfauth, as part of a collaborative research project with researchers at the Rochester Institute of Technology, Boston University, and the University of Pennsylvania. Access: After becoming an authorized user of Databrary, please contact Matt Huenerfauth if you have difficulty accessing this volume. We have collected a new dataset consisting of color and depth videos of fluent American Sign Language signers performing sequences of ASL signs and sentences. Given interest among sign-recognition and other computer-vision researchers in red-green-blue-depth (RGBD) video, we release this dataset for use by the research community. In addition to the video files, we share depth data files from a Kinect v2 sensor, as well as additional motion-tracking files produced through post-processing of these data.

    Organization of the Dataset: The dataset is organized into sub-folders with codenames such as "P01" or "P16". These codenames refer to the specific human signers recorded in this dataset. Please note that there is no participant P11 or P14; those numbers were accidentally skipped while scheduling the recording sessions.

    Task: During the recording session, the participant was met by a member of our research team who was a native ASL signer. No other individuals were present during the data collection session. After signing the informed consent and video release document, participants responded to a demographic questionnaire. The data-collection session then consisted of English word stimuli and cartoon videos. The recording session began with slides displaying English words and photos of items, and participants were asked to produce the sign for each (PDF included in the materials subfolder). Next, participants viewed three videos of short animated cartoons, which they were asked to recount in ASL:
    - Canary Row, Warner Brothers Merrie Melodies 1950 (the 7-minute video divided into seven parts)
    - Mr. Koumal Flies Like a Bird, Studio Animovaneho Filmu 1969
    - Mr. Koumal Battles his Conscience, Studio Animovaneho Filmu 1971
    The word list and cartoons were selected because they are identical to the stimuli used in the collection of the Nicaraguan Sign Language video corpora; see Senghas, A. (1995). Children's Contribution to the Birth of Nicaraguan Sign Language. Doctoral dissertation, Department of Brain and Cognitive Sciences, MIT.

    Demographics: All 14 of our participants were fluent ASL signers. As screening, we asked our participants: "Did you use ASL at home growing up, or did you attend a school as a very young child where you used ASL?" All participants responded affirmatively. A total of 14 DHH participants were recruited on the Rochester Institute of Technology campus: 7 men and 7 women, aged 21 to 35 (median = 23.5). All of our participants reported that they began using ASL when they were 5 years old or younger, with 8 reporting ASL use since birth and 3 others reporting ASL use since age 18 months.

    Filetypes: *.avi, *_dep.bin: The PoseASL dataset was captured using a Kinect v2 RGBD camera. The output of this camera system includes multiple channels: RGB, depth, skeleton joints (25 joints for every video frame), and HD face (1,347 points). The video resolution is 1920 x 1080 pixels for the RGB channel and 512 x 424 pixels for the depth channel. Due to limitations on the filetypes that may be shared on Databrary, we could not post the binary *_dep.bin files produced directly by the Kinect v2 camera system on the Databrary platform. If your research requires the original binary *_dep.bin files, please contact Matt Huenerfauth. *_face.txt, *_HDface.txt, *_skl.txt: To make it easier for future researchers to use this dataset, we have also performed some post-processing of the Kinect data. To extract skeleton coordinates from the RGB videos, we used the OpenPose system, which is capable of detecting body, hand, facial, and foot keypoints of multiple people in single images in real time. The output of OpenPose includes an estimate of 70 keypoints for the face, including eyes, eyebrows, nose, mouth, and face contour. The software also estimates 21 keypoints for each hand (Simon et al., 2017), including 3 keypoints for each finger, as shown in Figure 2. Additionally, 25 keypoints are estimated for the body pose (and feet) (Cao et al., 2017; Wei et al., 2016).

    Reporting Bugs or Errors: Please contact Matt Huenerfauth to report any bugs or errors that you identify in the corpus. We appreciate your help in improving the quality of the corpus over time.

    Acknowledgement: This material is based upon work supported by the National Science Foundation under award 1749376: "Collaborative Research: Multimethod Investigation of Articulatory and Perceptual Constraints on Natural Language Evolution."
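
Because the exact on-disk layout of the post-processed keypoint text files is not specified above, the following is only a hedged sketch of a loader. It assumes one video frame per line of whitespace-separated (x, y, confidence) triples matching the OpenPose keypoint counts; adjust to the actual file format.

```python
# Hedged sketch for reading the *_skl.txt / *_face.txt keypoint files.
# The assumed line format (x y conf triples, one frame per line) is a guess.
import numpy as np

N_BODY_KEYPOINTS = 25  # OpenPose body (and feet) keypoints described above

def load_keypoint_file(path: str, n_keypoints: int = N_BODY_KEYPOINTS) -> np.ndarray:
    """Return an array of shape (n_frames, n_keypoints, 3) -> (x, y, confidence)."""
    frames = []
    with open(path) as f:
        for line in f:
            values = np.array(line.split(), dtype=float)
            if values.size != n_keypoints * 3:
                continue  # skip malformed or empty lines
            frames.append(values.reshape(n_keypoints, 3))
    return np.stack(frames) if frames else np.empty((0, n_keypoints, 3))

# Example (hypothetical path inside a participant sub-folder):
# body = load_keypoint_file("P01/cartoon1_skl.txt")
# print(body.shape)  # (n_frames, 25, 3)
```
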
  4. Early life adversity predicts shorter adult lifespan in several animal taxa. Yet, work on long-lived primate populations suggests the evolution of mechanisms that contribute to resiliency and long lives despite early life insults. Here, we tested associations between individual and cumulative early life adversity and lifespan in rhesus macaques at the Cayo Santiago Biological Field Station using 50 years of demographic data. We performed sex-specific survival analyses at different life stages to contrast short-term effects of adversity (i.e., infant survival) with long-term effects (i.e., adult survival). Female infants showed vulnerability to multiple adversities at birth, but affected females who survived to adulthood experienced a reduced risk later in life. In contrast, male infants showed vulnerability to a smaller number of adversities at birth, but those who survived to adulthood were negatively affected by both individual and cumulative early life adversity. Our study shows profound immediate effects of insults on female infant cohorts and suggests that affected female adults are more robust. In contrast, adult males who experienced harsh conditions early in life showed an increased mortality risk at older ages, as expected from hypotheses within the life course perspective. Our analysis suggests sex-specific selection pressures on life histories and highlights the need for studies addressing the effects of early life adversity across multiple life stages.
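
As a conceptual sketch of the sex-specific survival analyses described above (not the study's code), the example below fits a Cox proportional-hazards model relating cumulative early-life adversity to adult mortality separately for each sex, using the lifelines library; the data file and column names are hypothetical.

```python
# Conceptual sketch: sex-specific Cox proportional-hazards models for adult
# survival as a function of cumulative early-life adversity.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cayo_demography.csv")  # hypothetical demographic dataset
# Expected columns: sex, adult_years_observed, died (1/0), cumulative_adversity

for sex, group in df.groupby("sex"):
    cph = CoxPHFitter()
    cph.fit(
        group[["adult_years_observed", "died", "cumulative_adversity"]],
        duration_col="adult_years_observed",
        event_col="died",
    )
    print(f"--- {sex} ---")
    cph.print_summary()
```
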

     
  5. Inertial kinetics and kinematics have substantial influences on human biomechanical function. A new algorithm for Inertial Measurement Unit (IMU)-based motion tracking is presented in this work. The primary aims of this paper are to combine recent developments in improved biosensor technology with mainstream motion-tracking hardware to measure the overall performance of human movement based on joint axis-angle representations of limb rotation. This work describes an alternative approach to representing three-dimensional rotations: a normalized vector around which an identified joint angle defines the overall rotation, rather than a traditional Euler angle approach. Furthermore, IMUs allow the direct measurement of joint angular velocities, offering the opportunity to increase the accuracy of instantaneous axis of rotation estimates. Although the axis-angle representation requires vector quotient algebra (quaternions) to define rotation, this approach may be preferred for many graphics, vision, and virtual reality software applications. The analytical method was validated with laboratory data gathered from flexion and extension knee movements of an infant dummy leg and applied to a living subject’s upper limb movement. The results showed that the novel approach could reasonably handle a simple case and provide a detailed analysis of axis-angle migration. The described algorithm could play a notable role in the biomechanical analysis of human joints and points toward IMU-based biosensors that may detect pathological patterns of joint disease and injury.
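
As a minimal, generic illustration of the axis-angle idea discussed above (not the authors' full IMU algorithm), the sketch below converts a unit quaternion into a rotation axis and joint angle.

```python
# Generic quaternion-to-axis-angle conversion: recover a unit rotation axis and
# joint angle instead of Euler angles.
import numpy as np

def quaternion_to_axis_angle(q: np.ndarray) -> tuple[np.ndarray, float]:
    """Convert a unit quaternion (w, x, y, z) to (unit axis, angle in radians)."""
    q = q / np.linalg.norm(q)            # guard against drift from unit length
    w, v = q[0], q[1:]
    angle = 2.0 * np.arccos(np.clip(w, -1.0, 1.0))
    s = np.linalg.norm(v)
    if s < 1e-8:                         # near-zero rotation: axis is arbitrary
        return np.array([1.0, 0.0, 0.0]), 0.0
    return v / s, angle

# Example: a 90-degree knee flexion about the x-axis.
q = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
axis, angle = quaternion_to_axis_angle(q)
print(axis, np.degrees(angle))           # ~[1. 0. 0.], ~90 degrees
```
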

     