Title: Within-Person Temporal Associations Among Self-Reported Physical Activity, Sleep, and Well-Being in College Students
ABSTRACT Objective: A holistic understanding of the naturalistic dynamics among physical activity, sleep, emotions, and purpose in life as parts of a system reflecting wellness is key to promoting well-being. The main aim of this study was to examine the day-to-day dynamics within this wellness system. Methods: Using smartphone experience sampling over 28 days as college students (n = 226 young adults; mean [standard deviation] age = 20.2 [1.7] years) went about their daily lives, we collected self-reported emotions (happiness, sadness, anger, anxiousness) and physical activity periods twice per day, and daily reports of sleep and purpose in life. We examined day-to-day temporal and contemporaneous dynamics using multilevel vector autoregressive models that consider the wellness network as a whole. Results: Network analyses revealed that higher physical activity on a given day predicted an increase in happiness the next day. Higher sleep quality on a given night predicted a decrease in negative emotions the next day, and higher purpose in life predicted decreased negative emotions up to 2 days later. The nodes with the highest centrality were sadness, anxiety, and happiness in the temporal network, and purpose in life, anxiety, and anger in the contemporaneous network. Conclusions: Although the effects of sleep and physical activity on emotions and purpose in life may be shorter term, a sense of purpose in life is a critical component of wellness whose effects can last slightly longer, bleeding into the next few days. High-arousal emotions and purpose in life are central to motivating people into action, which can lead to behavior change.
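To make the modeling approach concrete, here is a minimal Python sketch (not the authors' code; node and column names are assumptions) of how the temporal network from a multilevel lag-1 vector autoregressive model can be approximated: each person-mean-centered node is regressed on the previous day's values of all nodes, with a random intercept per participant.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed node names; the data frame has one row per participant-day.
    NODES = ["happiness", "sadness", "anger", "anxiousness",
             "physical_activity", "sleep_quality", "purpose"]

    def temporal_network(df: pd.DataFrame) -> pd.DataFrame:
        """df: long format with columns 'id', 'day', and one column per node."""
        df = df.sort_values(["id", "day"]).copy()
        for v in NODES:
            # Person-mean centering isolates within-person (day-to-day) fluctuations.
            df[v + "_c"] = df[v] - df.groupby("id")[v].transform("mean")
            df[v + "_lag"] = df.groupby("id")[v + "_c"].shift(1)
        lag_cols = [v + "_lag" for v in NODES]
        d = df.dropna(subset=lag_cols)
        edges = {}
        for outcome in NODES:
            # One mixed model per node: today's value ~ yesterday's values of all nodes,
            # with a random intercept per participant.
            formula = outcome + "_c ~ " + " + ".join(lag_cols)
            fit = smf.mixedlm(formula, d, groups=d["id"]).fit()
            edges[outcome] = fit.params[lag_cols]
        # Rows: node at day t; columns: predictor node at day t-1.
        return pd.DataFrame(edges).T

In practice, full multilevel VAR models with random slopes and contemporaneous (residual) networks are typically estimated with dedicated tools such as the mlVAR package in R; the fixed-slope approximation above only illustrates the structure of the temporal network.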
Award ID(s):
2137511
PAR ID:
10483544
Author(s) / Creator(s):
Publisher / Repository:
LWW
Date Published:
Journal Name:
Psychosomatic Medicine
Volume:
85
Issue:
2
ISSN:
0033-3174
Page Range / eLocation ID:
141 to 153
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The paper reports ongoing research toward the design of multimodal affective pedagogical agents that are effective for different types of learners and applications. In particular, the work reported in the paper investigated the extent to which the type of character design (realistic versus stylized) affects students' perception of an animated agent's facial emotions, and whether the effects are moderated by learner characteristics (e.g., gender). Eighty-two participants viewed 10 animation clips featuring a stylized character exhibiting 5 different emotions (happiness, sadness, fear, surprise, and anger; 2 clips per emotion), and 10 clips featuring a realistic character portraying the same emotional states. The participants were asked to name the emotions and rate their sincerity, intensity, and typicality. The results indicated that participants were slightly more likely to recognize the emotions displayed by the stylized agent, although the difference was not statistically significant. The stylized agent was on average rated significantly higher for facial emotion intensity, whereas the differences in ratings for typicality and sincerity across all emotions were not statistically significant. Significant differences favoring the stylized agent were found for sadness (typicality), happiness (sincerity), and fear, anger, sadness, and happiness (intensity). Gender was not a significant correlate across all emotions or for individual emotions.
  2. This study examines the organizational dynamics of social media crowds, in particular, the influence of a crowd's emotional expression on its solidarity. To identify the relationship between the emotions expressed and solidarity, marked by sustained participation in the crowd, the study uses tweets from a unique population of crowds: those tweeting about ongoing National Football League games. Observing this population permits the use of game results as quasi-random treatments on crowds, helping to reduce confounding factors. Results indicate that participation in these crowds is self-sustaining in the medium term (1 week) and can be stimulated or suppressed by emotional expression in the short term (1 hour), depending on the discrete emotion expressed. In particular, anger encourages participation while sadness discourages it. Positive emotions and anxiety have a more nuanced relationship with participation.
  3. Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals. 
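The abstract does not name the metric used to separate ambient from focal viewing; one common choice in the eye-tracking literature is the coefficient K of Krejtz and colleagues, sketched below in Python purely as an illustration (the function and argument names are made up for the example, and this is not necessarily the authors' measure).

    import numpy as np

    def coefficient_k(fix_durations: np.ndarray, saccade_amplitudes: np.ndarray) -> float:
        """fix_durations[i] is the duration of fixation i; saccade_amplitudes[i] is the
        amplitude of the saccade immediately following fixation i. Positive values
        suggest focal viewing (long fixations, short saccades); negative values suggest
        ambient viewing."""
        d = (fix_durations - fix_durations.mean()) / fix_durations.std(ddof=1)
        a = (saccade_amplitudes - saccade_amplitudes.mean()) / saccade_amplitudes.std(ddof=1)
        return float(np.mean(d - a))  # mean per-fixation difference of z-scores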
  4. Introduction: This study assesses the person-specific impact of extreme heat on low-income households using wearable sensors. The focus is on intensive, longitudinal assessment of physical activity and sleep as person-specific ambient temperature rises. Methods: This study recruited 30 participants in a low-income and predominantly Black community in Houston, Texas, in August and September of 2022. Each participant wore a wrist accelerometer that recorded person-specific ambient temperature, sedentary behavior, physical activity intensity (low and moderate to vigorous), and sleep efficiency 24 hours a day over 14 days. Mixed effects models were used to analyze associations among physical activity, sleep, and person-specific ambient temperature. Results: The main findings are increased sedentary time and impaired sleep as person-level ambient temperature rose, and a mitigating role of air conditioning (AC). Conclusions: Extreme heat negatively affects physical activity and sleep. The negative consequences are especially critical for those with limited use of AC in lower-income neighborhoods of color. Staying home in a high indoor temperature during hot days can lead to various adverse health outcomes, including accelerated cognitive decline, higher cancer risk, and social isolation.
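A minimal Python sketch of the kind of mixed effects model the methods describe: a daily outcome regressed on person-specific ambient temperature with a random intercept per participant. The column names and the AC interaction term are illustrative assumptions, not the authors' specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    # df: one row per participant-day with assumed columns 'id', 'ambient_temp_c',
    # 'sedentary_minutes', 'sleep_efficiency', and 'ac_use' (0/1).
    def fit_heat_model(df: pd.DataFrame, outcome: str = "sedentary_minutes"):
        # Random intercept per participant; the interaction asks whether AC buffers the heat effect.
        model = smf.mixedlm(outcome + " ~ ambient_temp_c * ac_use", df, groups=df["id"])
        return model.fit()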
  5. JMIR (Ed.)
    Psychotherapy, particularly for youth, is a pressing challenge in the health care system. Traditional methods are resource-intensive, and there is a need for objective benchmarks to guide therapeutic interventions. Automated emotion detection from speech, using artificial intelligence, presents an emerging approach to address these challenges. Speech can carry vital information about emotional states, which can be used to improve mental health care services, especially when the person is suffering. Objective: This study aims to develop and evaluate automated methods for detecting the intensity of emotions (anger, fear, sadness, and happiness) in audio recordings of patients' speech. We also demonstrate the viability of deploying the models. Our model was validated in a previous publication by Alemu et al. with limited voice samples; this follow-up study used significantly more voice samples to validate the previous model. Methods: We used audio recordings of patients, specifically children with high adverse childhood experience (ACE) scores; the average ACE score was 5 or higher, placing them at the highest risk for chronic disease and social or emotional problems, whereas only 1 in 6 people have a score of 4 or above. Structured voice samples were collected by having patients read a fixed script. In total, 4 highly trained therapists classified audio segments, scoring the intensity level of each of the 4 emotions. We experimented with various preprocessing methods, including denoising, voice-activity detection, and diarization. Additionally, we explored various model architectures, including convolutional neural networks (CNNs) and transformers. We trained emotion-specific transformer-based models and a generalized CNN-based model to predict emotion intensities. Results: The emotion-specific transformer-based model achieved test-set precision and recall of 86% and 79%, respectively, for binary emotional intensity classification (high or low). In contrast, the CNN-based model, generalized to predict the intensity of the 4 different emotions, achieved test-set precision and recall of 83% each. Conclusions: Automated emotion detection from patients' speech using artificial intelligence models is feasible and achieves a high level of accuracy. The transformer-based model exhibited better performance in emotion-specific detection, while the CNN-based model showed promise in generalized emotion detection. These models can serve as valuable decision-support tools for pediatricians and mental health providers to triage youth to appropriate levels of mental health care services.
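As a rough illustration of the generalized CNN approach (a sketch under assumed sample rate, clip length, and layer sizes, not the authors' architecture), the Python snippet below classifies a speech clip's log-mel spectrogram into high or low emotional intensity.

    import torch
    import torch.nn as nn
    import torchaudio

    SAMPLE_RATE = 16_000
    N_MELS = 64

    class IntensityCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.melspec = torchaudio.transforms.MelSpectrogram(
                sample_rate=SAMPLE_RATE, n_mels=N_MELS)
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, waveform: torch.Tensor) -> torch.Tensor:
            # waveform: (batch, samples), mono audio at SAMPLE_RATE
            logmel = torch.log1p(self.melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
            return self.classifier(self.features(logmel).flatten(1))   # (batch, n_classes)

    # Shape check with a 3-second silent clip.
    logits = IntensityCNN()(torch.zeros(1, SAMPLE_RATE * 3))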