Title: Analyzing Interactions in Paired Egocentric Videos
As wearable devices become more popular, egocentric information recorded with these devices can be used to better understand the behaviors of the wearer and of the people the wearer interacts with. Data obtained from such devices, such as voice, head movement, and galvanic skin response (GSR) as a measure of arousal, can provide a window into the underlying affect of both the wearer and their conversational partner. In this study, we examine the characteristics of two types of dyadic conversations: in one, the interlocutors discuss a topic on which they agree; in the other, they discuss a topic on which they disagree, even if they are friends. The topics are mostly political. The egocentric information is collected using a pair of wearable smart glasses for video data and a smart wristband for physiological data, including GSR. From this data, various features are extracted, including the facial expressions of the conversational partner and the 3D motion of the wearer's camera within the environment, termed egomotion. The goal of this work is to investigate whether the nature of a discussion is better determined by evaluating the behavior of an individual in the conversation or by evaluating the pairing/coupling of the behaviors of the two participants. The pairing is accomplished using a modified formulation of the dynamic time warping (DTW) algorithm. A random forest classifier is used to evaluate the nature of the interaction (agreement versus disagreement) from individual and paired features separately. Given the limited data used in this work, individual behaviors were slightly more indicative of the type of discussion (85.43% accuracy) than the paired behaviors (83.33% accuracy).
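The pipeline described above couples each feature channel across the two interlocutors and feeds the result to a random forest. The following minimal Python sketch illustrates that idea, using standard DTW in place of the authors' modified formulation; the feature channels, array shapes, and data are hypothetical placeholders rather than the study's actual setup.

```python
# Minimal sketch: pair two interlocutors' feature time series with DTW and
# classify the interaction with a random forest. Standard DTW is used here;
# the paper describes a modified DTW formulation not reproduced in this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Standard DTW alignment cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def paired_features(seq_a: np.ndarray, seq_b: np.ndarray) -> np.ndarray:
    """One DTW coupling score per feature channel (e.g., GSR, egomotion)."""
    return np.array([dtw_distance(seq_a[:, k], seq_b[:, k])
                     for k in range(seq_a.shape[1])])

# Hypothetical data: one (time_steps x channels) array per participant.
rng = np.random.default_rng(0)
conversations = [(rng.standard_normal((120, 4)), rng.standard_normal((120, 4)))
                 for _ in range(20)]
labels = rng.integers(0, 2, size=20)          # 0 = agreement, 1 = disagreement

X = np.stack([paired_features(a, b) for a, b in conversations])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```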
Award ID(s):
2223507
PAR ID:
10493847
Author(s) / Creator(s):
; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE International Conference on Automatic Face and Gesture Recognition (FG)
ISBN:
979-8-3503-4544-5
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Location:
Waikoloa Beach, HI, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Peterson, G; Shenoi, S (Ed.)
    The Spectacles wearable smart glasses device from Snapchat records snaps and videos for the Snapchat service. A Spectacles device can sync data with a paired smartphone and upload recorded content to a user’s online account. However, extracting and analyzing data from a Snapchat app is challenging due to the disappearing nature of the media. Very few commercial tools are available to obtain data from Snapchat apps. This chapter focuses on the extraction and analysis of artifacts from Snapchat and, specifically, Spectacles devices paired with Apple iPhones. A methodology is presented for forensically imaging Apple iPhones before and after critical points in the Spectacles and Snapchat pairing and syncing processes. The forensic images are examined to reveal the effects of each step of the pairing process. Several photos, videos, thumbnails and metadata files originating from Spectacles devices were obtained and tied to specific times, devices and locations. The research provides interesting insights into evidence collection from Spectacles devices paired with Apple iPhones. 
  2.
    Augmentative and alternative communication (AAC) devices enable speech-based communication. However, AAC devices do not support nonverbal communication, which allows people to take turns, regulate conversation dynamics, and express intentions. Nonverbal communication requires motion, which is often challenging for AAC users to produce due to motor constraints. In this work, we explore how socially assistive robots, framed as ''sidekicks,'' might provide augmented communicators (ACs) with a nonverbal channel of communication to support their conversational goals. We developed and conducted an accessible co-design workshop that involved two ACs, their caregivers, and three motion experts. We identified goals for conversational support, co-designed prototypes depicting possible sidekick forms, and enacted different sidekick motions and behaviors to achieve speakers' goals. We contribute guidelines for designing sidekicks that support ACs according to three key parameters: attention, precision, and timing. We show how these parameters manifest in appearance and behavior and how they can guide future designs for augmented nonverbal communication. 
    Background: Internet data can be used to improve infectious disease models. However, the representativeness and individual-level validity of internet-derived measures are largely unexplored, as studying them requires ground truth data. Objective: This study sought to identify relationships between web-based behaviors and/or conversation topics and health status using a ground truth, survey-based dataset. Methods: The study leveraged a unique dataset of self-reported surveys, microbiological laboratory tests, and social media data from the same individuals to assess the validity of individual-level constructs pertaining to influenza-like illness in social media data. Logistic regression models were used to identify illness in Twitter posts using user posting behaviors and topic model features extracted from users' tweets. Results: Of 396 original study participants, only 81 met the inclusion criteria for this study. Among these participants' tweets, we identified only two instances that were related to health and occurred within 2 weeks (before or after) of a survey indicating symptoms. It was not possible to predict when participants reported symptoms using features derived from topic models (area under the curve [AUC]=0.51; P=.38), though it was possible using behavior features, albeit with a very small effect size (AUC=0.53; P≤.001). Individual symptoms were also generally not predictable. The study sample and a random sample from Twitter are predictably different on held-out data (AUC=0.67; P≤.001), meaning that the content posted by people who participated in this study was predictably different from that posted by random Twitter users. Individuals in the random sample and the GoViral sample used Twitter with similar frequencies (similar @ mentions, numbers of tweets, and numbers of retweets; AUC=0.50; P=.19). Conclusions: To our knowledge, this is the first attempt to use a ground truth dataset to validate infectious disease observations in social media data. The lack of signal, the lack of predictability from behaviors or topics, and the demonstrated volunteer bias in the study population are important findings for the large and growing body of disease surveillance using internet-sourced data. (A minimal illustrative sketch of this type of classifier evaluation appears after this list.)
    The development of digital instruments for mental health monitoring using biosensor data from wearable devices can enable remote, longitudinal, and objective quantitative benchmarks. To survey developments and trends in this field, we conducted a systematic review of artificial intelligence (AI) models using data from wearable biosensors to predict mental health conditions and symptoms. Following PRISMA guidelines, we identified 48 studies using a variety of wearable and smartphone biosensors including heart rate, heart rate variability (HRV), electrodermal activity/galvanic skin response (EDA/GSR), and digital proxies for biosignals such as accelerometry, location, audio, and usage metadata. We observed several technical and methodological challenges across studies in this field, including lack of ecological validity, data heterogeneity, small sample sizes, and battery drain. We outline several corresponding opportunities for advancement in the field of AI-driven biosensing for mental health.
  5. Trunk exoskeletons are wearable devices that support wearers during physically demanding tasks by reducing biomechanical loads and increasing stability. In this paper, we present a prototype sensorized passive trunk exoskeleton, which includes five motion processing units (3-axis accelerometers and gyroscopes with onboard digital processing), four one-axis flex sensors along the exoskeletal spinal column, and two one-axis force sensors for measuring the interaction force between the wearer and exoskeleton. A pilot evaluation of the exoskeleton was conducted with two wearers, who performed multiple everyday tasks (sitting on a chair and standing up, walking in a straight line, picking up a box with a straight back, picking up a box with a bent back, bending forward while standing, bending laterally while standing) while wearing the exoskeleton. Illustrative examples of the results are presented as graphs. Finally, potential applications of the sensorized exoskeleton as the basis for a semi-active exoskeleton design or for audio/haptic feedback to guide the wearer are discussed. 
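Related record 3 above evaluates logistic regression classifiers with the area under the ROC curve (AUC) on held-out data. The sketch below shows that general evaluation pattern in Python; the feature matrix, labels, and train/test split are synthetic placeholders and are not drawn from the GoViral dataset or the original study.

```python
# Minimal sketch of the evaluation pattern in related record 3: fit a logistic
# regression on per-user features (e.g., posting behavior or topic proportions)
# and report held-out AUC. Synthetic data stands in for the study's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_users, n_features = 81, 10                  # e.g., 10 topic proportions per user
X = rng.random((n_users, n_features))
y = rng.integers(0, 2, size=n_users)          # 1 = reported symptoms, 0 = otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]      # probability of the positive class
print(f"held-out AUC = {roc_auc_score(y_test, scores):.2f}")
```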