This study investigates the presence of dynamical patterns of interpersonal coordination in extended deceptive conversations across multimodal channels of behavior. Using a novel "devil's advocate" paradigm, we experimentally elicited deception and truth across topics in which conversational partners either agreed or disagreed, and where one partner was surreptitiously asked to argue an opinion opposite to what he or she really believed. We focus on interpersonal coordination as an emergent behavioral signal that captures interdependencies between conversational partners, both as the coupling of head movements over the span of milliseconds, measured via a windowed lagged cross-correlation (WLCC) technique, and as more global temporal dependencies across speech rate, measured using cross-recurrence quantification analysis (CRQA). Moreover, we considered how interpersonal coordination might be shaped by strategic, adaptive conversational goals associated with deception. We found that deceptive conversations displayed more structured speech rate and higher head-movement coordination, the latter peaking in deceptive disagreement conversations. Together, the results allow us to posit an adaptive account, whereby interpersonal coordination is not beholden to any single functional explanation but can strategically adapt to diverse conversational demands.
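As a concrete illustration of the windowed lagged cross-correlation measure mentioned above, the following Python sketch slides a window over one partner's head-movement series and correlates it with lagged windows of the other partner's series. It is a minimal, generic implementation: the window length, lag range, and step size are placeholders rather than the study's parameters, and the CRQA analysis of speech rate is not reproduced here.

    import numpy as np

    def windowed_lagged_xcorr(x, y, window, max_lag, step=1):
        """Windowed lagged cross-correlation (WLCC) between two movement series.

        Slides a window over x and correlates it with lagged copies of y,
        returning the lag axis and a (n_windows, n_lags) matrix of Pearson
        correlations. Window length, lag range, and step are placeholders,
        not the parameters used in the study.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        lags = np.arange(-max_lag, max_lag + 1)
        starts = np.arange(0, len(x) - window + 1, step)
        wlcc = np.full((len(starts), len(lags)), np.nan)
        for i, s in enumerate(starts):
            xs = x[s:s + window]
            for j, lag in enumerate(lags):
                lo, hi = s + lag, s + lag + window
                if lo < 0 or hi > len(y):
                    continue                      # lagged window out of range
                ys = y[lo:hi]
                if xs.std() > 0 and ys.std() > 0:
                    wlcc[i, j] = np.corrcoef(xs, ys)[0, 1]
        return lags, wlcc

The peak correlation within each row of the resulting matrix then indicates the momentary strength and lead-lag direction of coordination for that window.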
Dynamic Mode Decomposition with Control as a Model of Multimodal Behavioral Coordination
Observing how infants and mothers coordinate their behaviors can highlight meaningful patterns in early communication and infant development. While dyads often differ in the modalities they use to communicate, especially in the first year of life, it remains unclear how to capture coordination across multiple types of behaviors using existing computational models of interpersonal synchrony. This paper explores Dynamic Mode Decomposition with control (DMDc) as a method of integrating multiple signals from each communicating partner into a model of multimodal behavioral coordination. We used an existing video dataset to track the head pose, arm pose, and vocal fundamental frequency of infants and mothers during the Face-to-Face Still-Face (FFSF) procedure, a validated 3-stage interaction paradigm. For each recorded interaction, we fit both unimodal and multimodal DMDc models to the extracted pose data. The resulting dynamic characteristics of the models were analyzed to evaluate trends in individual behaviors and dyadic processes across infant age and stages of the interactions. Results demonstrate that observed trends in interaction dynamics across stages of the FFSF protocol were stronger and more significant when models incorporated both head and arm pose data, rather than a single behavior modality. Model output showed significant trends across age, identifying changes in infant movement and in the relationship between infant and mother behaviors. Models that included mothers’ audio data demonstrated similar results to those evaluated with pose data, confirming that DMDc can leverage different sets of behavioral signals from each interacting partner. Taken together, our results demonstrate the potential of DMDc toward integrating multiple behavioral signals into the measurement of multimodal interpersonal coordination.
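To make the modeling step concrete, the sketch below shows the core of a DMDc fit: a discrete-time linear model x(t+1) ≈ A x(t) + B u(t) estimated by least squares from snapshot matrices. Treating the infant's stacked pose features as the state X and the mother's signals as the control input U is an assumption for illustration; the paper's preprocessing, any rank truncation, and the exact multimodal feature sets are not reproduced here.

    import numpy as np

    def fit_dmdc(X, U):
        """Least-squares fit of x_{t+1} ~= A x_t + B u_t (the core of DMDc).

        X: (n_states, T) snapshots of one partner's signals over time
           (e.g., infant head and arm pose, stacked row-wise).
        U: (n_inputs, T) snapshots of the other partner's signals treated
           as a control input (e.g., mother pose and/or vocal f0).
        """
        X1, X2 = X[:, :-1], X[:, 1:]              # states at t and t+1
        Omega = np.vstack([X1, U[:, :-1]])        # stacked [state; input]
        G = X2 @ np.linalg.pinv(Omega)            # [A B] by least squares
        n = X.shape[0]
        A, B = G[:, :n], G[:, n:]
        eigvals = np.linalg.eigvals(A)            # dynamic characteristics of A
        return A, B, eigvals

The eigenvalues of A (and, for example, the norm of B) can then be compared across FFSF stages or infant ages, in the spirit of the dynamic characteristics analyzed in the paper.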
- Award ID(s): 1706964
- PAR ID: 10354821
- Date Published:
- Journal Name: ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Van Den Heuvel, M.; Wass, S. V. (Eds.) During everyday interactions, mothers and infants achieve behavioral synchrony at multiple levels. The ebb and flow of mother-infant physical proximity may be a central type of synchrony that establishes a common ground for infant-mother interaction. However, the role of proximity in language exchanges is relatively unstudied, perhaps because structured tasks, the common setup for observing infant-caregiver interactions, establish proximity by design. We video-recorded 100 mothers (U.S. Hispanic N = 50, U.S. Non-Hispanic N = 50) and their 13- to 23-month-old infants during natural activity at home (1 to 2 h per dyad), transcribed mother and infant speech, and coded proximity continuously (i.e., infant and mother within arm's reach). In both samples, dyads entered proximity in a bursty temporal pattern, with bouts of proximity interspersed with bouts of physical distance. As hypothesized, Non-Hispanic and Hispanic mothers produced more words and a greater variety of words when within arm's reach than out of arm's reach. Similarly, infants produced more utterances that contained words when close to their mother than when not. However, infants babbled equally often regardless of proximity, generating abundant opportunities to play with sounds. Physical proximity expands opportunities for language exchanges and infants' communicative word use, although babies accumulate massive practice babbling even when caregivers are not proximal.
-
Objective. Maternal stress is a psychological response to the demands of motherhood. A high level of maternal stress is a risk factor for maternal mental health problems, including depression and anxiety, as well as adverse infant socioemotional and cognitive outcomes. Yet, levels of maternal stress (i.e., levels of stress related to parenting) among low-risk samples are rarely studied longitudinally, particularly in the first year after birth. Design. We measured maternal stress in an ethnically diverse sample of low-risk, healthy U.S. mothers of healthy infants (N = 143) living in South Florida across six time points between 2 weeks and 14 months postpartum using the Parenting Stress Index-Short Form, capturing stress related to the mother, mother-infant interactions, and the infant. Results. Maternal distress increased as infants aged for mothers with more than one child, but not for first-time mothers, whose distress levels remained low and stable across this period. Stress related to mother-infant dysfunctional interactions lessened over the first 8 months. Mothers' stress about their infants' difficulties decreased from 2 weeks to 6 months, and subsequently increased from 6 to 14 months. Conclusions. Our findings suggest that maternal stress is dynamic across the first year after birth. The current study adds to our understanding of typical developmental patterns in early motherhood and identifies potential domains and time points as targets for future interventions.
-
Agents must monitor their partners' affective states continuously in order to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants' affective states contribute to their ability to participate in a therapeutic leg movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. Next, we conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time as the models encountered missing data and changing infant affect. During time windows when features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, multimodal models outperformed unimodal models when evaluated on the entire dataset. Additionally, model performance was weakest when predicting an affective state transition and improved after multiple predictions of the same affective state. These findings emphasize the benefits of incorporating body features in continuous affect recognition for infants. Our work highlights the importance of evaluating variability in model performance both over time and in the presence of missing data when applying affect recognition to social interactions. (A sketch of this kind of transition-aware evaluation appears after this list.)
-
Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with and enhance human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents the analysis of eye-gaze tracking data from five 6- to 8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants' head-mounted camera and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated to be more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
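To make the notion of low-level surprise concrete, the sketch below computes Bayesian surprise as the KL divergence between a Dirichlet belief over discretized visual features before and after observing the current frame. The Dirichlet-categorical feature model and the binning are illustrative assumptions, not the specific surprise model evaluated in that study.

    import numpy as np
    from scipy.special import digamma, gammaln

    def dirichlet_kl(alpha_post, alpha_prior):
        """KL divergence KL( Dir(alpha_post) || Dir(alpha_prior) )."""
        a0_post, a0_prior = alpha_post.sum(), alpha_prior.sum()
        return (gammaln(a0_post) - gammaln(a0_prior)
                - np.sum(gammaln(alpha_post) - gammaln(alpha_prior))
                + np.sum((alpha_post - alpha_prior)
                         * (digamma(alpha_post) - digamma(a0_post))))

    def bayesian_surprise(prior_counts, observed_bin):
        """Surprise of one observation under a Dirichlet-categorical model.

        prior_counts: Dirichlet pseudo-counts over discretized visual features
        (e.g., a histogram of local intensities); observed_bin: index of the
        feature bin seen in the current frame. Feature choice and binning are
        illustrative assumptions, not the study's specific surprise model.
        """
        alpha_prior = np.asarray(prior_counts, dtype=float)
        alpha_post = alpha_prior.copy()
        alpha_post[observed_bin] += 1.0          # Bayesian update on one frame
        return dirichlet_kl(alpha_post, alpha_prior)

    # Example: uniform prior over 16 feature bins; the new frame falls in bin 3
    print(bayesian_surprise(np.ones(16), observed_bin=3))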
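The affect-recognition evaluation described earlier in this list breaks performance down by its timing relative to affective-state transitions. As referenced there, the sketch below groups prediction accuracy by how many consecutive samples the true state has persisted; variable names and the sampling granularity are assumptions, not that paper's evaluation code.

    import numpy as np

    def accuracy_by_run_position(y_true, y_pred, max_pos=5):
        """Accuracy grouped by how long the true affective state has persisted.

        Position 0 marks a transition sample (the true label just changed);
        higher positions mean the same state has held for more consecutive
        samples. A minimal sketch of a transition-aware breakdown.
        """
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        pos = np.zeros(len(y_true), dtype=int)
        for t in range(1, len(y_true)):
            pos[t] = pos[t - 1] + 1 if y_true[t] == y_true[t - 1] else 0
        correct = (y_true == y_pred)
        return {p: float(correct[pos == p].mean())
                for p in range(max_pos + 1) if np.any(pos == p)}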