Title: Understanding the visual perception of awkward body movements: How interactions go awry
Dyadic interactions can sometimes elicit a disconcerting response from viewers, generating a sense of “awkwardness.” Despite the ubiquity of awkward social interactions in daily life, it remains unknown what visual cues signal the oddity of human interactions and yield the subjective impression of awkwardness. In the present experiments, we focused on a range of greeting behaviors (handshake, fist bump, high five) to examine both the degree of consensus in awkwardness judgments and the impact of contextual and kinematic information on the social evaluation of awkwardness. In Experiment 1, participants discriminated whether greeting behaviors presented in raw videos were awkward or natural and, for greetings judged awkward, provided verbal descriptions of the behavior. Participants showed consensus in judging awkwardness from raw videos, with a high proportion of congruent responses across a range of awkward greeting behaviors, and they used both social-related and motor-related words to describe awkward interactions. Experiment 2 employed advanced computer vision techniques to present the same greeting behaviors in three display types, all of which preserved kinematic information but varied contextual information: (1) patch displays presented blurred scenes composed of patches; (2) body displays presented human body figures on a black background; and (3) skeleton displays presented skeletal figures of moving bodies. Participants rated the degree of awkwardness of the greeting behaviors. Across display types, participants consistently discriminated awkward from natural greetings, indicating that the kinematics of body movements plays an important role in guiding awkwardness judgments. A multidimensional scaling analysis based on the similarity of awkwardness ratings revealed two primary cues: motor coordination (which accounted for most of the variability in awkwardness judgments) and social coordination. We conclude that the perception of awkwardness, while primarily inferred from kinematic information, is additionally affected by the perceived social coordination underlying human greeting behaviors.
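The multidimensional scaling (MDS) analysis mentioned in the abstract embeds stimuli in a low-dimensional space based on rating similarity. A minimal sketch of that style of analysis follows, assuming a hypothetical participants-by-clips rating matrix and a 1 minus Pearson correlation dissimilarity measure; it is an illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical data: 30 participants rate the awkwardness of 24 greeting
# clips on a 1-7 scale (all sizes and names here are illustrative).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(30, 24)).astype(float)

# Dissimilarity between clips: 1 minus the Pearson correlation of their
# rating profiles across participants.
dissimilarity = 1.0 - np.corrcoef(ratings.T)

# Embed clips in 2-D so inter-point distances approximate dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

# The recovered axes are then interpreted post hoc, e.g., one as motor
# coordination and the other as social coordination.
print(coords.shape)  # (24, 2)
```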
Award ID(s):
1655300
PAR ID:
10166378
Author(s) / Creator(s):
Date Published:
Journal Name:
Attention, Perception, & Psychophysics
ISSN:
1943-3921
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors with two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts and the similarity to these coordinations can act as a measure of learning over the course of training for novices. The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
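The abstract above says expert coordinations can be quantified and that similarity to them can serve as a measure of novice learning. One minimal way to picture this (a sketch under assumed data shapes, not the authors' method) is to take the dominant principal axis of the joint-angle trajectories as the coordination and score a novice by cosine similarity to the expert's axis:

```python
import numpy as np

def coordination_vector(joint_angles: np.ndarray) -> np.ndarray:
    """First principal axis of joint-angle trajectories (time x joints),
    used here as a simple stand-in for a kinematic coordination behavior."""
    centered = joint_angles - joint_angles.mean(axis=0)
    # SVD of the centered trajectories; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def coordination_similarity(novice: np.ndarray, expert: np.ndarray) -> float:
    """Absolute cosine similarity between coordination vectors
    (the sign of a principal axis is arbitrary)."""
    return float(abs(novice @ expert) /
                 (np.linalg.norm(novice) * np.linalg.norm(expert)))

# Hypothetical data: 200 time samples of 4 exoskeleton joint angles.
rng = np.random.default_rng(1)
expert_traj = rng.standard_normal((200, 4))
novice_traj = expert_traj + 0.5 * rng.standard_normal((200, 4))

sim = coordination_similarity(coordination_vector(novice_traj),
                              coordination_vector(expert_traj))
print(f"coordination similarity: {sim:.2f}")  # would rise toward 1 with learning
```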
  2.
    Objective: We controlled participants’ glance behavior while using head-down displays (HDDs) and head-up displays (HUDs) to isolate driving behavioral changes due to use of different display types across different driving environments. Background: Recently, HUD technology has been incorporated into vehicles, allowing drivers to, in theory, gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers’ visual attention and driving performance. Yet no studies have isolated glance from driving behaviors, which limits our ability to understand the cause of these differences and the resulting impact on display design. Method: We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving in both a simple, straight, empty roadway environment and a more realistic driving environment that included traffic and turns. Results: In the realistic environment, but not the simpler one, we found evidence of differing driving behaviors between display conditions, even though participants’ glance behavior was similar. Conclusion: Thus, the assumption that visual attention can be evaluated in the same way for different types of vehicle displays may be inaccurate. Differences between driving environments call into question the validity of testing HUDs in simplistic driving environments. Application: As we move toward integrating HUD user interfaces into vehicles, it is important to develop new, sensitive assessment methods to ensure that HUD interfaces are indeed safe for driving.
  3.
    The human ability to use different tools demonstrates our capability of forming and maintaining multiple, context-specific motor memories. Experimentally, this has been investigated in dual adaptation, where participants adjust their reaching movements to opposing visuomotor transformations. Adaptation in these paradigms occurs by distinct processes, such as strategies for each transformation or the implicit acquisition of distinct visuomotor mappings. Although distinct, transformation-dependent aftereffects have been interpreted as support for the latter, they could reflect adaptation of a single visuomotor map, which is locally adjusted in different regions of the workspace. Indeed, recent studies suggest that explicit aiming strategies direct where in the workspace implicit adaptation occurs, thus potentially serving as a cue to enable dual adaptation. Disentangling these possibilities is critical to understanding how humans acquire and maintain motor memories for different skills and tools. We therefore investigated generalization of explicit and implicit adaptation to untrained movement directions after participants practiced two opposing cursor rotations, which were associated with the visual display being presented in the left or right half of the screen. Whereas participants learned to compensate for opposing rotations by explicit strategies specific to this visual workspace cue, aftereffects were not cue-sensitive. Instead, aftereffects displayed bimodal generalization patterns that appeared to reflect locally limited learning of both transformations. By varying target arrangements and instructions, we show that these patterns are consistent with implicit adaptation that generalizes locally around movement plans associated with opposing visuomotor transformations. Our findings show that strategies can shape implicit adaptation in a complex manner. NEW & NOTEWORTHY: Visuomotor dual adaptation experiments have identified contextual cues that enable learning of separate visuomotor mappings, but the underlying representations of learning are unclear. We report that visual workspace separation as a contextual cue enables the compensation of opposing cursor rotations by a combination of explicit and implicit processes: learners developed context-dependent explicit aiming strategies, whereas an implicit visuomotor map represented dual adaptation independently of arbitrary context cues, by local adaptation around the explicit movement plan.
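The bimodal generalization pattern described above can be pictured as implicit adaptation that generalizes locally, often modeled as a Gaussian, around each of the two explicit movement plans. The toy model below makes that concrete; the amplitude, width, and plan directions are illustrative assumptions, not values fitted in the study.

```python
import numpy as np

def implicit_generalization(target_deg: np.ndarray,
                            plan_a_deg: float,
                            plan_b_deg: float,
                            amplitude: float = 10.0,
                            sigma: float = 30.0) -> np.ndarray:
    """Predicted aftereffect (deg) at each probe target: the sum of two
    Gaussians centered on the explicit movement plans used to counter the
    opposing rotations, with opposite signs for the opposing transformations."""
    bump_a = amplitude * np.exp(-(target_deg - plan_a_deg) ** 2 / (2 * sigma ** 2))
    bump_b = -amplitude * np.exp(-(target_deg - plan_b_deg) ** 2 / (2 * sigma ** 2))
    return bump_a + bump_b

targets = np.arange(-180, 181, 15, dtype=float)
aftereffects = implicit_generalization(targets, plan_a_deg=-45.0, plan_b_deg=45.0)

# Two opposite-signed local extrema -> a bimodal generalization pattern.
print(targets[np.argmax(aftereffects)], targets[np.argmin(aftereffects)])
```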
  4.
    Humans can operate a variety of modern tools, which are often associated with different visuomotor transformations. Studies investigating this ability have shown that separate motor memories can be acquired implicitly when different sensorimotor transformations are associated with distinct (intended) postures, or explicitly when abstract contextual cues are leveraged by aiming strategies. It remains unclear how different transformations are remembered implicitly when postures are similar. We investigated whether features of planning to manipulate a visual tool, such as its visual identity or the environmental effect intended by its use (i.e., its action effect), would enable implicit learning of opposing visuomotor rotations. Results show that neither contextual cue led to distinct implicit motor memories; rather, cues affected implicit adaptation only indirectly, through generalization around explicit strategies. In contrast, a control experiment in which participants practiced opposing transformations with different hands did result in contextualized aftereffects that differed between hands across generalization targets. It appears that different (intended) body states are necessary for separate aftereffects to emerge, suggesting that the role of sensory prediction error-based adaptation may be limited to the recalibration of a body model, whereas establishing separate tool models may proceed along a different route.
  5. Touch plays a vital role in maintaining human relationships through social and emotional communication. The proposed haptic display prototype generates vibrotactile and thermal stimuli to simulate social touch cues between remote users. High-dimensional spatiotemporal vibrotactile-thermal (vibrothermal) patterns were evaluated with ten participants. The device can be operated wirelessly to enable remote communication. In the future, such patterns could be used to richly simulate social touch cues. The study was conducted in two parts: first, the identification accuracy of vibrothermal patterns was explored; second, the relatability of vibrothermal patterns to social touch experienced during social interactions was evaluated. Results revealed that while complex patterns were difficult to identify, simpler patterns, such as SINGLE TAP and HOLD, were highly identifiable and highly relatable to social touch cues. Directional patterns were less identifiable and less relatable to the social touch cues experienced during social interaction.
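The SINGLE TAP and HOLD patterns named above are spatiotemporal sequences of vibrotactile and thermal commands. A minimal sketch of how such patterns might be represented follows; the frame fields, actuator indices, intensities, temperatures, and the play stand-in are all hypothetical, not the prototype's actual interface.

```python
from dataclasses import dataclass

@dataclass
class VibrothermalFrame:
    """One step of a spatiotemporal pattern (all fields hypothetical)."""
    actuators: tuple[int, ...]   # indices of active vibrotactile motors
    vibration: float             # normalized vibration intensity, 0-1
    temperature_c: float         # target thermal-element temperature, Celsius
    duration_ms: int             # how long this frame is held

# Illustrative encodings of the two highly identifiable patterns named above.
SINGLE_TAP = [
    VibrothermalFrame(actuators=(0,), vibration=0.8,
                      temperature_c=33.0, duration_ms=150),
]
HOLD = [
    VibrothermalFrame(actuators=(0, 1, 2), vibration=0.4,
                      temperature_c=35.0, duration_ms=1500),
]

def play(pattern: list[VibrothermalFrame]) -> None:
    # Stand-in for wireless transmission to the wearable device.
    for frame in pattern:
        print(f"drive {frame.actuators} @ {frame.vibration:.1f}, "
              f"{frame.temperature_c:.1f} C for {frame.duration_ms} ms")

play(SINGLE_TAP)
```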