-
Inferring emotions from others’ non-verbal behavior is a pervasive and fundamental task in social interactions. Typically, real-life encounters imply the co-location of interactants, i.e., their embodiment within a shared spatial-temporal continuum in which the trajectories of the interaction partner’s Expressive Body Movement (EBM) create mutual social affordances. Shared Virtual Environments (SVEs) and Virtual Characters (VCs) are increasingly used to study social perception, allowing researchers to reconcile experimental stimulus control with ecological validity. However, it remains unclear whether display modalities that enable co-presence affect observers’ responses to VCs’ expressive behaviors. Drawing upon ecological approaches to social perception, we reasoned that sharing space with a VC should amplify affordances compared to a screen display and consequently alter observers’ perceptions of EBM in terms of judgment certainty, hit rates, perceived expressive qualities (arousal and valence), and resulting approach and avoidance tendencies. In a between-subjects design, we compared the perception of 54 10-s animations of VCs performing three daily activities (painting, mopping, sanding) in three emotional states (angry, happy, sad), displayed either in 3D as a co-located VC moving in shared space or as a 2D replay on a screen that was also placed in the SVE. Results confirm effective experimental control of the variable of interest: perceived co-presence was significantly affected by the display modality, whereas perceived realism and immersion showed no difference; spatial presence and social presence showed marginal effects. The display modality had only a minimal effect on emotion perception: a weak effect emerged for the expression “happy,” for which unbiased hit rates were higher in the 3D condition. Importantly, hit rates were low for all three emotion categories. Nevertheless, observers’ judgments correlated significantly for category assignment and across all rating dimensions, indicating universal decoding principles. Moreover, although category assignment was often erroneous, ratings of valence and arousal were consistent with expectations derived from emotion theory. The study demonstrates the value of animated VCs in emotion perception studies and raises new questions regarding the validity of category-based emotion recognition measures.
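For readers unfamiliar with the measure, the following minimal Python sketch shows how unbiased hit rates (Wagner, 1993) are commonly computed from a stimulus-by-response confusion matrix; the counts and category labels below are hypothetical and are not taken from the study.

import numpy as np

# Rows: presented emotion (angry, happy, sad); columns: observer response.
# Counts are invented for illustration only.
confusion = np.array([
    [10,  5,  3],   # angry stimuli
    [ 4, 12,  2],   # happy stimuli
    [ 6,  4,  8],   # sad stimuli
], dtype=float)

row_totals = confusion.sum(axis=1)   # stimuli presented per category
col_totals = confusion.sum(axis=0)   # responses given per category
correct = np.diag(confusion)         # correct identifications

# Unbiased hit rate: Hu_i = correct_i**2 / (row_total_i * col_total_i)
hu = correct**2 / (row_totals * col_totals)
print(dict(zip(["angry", "happy", "sad"], hu.round(3))))

Because the squared correct count is divided by both the number of stimuli in a category and the number of times the corresponding response was used, the index penalizes observers who obtain raw hits merely by overusing a response category.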
-
Event-related potentials (ERPs) capture neural responses to media stimuli with split-second resolution, opening the door to examining how attention modulates the reception process. However, the relatively high cost and difficulty of incorporating ERP methods have prevented broader adoption. This study tested the potential of a new mobile, relatively easy-to-mount, and highly affordable device for electroencephalography (EEG) measurement – the Muse EEG system – combined with a free, open-source platform for ERP recording and analysis. Specifically, we compared ERPs elicited by affective visual stimuli representative of the kind of engaging content that pervades modern social media. Our results confirm that the Muse system provides robust visual ERPs that were highly reliable across two samples. Although there was no difference between ERPs to moderately positive and neutral stimuli in the expected time windows (200–300 ms, 400–600 ms), an exploratory analysis provided some evidence for differential processing of positive versus neutral images at the right temporal sensor site (TP10). Additionally, a compliance-gaining manipulation in the participant instructions significantly improved data quality. These results support the use of the Muse EEG system in large-scale studies examining brain responses to screen media. They also suggest an easy social influence tactic that can enhance data quality as communication neuroscience is scaled up. The availability of a mobile EEG system for 250 USD makes it possible to incorporate neuroimaging into various communication paradigms beyond visual communication.
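As an illustration of the time-window analysis described above, the sketch below averages epoched EEG data within the 200–300 ms and 400–600 ms windows at a single sensor. The sampling rate, epoch length, channel index, and simulated data are assumptions made for the demonstration and do not reflect the study's actual recording or analysis pipeline.

import numpy as np

# Assumed setup: epoched data as an array of shape (n_trials, n_channels, n_samples),
# sampled at 256 Hz, with epochs spanning roughly -200 to +800 ms.
sfreq = 256.0
tmin = -0.2
rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 4, int(sfreq * 1.0)))   # fake data: 40 trials, 4 channels
times = tmin + np.arange(epochs.shape[-1]) / sfreq

def mean_amplitude(epochs, times, t_start, t_end, channel):
    """Mean amplitude of the trial-averaged ERP in a time window at one channel."""
    window = (times >= t_start) & (times < t_end)
    erp = epochs.mean(axis=0)            # average over trials -> (channels, samples)
    return erp[channel, window].mean()

# Mean amplitudes in the two windows of interest, at a hypothetical channel
# index standing in for TP10.
print(mean_amplitude(epochs, times, 0.200, 0.300, channel=3))
print(mean_amplitude(epochs, times, 0.400, 0.600, channel=3))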
-
The current paper addresses two methodological problems pertinent to the analysis of observer studies in nonverbal rapport and beyond. These problems concern (1) the production of standardized stimulus materials that allow for unbiased observer ratings and (2) the objective measurement of nonverbal behaviors to identify the dyadic patterns underlying the observer impressions. We suggest motion capture and character animation as possible solutions to these problems and apply the novel methodology, by way of example, to the study of gender and cultural differences in nonverbal rapport. We compared a Western, individualistic culture with an egalitarian gender-role conception (Germany) and a collectivistic culture with a more traditional gender-role conception (Middle East, Gulf States). Motion capture data were collected for five male and five female dyadic interactions in each culture. Character animations based on the motion capture data served as stimuli in the observation study. Female and male observers from both cultures rated the perceived rapport continuously while watching the 1-min sequences and guessed the gender and cultural background of the dyads after each clip. Results show that masking of gender and culture in the stimuli was successful, as hit rates for both aspects remained at chance level. Further, the results revealed high levels of agreement in the rapport ratings across gender and culture, pointing to universal judgment policies. A 2 × 2 × 2 × 2 ANOVA for gender and culture of stimuli and observers showed that female dyads were rated significantly higher on rapport across the board and that the contrast between female and male dyads was more pronounced in the Arab sample than in the German sample. Nonverbal parameters extracted from the motion capture protocols were submitted to a series of algorithms to identify dyadic activity levels and coordination patterns relevant to the perception of rapport. The results are critically discussed with regard to the role of nonverbal coordination as a constituent of rapport.
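The abstract does not detail the coordination algorithms. Purely as an illustration of one plausible approach, the Python sketch below derives a per-frame activity series for each interaction partner from motion-capture joint positions and computes their lagged cross-correlation; all array shapes and values are invented for the example and do not represent the authors' pipeline.

import numpy as np

def activity(positions):
    """Frame-to-frame movement magnitude, summed over joints.
    positions: array of shape (n_frames, n_joints, 3)."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=2).sum(axis=1)

def lagged_xcorr(a, b, max_lag):
    """Pearson correlation of two activity series at lags -max_lag..+max_lag (frames)."""
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]
        elif lag > 0:
            x, y = a[lag:], b[:-lag]
        else:
            x, y = a, b
        corrs[lag] = np.corrcoef(x, y)[0, 1]
    return corrs

# Fake data: 60 s at 30 fps, 20 joints per person (illustrative values only).
rng = np.random.default_rng(1)
p1 = rng.normal(size=(1800, 20, 3)).cumsum(axis=0)
p2 = rng.normal(size=(1800, 20, 3)).cumsum(axis=0)
xc = lagged_xcorr(activity(p1), activity(p2), max_lag=60)   # lags up to about 2 s
print(max(xc, key=xc.get), max(xc.values()))

Peaks at non-zero lags would indicate that one partner's movement activity tends to lead the other's, which is one common way of operationalizing dyadic coordination.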
