We explored how people perceive animations created by three methods: artificial intelligence (AI)-driven motion capture, manual keyframing, and AI-driven motion capture with manual cleanup. We presented participants with short, full-body animation clips created using the three methods. Participants rated the appeal and naturalness of the animations, and we asked them to discern the creation method. Results revealed differences in perceived appeal and naturalness between manually created animations and those generated through AI-based methods, with manual animations consistently rated higher on both measures. However, participants could not discern the creation methods regardless of their animation experience, achieving an accuracy equivalent to random guessing. The qualitative analysis highlighted diverse perspectives, with both negative and positive views on AI use; the most frequently mentioned theme was the importance of quality regardless of creation method. The overwhelming majority of participants asserted that the degree of automation would influence the perceived value of an animation and the effort attributed to it. Still, this group did not show divergent ratings, nor did this belief affect their overall agreeableness towards using AI in creative fields. This study contributes insights into the intersection of animation and AI, informing creators about the effects of different creation methods on audience perception.
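The claim that discernment accuracy was "equivalent to random guessing" is the kind of result typically checked with a two-sided binomial test against the chance rate. The sketch below is purely illustrative: the trial counts are hypothetical (the abstract does not report them), and it uses the normal approximation to the binomial rather than any specific statistics package.

```python
import math

def binomial_two_sided_p(k, n, p):
    """Two-sided p-value for k successes in n trials under chance rate p,
    using the normal approximation to the binomial distribution."""
    mean = n * p
    sd = math.sqrt(n * p * (1.0 - p))
    z = (k - mean) / sd
    # Phi(|z|) via the error function, then the two-sided tail probability.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical numbers: 300 discernment trials, 3 creation methods
# (chance rate = 1/3), 104 correct identifications.
p_value = binomial_two_sided_p(104, 300, 1.0 / 3.0)
# A large p-value means accuracy is statistically indistinguishable
# from random guessing.
```

With counts this close to the chance expectation of 100/300, the p-value is far above conventional significance thresholds, which is what "equivalent to random guessing" describes.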
Perceived Naturalness of Interpolation Methods for Character Upper Body Animation.
We compare the perceived naturalness of character animations generated using three interpolation methods: linear Euler, spherical linear quaternion, and spherical spline quaternion. While previous work focused on the mathematical description of these interpolation types, our work studies the perceptual evaluation of animated upper body character gestures generated using these interpolations. Ninety-seven participants watched 12 animation clips of a character performing four different upper body motions: a beat gesture, a deictic gesture, an iconic gesture, and a metaphoric gesture. Three animation clips were generated for each gesture using the three interpolation methods. The participants rated their naturalness on a 5-point Likert scale. The results showed that animations generated using spherical spline quaternion interpolation were perceived as significantly more natural than those generated using the other two interpolation methods. The findings held true for all subjects regardless of gender and animation experience and across all four gestures.
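Of the three interpolation methods compared, spherical linear quaternion interpolation (slerp) is the building block: it rotates at constant angular velocity along the shortest great-circle arc between two orientations. A minimal illustrative implementation (not the study's code) looks like this:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1.

    Quaternions are length-4 arrays (w, x, y, z); t is in [0, 1].
    """
    q0 = np.asarray(q0, dtype=float) / np.linalg.norm(q0)
    q1 = np.asarray(q1, dtype=float) / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    # Take the shorter arc: q and -q represent the same rotation.
    if dot < 0.0:
        q1, dot = -q1, -dot
    # For nearly parallel quaternions, fall back to normalized lerp.
    if dot > 0.9995:
        result = q0 + t * (q1 - q0)
        return result / np.linalg.norm(result)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))  # angle between quaternions
    return (np.sin((1.0 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

# Interpolate halfway between the identity and a 90° rotation about z;
# the result is a 45° rotation about z.
q_id = [1.0, 0.0, 0.0, 0.0]
q_z90 = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]
mid = slerp(q_id, q_z90, 0.5)
```

Spherical spline quaternion interpolation (squad) composes nested slerps so that angular velocity stays continuous across keyframes rather than changing abruptly at each key, which plausibly underlies its naturalness advantage in the study.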
- Award ID(s): 1821894
- PAR ID: 10341368
- Date Published:
- Journal Name: Advances in Visual Computing. ISVC 2021. Lecture Notes in Computer Science
- Volume: 13017
- Page Range / eLocation ID: 103–115
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The goal of this research is to develop Animated Pedagogical Agents (APA) that can convey clearly perceivable emotions through speech, facial expressions and body gestures. In particular, the two studies reported in the paper investigated the extent to which modifications to the range of movement of 3 beat gestures, namely a both-arms synchronous outward gesture, a both-arms synchronous forward gesture, and an upper body lean, and the agent's gender have significant effects on viewers' perception of the agent's emotion in terms of valence and arousal. For each gesture, the range of movement was varied at 2 discrete levels. The stimuli of the studies were two sets of 12-second animation clips generated using fractional factorial designs; in each clip, an animated agent who speaks and gestures gives a lecture segment on binomial probability. Half of the clips featured a female agent and half featured a male agent. In the first study, which used a within-subject design and metric conjoint analysis, 120 subjects were asked to watch 8 stimulus clips and rank them according to perceived valence and arousal (from highest to lowest). In the second study, which used a between-subject design, 300 participants were assigned to two groups of 150 subjects each. One group watched 8 clips featuring the male agent and one group watched 8 clips featuring the female agent. Each participant was asked to rate perceived valence and arousal for each clip using a 7-point Likert scale. Results from the two studies suggest that the more open and forward the gestures the agent makes, the higher the perceived valence and arousal. Surprisingly, agents who lean their body forward more are not perceived as having higher arousal and valence. Findings also show that female agents' emotions are perceived as having higher arousal and more positive valence than male agents' emotions.
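Fractional factorial designs like those used to generate the stimulus sets cover a subset of all factor combinations while keeping the factor levels balanced. As a generic illustration only (the paper does not specify its factors or defining relation), a two-level half-fraction for four factors, which yields 8 runs, can be generated like this:

```python
from itertools import product

# Illustrative 2^(4-1) half-fraction design: four two-level factors,
# with the fourth factor set by the defining relation D = A*B*C.
# Hypothetical sketch; not the design actually used in the studies.
levels = [-1, 1]
runs = []
for a, b, c in product(levels, repeat=3):
    runs.append((a, b, c, a * b * c))  # D aliased with the ABC interaction
```

The half-fraction halves the number of stimulus clips needed (8 instead of 16) at the cost of aliasing the fourth factor with the three-way interaction, which is usually assumed negligible.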
-
The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most of the research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures and that can react to the students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2). The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether there are differences in recognition based on the agent's visual style (e.g., stylized versus realistic).
The objectives of the second study were to investigate (1) whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; (2) the extent to which the presence of combined micro- and macro-expressions affects the perceived expressivity and naturalness of the animated agents; (3) the extent to which exaggerating the micro-expressions (i.e., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and (4) whether there are differences based on the agent's design characteristics. In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happy, sad, fear, surprised). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). Four animations for each agent featured the character performing macro-expressions only, four featured the character performing macro- plus micro-expressions without exaggeration, and four featured the agent performing macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.
References
Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., & Taiar, R. (Eds.), Intelligent Human Systems Integration 2021. IHSI 2021. Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33
Ekman, P., & Friesen, W. V. (1969, February). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575
Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/
-
Soddu, Celestino; Colabella, Enrica (Ed.) Artificial Intelligence (AI) is reshaping the landscape of animation and visual effects (VFX), challenging traditional production workflows while enabling novel creative possibilities. This paper critically examines the influence of AI on animation and VFX by reviewing three recent projects that explore points of intersection between automation and artistic practice. The first study compares four facial animation methods, namely skeleton, blendshape, audio-driven, and vision-based capture, and finds that artist-authored methods outperform automated techniques in both naturalness and emotion recognition, underscoring the continuing value of human artistry in expressive animation. The second study investigates audience perceptions of character body animations produced with different degrees of automation: manual keyframing, fully AI-driven Motion Capture (MoCap), and AI-driven MoCap refined with human cleanup. It reveals that while viewers consistently rated manually created animations higher in perceived appeal and naturalness, they could not reliably discern which animations were generated by AI. The third project proposes a novel VFX compositing technique that integrates AI-assisted colour edge extension and automated matte cleanup to streamline green-screen workflows, demonstrating how machine learning can enhance efficiency while preserving artistic control. Collectively, these projects highlight the complex interplay between automation, creativity, and perception in the emerging era of AI-driven production.
-
In this paper, the authors explore different approaches to animating 3D facial emotions, some of which use manual keyframe animation and some of which use machine learning. To compare approaches, the authors conducted an experiment consisting of side-by-side comparisons of animation clips generated by skeleton, blendshape, audio-driven, and vision-based capture facial animation techniques. Ninety-five participants viewed twenty facial animation clips of characters expressing five distinct emotions (anger, sadness, happiness, fear, neutral), which were created using the four different facial animation techniques. After viewing each clip, the participants were asked to identify the emotions that the characters appeared to be conveying and rate their naturalness. Findings showed that the naturalness ratings of the happy emotion produced by the four methods tended to be consistent, whereas the naturalness ratings of the fear emotion created with skeletal animation were significantly higher than those of the other methods. Recognition of the sad and neutral emotions was very low for all methods as compared to the other emotions. Overall, the skeleton approach had significantly higher naturalness ratings and a higher recognition rate than the other methods.