Abstract: The positivity principle states that people learn better from instructors who display positive emotions rather than negative emotions. In two experiments, students viewed a short video lecture on a statistics topic in which an instructor stood next to a series of slides as she lectured, and then they took either an immediate test (Experiment 1) or a delayed test (Experiment 2). In a between-subjects design, students saw an instructor who used her voice, body movement, gesture, facial expression, and eye gaze to display one of four emotions while lecturing: happy (positive/active), content (positive/passive), frustrated (negative/active), or bored (negative/passive). First, learners were able to recognize the emotional tone of the instructor in the video lecture, rating a positive instructor more strongly as displaying positive emotions and a negative instructor more strongly as displaying negative emotions (Experiments 1 and 2). Second, concerning social connection during learning, learners rated a positive instructor as more likely to facilitate learning, more credible, and more engaging than a negative instructor (Experiments 1 and 2). Third, concerning cognitive engagement during learning, learners reported paying more attention to a positive instructor than to a negative instructor (Experiments 1 and 2). Finally, concerning learning outcomes, learners who had a positive instructor scored higher than learners who had a negative instructor on a delayed posttest (Experiment 2) but not on an immediate posttest (Experiment 1). Overall, there is evidence for the positivity principle and for the cognitive-affective model of e-learning from which it is derived.
                            The Power of Voice to Convey Emotion in Multimedia Instructional Messages
                        
                    
    
This study examines an aspect of the role of emotion in multimedia learning: whether participants can recognize an instructor’s positive or negative emotion just as well from short clips of the instructor’s voice alone as from clips that also show an embodied onscreen agent. Participants viewed 16 short video clips from a statistics lecture in which an animated instructor, conveying a happy, content, frustrated, or bored emotion, stands next to a slide as she lectures (agent present) or is heard only through her voice (agent absent). For each clip, participants rated on five-point scales how happy, content, frustrated, and bored the instructor seemed. First, for happy, content, and bored instructors, participants rated emotional tone just as accurately from voice alone as from voice plus onscreen agent. This supports the voice hypothesis, which posits that voice is a powerful source of social-emotional information. Second, participants rated happy and content instructors higher on the happy and content scales and rated frustrated and bored instructors higher on the frustrated and bored scales. This supports the positivity hypothesis, which posits that people are particularly sensitive to the positive or negative tone of multimedia instructional messages.
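To make the comparison behind the voice hypothesis concrete, the sketch below shows one way ratings like these could be aggregated: for each clip, take the rating on the scale that matches the portrayed emotion, then compare mean matching-scale ratings for voice-only versus voice-plus-agent clips. This is a hypothetical illustration with made-up values, not the authors’ analysis code; the column names and data are assumptions.

```python
import pandas as pd

# Hypothetical ratings: each row is one participant's rating of one clip.
# Columns and values are illustrative, not the study's actual data.
ratings = pd.DataFrame({
    "modality": ["voice_only", "voice_only", "agent_present", "agent_present"],
    "portrayed": ["happy", "bored", "happy", "bored"],
    # Rating (1-5) on the scale that matches the portrayed emotion.
    "matching_scale_rating": [4, 4, 5, 3],
})

# Mean matching-scale rating per condition: similar means across the two
# modalities within an emotion would be consistent with the voice hypothesis.
summary = (ratings
           .groupby(["portrayed", "modality"])["matching_scale_rating"]
           .mean())
print(summary)
```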
- Award ID(s): 1821894
- PAR ID: 10341385
- Date Published:
- Journal Name: International Journal of Artificial Intelligence in Education
- ISSN: 1560-4292
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In general, people tend to identify the emotions of others from their facial expressions; however, recent findings suggest that we may be more accurate when we hear someone’s voice than when we look only at their facial expression. The study reported in this paper examined whether these findings hold true for animated agents. A total of 37 subjects participated: 19 males, 14 females, and 4 of non-specified gender. Subjects viewed 18 video stimuli; 9 clips featured a male agent and 9 clips a female agent. Each agent showed 3 different facial expressions (happy, angry, neutral), each paired with 3 different voice lines spoken in three different tones (happy, angry, neutral). Hence, in some clips the agent’s tone of voice and facial expression were congruent, while in others they were not. Subjects answered questions about the emotion they believed the agent was feeling and rated the emotion’s intensity, typicality, and sincerity. Findings showed that emotion recognition rate and ratings of emotion intensity, typicality, and sincerity were highest when the agent’s face and voice were congruent. However, when the channels were incongruent, subjects identified the emotion more accurately from the agent’s facial expression than from the tone of voice.
- This study examined how well people can recognize and relate to animated pedagogical agents of varying ethnicities/races and genders. In both Study 1 (realistic-style agents) and Study 2 (cartoon-style agents), participants viewed brief video clips of virtual agents of varying racial/ethnic categories and gender types, then identified each agent’s race/ethnicity and gender and rated how human-like and likable the agent appeared. Participants were highly accurate in identifying Black and White agents but were less accurate for Asian, Indian, and Hispanic agents. Participants were accurate in recognizing gender differences. Participants rated all types of agents as moderately human-like, except for White agents. Likability ratings were lowest for White and male agents. The same pattern of results was obtained across two independent studies with different participants and different onscreen agents, which indicates that the results are not solely due to one specific set of agents. Consistent with the Media Equation Hypothesis and the Alliance Hypothesis, this work shows that people are sensitive to the race/ethnicity and gender of onscreen agents and relate to them differently. These findings have implications for how to design animated pedagogical agents for improved multimedia learning environments and serve as a crucial first step in highlighting the possibility and feasibility of incorporating diverse onscreen virtual agents into educational software.
- The present study compares how individuals perceive gradient acoustic realizations of emotion produced by a human voice versus an Amazon Alexa text-to-speech (TTS) voice. We manipulated semantically neutral sentences spoken by both talkers with identical emotional synthesis methods, using three levels of increasing ‘happiness’ (0%, 33%, 66% ‘happier’). On each trial, listeners (native speakers of American English, n=99) rated a given sentence on two scales assessing dimensions of emotion: valence (negative-positive) and arousal (calm-excited). Participants also rated the Alexa voice on several parameters assessing anthropomorphism (e.g., naturalness, human-likeness). Results showed that the emotion manipulations led to increases in perceived positive valence and excitement. Yet the effect differed by interlocutor: increasing ‘happiness’ manipulations led to larger changes for the human voice than for the Alexa voice. Additionally, we observed individual differences in perceived valence/arousal based on participants’ anthropomorphism scores. Overall, this line of research can speak to theories of computer personification and elucidate our changing relationship with voice-AI technology.
- The paper reports ongoing research toward the design of multimodal affective pedagogical agents that are effective for different types of learners and applications. In particular, the work reported here investigated the extent to which the type of character design (realistic versus stylized) affects students’ perception of an animated agent’s facial emotions, and whether the effects are moderated by learner characteristics (e.g., gender). Eighty-two participants viewed 10 animation clips featuring a stylized character exhibiting 5 different emotions (happiness, sadness, fear, surprise, and anger; 2 clips per emotion) and 10 clips featuring a realistic character portraying the same emotional states. Participants were asked to name the emotions and rate their sincerity, intensity, and typicality. The results indicated that participants were slightly more likely to recognize the emotions displayed by the stylized agent, although the difference was not statistically significant. The stylized agent was on average rated significantly higher for facial emotion intensity, whereas the differences in ratings for typicality and sincerity across all emotions were not statistically significant. Significant differences favoring the stylized agent were found for sadness (within typicality), happiness (within sincerity), and fear, anger, sadness, and happiness (within intensity). Gender was not a significant correlate across all emotions or for individual emotions.