Much prior work on creating social agents that assist users relies on preconceived assumptions about what it means to be helpful. For example, it is common to assume that a helpful agent simply assists with achieving a user's objective. However, as assistive agents become more widespread, human-agent interactions may become more ad hoc, creating opportunities for unexpected agent assistance. How would this affect human notions of an agent's helpfulness? To investigate this question, we conducted an exploratory study (N=186) in which participants interacted with agents displaying unexpected, assistive behaviors in a Space Invaders game, and we studied factors that may influence perceived helpfulness in these interactions. Our results challenge the idea that human perceptions of the helpfulness of unexpected agent assistance can be derived from a universal, objective definition of help. We also found that humans will reciprocate unexpected assistance but might not always recognize that they are in fact helping an agent. Based on our findings, we recommend considering personalization and adaptation when designing future assistive behaviors for prosocial agents that may try to help users in unexpected situations.
Nonverbal Human Signals Can Help Autonomous Agents Infer Human Preferences for Their Behavior
An overarching goal of Artificial Intelligence (AI) is creating autonomous, social agents that help people. Two important challenges, though, are that different people prefer different assistance from agents and that preferences can change over time. Thus, helping behaviors should be tailored to how an individual feels during the interaction. We hypothesize that human nonverbal behavior can give clues about users' preferences for an agent's helping behaviors, augmenting an agent's ability to computationally predict such preferences with machine learning models. To investigate our hypothesis, we collected data from 194 participants via an online survey in which participants were recorded while playing a multiplayer game. We evaluated whether the inclusion of nonverbal human signals, as well as additional context (e.g., via game or personality information), led to improved prediction of user preferences between agent behaviors compared to explicitly provided survey responses. Our results suggest that nonverbal communication, a common type of human implicit feedback, can aid in understanding how people want computational agents to interact with them.
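To make the evaluated comparison concrete, the sketch below (not the paper's actual pipeline; all feature names, shapes, and data are hypothetical placeholders) shows how one might test whether adding nonverbal and contextual features to explicit survey responses improves prediction of a user's preferred agent behavior, using cross-validated classifiers.

```python
# Minimal sketch (hypothetical features; not the paper's pipeline): compare how
# well different feature sets predict which agent behavior a user prefers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 194  # matches the study's sample size, but the data here is random

# Placeholder per-user features; real ones would come from recorded video
# (e.g., facial expressions, gaze), game logs, and personality questionnaires.
survey_only = rng.normal(size=(n_users, 5))    # explicit survey responses
nonverbal = rng.normal(size=(n_users, 12))     # implicit nonverbal signals
context = rng.normal(size=(n_users, 8))        # game state + personality
preferred = rng.integers(0, 2, size=n_users)   # which of two agent behaviors was preferred

feature_sets = {
    "survey only": survey_only,
    "survey + nonverbal": np.hstack([survey_only, nonverbal]),
    "survey + nonverbal + context": np.hstack([survey_only, nonverbal, context]),
}

for name, X in feature_sets.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X, preferred, cv=5).mean()
    print(f"{name:30s} cross-validated accuracy: {acc:.2f}")
```

Swapping the random placeholders for the recorded nonverbal and contextual features would reproduce the kind of feature-set ablation the abstract describes.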
- Award ID(s): 2106690
- PAR ID: 10474196
- Publisher / Repository: ACM
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Motivational agents are virtual agents that seek to motivate users by providing feedback and guidance. Prior work has shown how certain factors of an agent, such as the type of feedback given or the agent's appearance, can influence user motivation when completing tasks. However, it is not known how nonverbal mirroring affects an agent's ability to motivate users. Specifically, would an agent that mirrors be more motivating than an agent that does not? Would an agent trained on real human behaviors be better? We conducted a within-subjects study asking 30 participants to play a "find-the-hidden-object" game while interacting with a motivational agent that provided hints and feedback on the user's performance. We created three agents: a Control agent that did not respond to the user's movements, a simple Mimic agent that mirrored the user's movements on a delay, and a Complex agent that used a machine-learned behavior model. We asked participants to complete a questionnaire rating their levels of motivation and their perceptions of the agent and its feedback. Our results showed that the Mimic agent was more motivating than the Control agent and more helpful than the Complex agent. We also found that when participants became aware of the mimicking behavior, it could feel weird or creepy; therefore, it is important to consider the detection of mimicry when designing virtual agents.
- Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these autonomous systems evolve from reactive (acting on user input) to proactive (acting without requiring user intervention), there is a need to explore how explanations of these agents' actions should evolve. In this work, we explore the design of explanations through participatory design methods for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. We qualitatively analyzed the data collected through these sessions and found that participants' reasoning about the agent's actions led them to speculate heavily about its design. These speculations significantly influenced participants' desire for explanations and the controls they sought over the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, its explanation design must also account for social norms. Finally, users' expertise in their own habits and behaviors allows the agent to learn their preferences and draw on them when justifying its actions.
- Social VR has increased in popularity due to its affordances for rich, embodied, and nonverbal communication. However, nonverbal communication remains inaccessible for blind and low-vision people in social VR. We designed accessible cues with audio and haptics to represent three nonverbal behaviors: eye contact, head shaking, and head nodding. We evaluated these cues in real-time conversation tasks in which 16 blind and low-vision participants conversed with two other users in VR. We found that the cues were effective in supporting conversations in VR: participants had statistically significantly higher accuracy and confidence scores for detecting attention during conversations with the cues than without. We also found that participants had a range of preferences and uses for the cues, such as learning social norms. We present design implications for handling additional cues in the future, including the challenges of incorporating AI. Through this work, we take a step toward making interpersonal embodied interactions in VR fully accessible for blind and low-vision people.
- People work with AI systems to improve their decision making, but they often under- or over-rely on AI predictions and perform worse than they would unassisted. To help people appropriately rely on AI aids, we propose showing them behavior descriptions, i.e., details of how AI systems perform on subgroups of instances. We tested the efficacy of behavior descriptions through user studies with 225 participants in three distinct domains: fake review detection, satellite image classification, and bird classification. We found that behavior descriptions can increase human-AI accuracy through two mechanisms: helping people identify AI failures and increasing people's reliance on the AI when it is more accurate. These findings highlight the importance of people's mental models in human-AI collaboration and show that informing people of high-level AI behaviors can significantly improve AI-assisted decision making.
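As a rough illustration of the idea (not the paper's implementation; the subgroups, labels, and predictions below are fabricated placeholders), a behavior description can be as simple as reporting a model's accuracy on each subgroup of instances and surfacing those numbers to the user:

```python
# Rough illustration (fabricated data): a "behavior description" as a simple
# per-subgroup accuracy report that could be shown to users of an AI aid.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
subgroup = rng.choice(["short review", "long review", "first-time reviewer"], size=n)
labels = rng.integers(0, 2, size=n)   # ground truth (e.g., fake vs. genuine review)
preds = labels.copy()                 # hypothetical model, made weaker on one subgroup
flip = (subgroup == "first-time reviewer") & (rng.random(n) < 0.4)
preds[flip] = 1 - preds[flip]

for g in np.unique(subgroup):
    mask = subgroup == g
    acc = (preds[mask] == labels[mask]).mean()
    print(f"On '{g}' instances the AI is correct {acc:.0%} of the time ({mask.sum()} examples)")
```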