Motivational agents are virtual agents that seek to motivate users by providing feedback and guidance. Prior work has shown how certain factors of an agent, such as the type of feedback given or the agent’s appearance, can influence user motivation when completing tasks. However, it is not known how nonverbal mirroring affects an agent’s ability to motivate users. Specifically, would an agent that mirrors be more motivating than an agent that does not? Would an agent trained on real human behaviors be better? We conducted a within-subjects study asking 30 participants to play a “find-the-hidden-object” game while interacting with a motivational agent that would provide hints and feedback on the user’s performance. We created three agents: a Control agent that did not respond to the user’s movements, a simple Mimic agent that mirrored the user’s movements on a delay, and a Complex agent that used a machine-learned behavior model. We asked participants to complete a questionnaire asking them to rate their levels of motivation and perceptions of the agent and its feedback. Our results showed that the Mimic agent was more motivating than the Control agent and more helpful than the Complex agent. We also found that when participants became aware of the mimicking behavior, it could feel weird or creepy; therefore, it is important to consider users’ detection of mimicry when designing virtual agents.
Perceptions of the Helpfulness of Unexpected Agent Assistance
Much prior work on creating social agents that assist users relies on preconceived assumptions of what it means to be helpful. For example, it is common to assume that a helpful agent simply assists with achieving a user’s objective. However, as assistive agents become more widespread, human-agent interactions may be more ad hoc, providing opportunities for unexpected agent assistance. How would this affect human notions of an agent’s helpfulness? To investigate this question, we conducted an exploratory study (N=186) in which participants interacted with agents displaying unexpected, assistive behaviors in a Space Invaders game, and we studied factors that may influence perceived helpfulness in these interactions. Our results challenge the idea that human perceptions of the helpfulness of unexpected agent assistance can be derived from a universal, objective definition of help. Also, humans will reciprocate unexpected assistance, but might not always recognize that they are in fact helping an agent. Based on our findings, we recommend considering personalization and adaptation when designing future assistive behaviors for prosocial agents that may try to help users in unexpected situations.
- Award ID(s):
- 2106690
- PAR ID:
- 10380039
- Date Published:
- Journal Name:
- Proceedings of 10th International Conference on Human-Agent Interaction (HAI)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
An overarching goal of Artificial Intelligence (AI) is creating autonomous, social agents that help people. Two important challenges, though, are that different people prefer different assistance from agents and that preferences can change over time. Thus, helping behaviors should be tailored to how an individual feels during the interaction. We hypothesize that human nonverbal behavior can give clues about users' preferences for an agent's helping behaviors, augmenting an agent's ability to computationally predict such preferences with machine learning models. To investigate our hypothesis, we collected data from 194 participants via an online survey in which participants were recorded while playing a multiplayer game. We evaluated whether the inclusion of nonverbal human signals, as well as additional context (e.g., via game or personality information), led to improved prediction of user preferences between agent behaviors compared to explicitly provided survey responses. Our results suggest that nonverbal communication, a common type of human implicit feedback, can aid in understanding how people want computational agents to interact with them.
-
Prior work has shown that embodiment can benefit virtual agents, such as increasing rapport and conveying non-verbal information. However, it is unclear if users prefer an embodied to a speech-only agent for augmented reality (AR) headsets that are designed to assist users in completing real-world tasks. We conducted a study to examine users' perceptions and behaviors when interacting with virtual agents in AR. We asked 24 adults to wear the Microsoft HoloLens and find objects in a hidden object game while interacting with an agent that would offer assistance. We presented participants with four different agents: voice-only, non-human, full-size embodied, and a miniature embodied agent. Overall, users preferred the miniature embodied agent due to the novelty of its size and reduced uncanniness as opposed to the larger agent. From our results, we draw conclusions about how agent representation matters and derive guidelines on designing agents for AR headsets.
-
Prior research has highlighted users’ preferences for embodiment when interacting with virtual agents in augmented reality headsets. However, open questions remain regarding users’ preferences towards agent placement and gaze direction. In our study, we asked 48 adults to wear the Microsoft HoloLens 2 and find objects in a hidden object game with the help of embodied agents. We examined four distinct agent configurations for both male and female agents: a human-size agent standing beside participants, a human-size agent sitting beside participants, a small desk agent facing the screen, and a small desk agent facing the participant. Overall, participants preferred male over female virtual agents when receiving assistance, and no consistent preference emerged regarding the agents’ position or gaze direction. From our results, we build upon existing guidelines for designing better virtual agents for AR with headsets.
-
The presence of voice activated personal assistants (VAPAs) in people's homes rises each year [31]. Industry efforts are invested in making interactions with VAPAs more personal by leveraging information from messages and calendars, and by accessing user accounts for 3rd party services. However, the use of personal data becomes more complicated in interpersonal spaces, such as people's homes. Should a shared agent access the information of many users? If it does, how should it navigate issues of privacy and control? Designers currently lack guidelines to help them design appropriate agent behaviors. We used Speed Dating to explore inchoate social mores around agent actions within a home, including issues of proactivity, interpersonal conflict, and agent prevarication. Findings offer new insights on how more socially sophisticated agents might sense, make judgements about, and navigate social roles and individuals. We discuss how our findings might impact future research and future agent behaviors.