Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these systems evolve from reactive, i.e., acting on user input, to proactive, i.e., acting without requiring user intervention, there is a need to explore how the explanations for these agents' actions should evolve. In this work, we explore the design of explanations through participatory design methods for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. We qualitatively analyzed the data collected through these sessions and found that participants' reasoning about agent actions led them to speculate heavily about its design. These speculations significantly influenced participants' desire for explanations and the controls they sought over the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, its explanation design must also account for social norms. Finally, users' expertise in understanding their own habits and behaviors allows the agent to learn their preferences when justifying its actions.
Designing Chatbots as Community-Owned Agents
This work investigates how social agents can be designed to create a sense of ownership over them within a group of users. Social agents, such as conversational agents and chatbots, currently interact with people in impersonal, isolated, and often one-on-one interactions: one user and one agent. This is likely to change as agents become more socially sophisticated and integrated into social fabrics. Previous research has indicated that understanding who owns an agent can help set expectations and clarify who the agent is accountable to within a group. We present findings from a three-week case study in which we implemented a chatbot that succeeded in creating a sense of collective ownership within a community. We discuss the design choices that led to this outcome and the implications for social agent design.
- Award ID(s):
- 1734456
- PAR ID:
- 10275596
- Date Published:
- Journal Name:
- Proceedings of the 2nd Conference on Conversational User Interfaces
- Page Range / eLocation ID:
- 1 to 3
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The goal of this research is to develop Animated Pedagogical Agents (APA) that can convey clearly perceivable emotions through speech, facial expressions, and body gestures. In particular, the two studies reported in the paper investigated the extent to which modifications to the range of movement of 3 beat gestures, i.e., a both-arms synchronous outward gesture, a both-arms synchronous forward gesture, and an upper-body lean, and the agent's gender have significant effects on viewers' perception of the agent's emotion in terms of valence and arousal. For each gesture the range of movement was varied at 2 discrete levels. The stimuli of the studies were two sets of 12-s animation clips generated using fractional factorial designs; in each clip an animated agent who speaks and gestures gives a lecture segment on binomial probability. 50% of the clips featured a female agent and 50% of the clips featured a male agent. In the first study, which used a within-subject design and metric conjoint analysis, 120 subjects were asked to watch 8 stimulus clips and rank them according to perceived valence and arousal (from highest to lowest). In the second study, which used a between-subject design, 300 participants were assigned to two groups of 150 subjects each. One group watched 8 clips featuring the male agent and one group watched 8 clips featuring the female agent. Each participant was asked to rate perceived valence and arousal for each clip using a 7-point Likert scale. Results from the two studies suggest that the more open and forward the gestures the agent makes, the higher the perceived valence and arousal. Surprisingly, agents who lean their body forward more are not perceived as having higher arousal and valence. Findings also show that female agents' emotions are perceived as having higher arousal and more positive valence than male agents' emotions.
-
In this paper, we consider a general distributed system with multiple agents who select and then implement actions in the system. The system has an operator with a centralized objective. The agents, on the other hand, are self-interested and strategic in the sense that each agent optimizes its own individual objective. The operator aims to mitigate this misalignment by designing an incentive scheme for the agents. The problem is difficult because the cost functions of the agents are coupled, the objective of the operator is not social welfare, and the operator has no direct control over the actions implemented by the agents. This problem has been studied in many fields, particularly in mechanism design and cost allocation. However, mechanism design typically assumes that the operator has knowledge of the cost functions of the agents and that the actions are implemented by the operator. Cost allocation, on the other hand, classically assumes that agents do not anticipate the effect of their actions on the incentive that they obtain. We remove these assumptions and present an incentive rule for this setup by bridging the gap between mechanism design and classical cost allocation. We analyze whether the proposed design satisfies various desirable properties such as social optimality, budget balance, and the participation constraint. We also analyze which of these properties can be satisfied if the assumptions of the cost functions of the agents being private and the agents being anticipatory are relaxed.
-
This research paper investigates how individual change agents come together to form effective teams. Improving equity within academic engineering requires changes that are often too complex and too high-risk for a faculty member to pursue on their own. Teams offer the advantage of combining the diverse skill sets of many individuals, as well as bringing together insider knowledge and external specialist expertise. However, in order for teams of academic change agents to function effectively, they must overcome the challenges of internal politics, power differentials, and group conflict. This analysis of team formation emerges from our participatory action research with recipients of the NSF Revolutionizing Engineering Departments (RED) grants. Through an NSF-funded collaboration between the University of Washington and Rose-Hulman Institute of Technology, we work with the RED teams to research the process of change as they work to improve equity and inclusion within their institutions. Utilizing longitudinal qualitative data from focus group discussions with 16 teams at the beginning and midpoints of their projects, we examine the development of teams to transform engineering education. Drawing on theoretical frameworks from social movement theory, we highlight the importance of creating a unified team voice and developing a sense of group agency. Teams have a better chance of achieving their goals if members are able to create a unified voice, that is, a shared sense of purpose and vision for their team. We find that the development of a team's unified voice begins with proposal writing. When members of RED teams did not collaboratively write the grant proposal, they found it necessary to devote more time to developing a sense of shared vision for their project. For many RED teams, the development of a unified voice was further strengthened through external messaging, as they articulated a "we" in opposition to a "they" who have different values or interests.
Group agency develops as a result of team members perceiving their goals as attainable and their efforts, as both individuals and a group, as worthwhile. That is, group agency depends on both the credibility of the team and trust among team members. For some of the RED teams, the NSF requirement to include social scientists and education researchers on their teams gave the engineering team members new, increased exposure to these fields. RED teams found that creating mutual respect was foundational for working across disciplinary differences and developing group agency.
-
Robot technologies have been introduced to computing education to engage learners. This study introduces the concept of co-creation with a robot agent into culturally-responsive computing (CRC). Co-creation with computer agents has previously focused on creating external artifacts. Our work differs by making the robot agent itself the co-created product. Through participatory design activities, we positioned adolescent girls and an agentic social robot as co-creators of the robot's identity. Taking a thematic analysis approach, we examined how girls embody the roles of creator and co-creator in this space. We identified themes surrounding who has the power to make decisions, what decisions are made, and how to maintain social relationships. Our findings suggest that co-creation with robot technology is a promising implementation vehicle for realizing CRC.