Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these autonomous systems evolve from reactive, i.e., acting on user input, to proactive, i.e., acting without requiring user intervention, there is a need to explore how explanations of these agents' actions should evolve. In this work, we explore the design of explanations through participatory design methods for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. We qualitatively analyzed the data collected through these sessions and found that participants' reasoning about agent actions led them to speculate heavily about its design. These speculations significantly influenced participants' desire for explanations and the controls they sought over the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, its explanation design must also account for social norms. Finally, users' expertise in their own habits and behaviors allows the agent to learn their preferences and draw on them when justifying its actions.
Designing Preferences, Beliefs, and Identities for Artificial Intelligence
Research in artificial intelligence, as well as in economics and other related fields, generally proceeds from the premise that each agent has a well-defined identity, well-defined preferences over outcomes, and well-defined beliefs about the world. However, as we design AI systems, we in fact need to specify where the boundaries between one agent and another in the system lie, what objective functions these agents aim to maximize, and to some extent even what belief formation processes they use. The premise of this paper is that as AI is being broadly deployed in the world, we need well-founded theories of, and methodologies and algorithms for, how to design preferences, identities, and beliefs. This paper lays out an approach to address these problems from a rigorous foundation in decision theory, game theory, social choice theory, and the algorithmic and computational aspects of these fields.
- Award ID(s): 1814056
- PAR ID: 10196244
- Date Published:
- Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence
- Volume: 33
- ISSN: 2159-5399
- Page Range / eLocation ID: 9755 to 9759
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Artificial intelligence (AI) and cybersecurity are in-demand skills, but little is known about what factors influence computer science (CS) undergraduate students' decisions on whether to specialize in AI or cybersecurity and how these factors may differ between populations. In this study, we interviewed undergraduate CS majors about their perceptions of AI and cybersecurity. Qualitative analyses of these interviews show that students have narrow beliefs about what kind of work AI and cybersecurity entail, the kinds of people who work in these fields, and the potential societal impact AI and cybersecurity may have. Specifically, students tended to believe that all work in AI requires math and training models, while cybersecurity consists of low-level programming; that innately smart people work in both fields; that working in AI comes with ethical concerns; and that cybersecurity skills are important in contemporary society. Some of these perceptions reinforce existing stereotypes about computing and may disproportionately affect the participation of students from groups historically underrepresented in computing. Our key contribution is identifying beliefs that students expressed about AI and cybersecurity that may affect their interest in pursuing the two fields and may, therefore, inform efforts to expand students' views of AI and cybersecurity. Expanding student perceptions of AI and cybersecurity may help correct misconceptions and challenge narrow definitions, which in turn can encourage participation in these fields from all students.
-
In multi-agent domains (MADs), an agent's action may not only change the world and the agent's own knowledge and beliefs about the world, but also other agents' knowledge and beliefs about the world and their knowledge and beliefs about other agents' knowledge and beliefs. An agent's goals in a multi-agent world may likewise involve manipulating the knowledge and beliefs of other agents, and again not just their knowledge and beliefs about the world, but also their knowledge about other agents' knowledge about the world. Our goal is to present an action language (mA+) with the features needed to address these aspects of representing and reasoning about actions and change (RAC) in MADs. mA+ allows the representation of, and reasoning about, the different types of actions an agent can perform in a domain where many other agents may be present, such as world-altering actions, sensing actions, and announcement/communication actions. It also allows the specification of agents' dynamic awareness of action occurrences, which affects what agents subsequently know about the world and about other agents' knowledge of the world. mA+ considers three types of awareness: full awareness, partial awareness, and complete oblivion of an action occurrence and its effects. This keeps the language simple, yet powerful enough to address a large variety of knowledge-manipulation scenarios in MADs. The semantics of mA+ relies on the notion of a state, described by a pointed Kripke model, which encodes the agent's knowledge and the real state of the world; the semantics is then defined by a transition function that maps pairs of actions and states into sets of states. We illustrate properties of the action theories, including properties that guarantee the finiteness of the set of initial states and their practical implementability. Finally, we relate mA+ to other formalisms that contribute to RAC in MADs.
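The two semantic notions named above, a state described by a pointed Kripke model and a transition function mapping action-state pairs to sets of states, can be made concrete with a small sketch. The Python fragment below is only an assumed illustration (the names PointedKripkeModel and sense are hypothetical and not taken from mA+ or any implementation of it); it models one agent's uncertainty about a coin and a sensing action performed under full awareness.

```python
# Illustrative sketch only: a "state" as a pointed Kripke model and a transition
# function from (action, state) pairs to sets of states. All names are hypothetical.
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple


@dataclass
class PointedKripkeModel:
    worlds: Set[str]                          # possible worlds
    access: Dict[str, Set[Tuple[str, str]]]   # agent -> accessibility relation
    valuation: Dict[str, Set[str]]            # world -> fluents true in that world
    real: str                                 # the designated (pointed) real world


def sense(fluent: str, agent: str, state: PointedKripkeModel) -> List[PointedKripkeModel]:
    """Sensing action for a fully aware agent: the agent learns the truth value of
    `fluent`, so its accessibility relation keeps only pairs of worlds that agree
    with the real world on that fluent. Returns the set of successor states
    (a singleton here, represented as a list)."""
    truth = fluent in state.valuation[state.real]
    agrees = {w for w in state.worlds if (fluent in state.valuation[w]) == truth}
    new_access = dict(state.access)
    new_access[agent] = {(u, v) for (u, v) in state.access[agent]
                         if u in agrees and v in agrees}
    return [PointedKripkeModel(set(state.worlds), new_access, dict(state.valuation), state.real)]


# Example: agent "a" initially cannot tell whether the coin shows heads.
s0 = PointedKripkeModel(
    worlds={"w_heads", "w_tails"},
    access={"a": {("w_heads", "w_heads"), ("w_heads", "w_tails"),
                  ("w_tails", "w_heads"), ("w_tails", "w_tails")}},
    valuation={"w_heads": {"heads"}, "w_tails": set()},
    real="w_heads",
)
s1 = sense("heads", "a", s0)[0]   # after peeking, "a" considers only w_heads possible
```

In a fuller treatment along the lines the abstract describes, world-altering and announcement actions would be further cases of the same transition function, and partially aware or oblivious agents would update their accessibility relations differently (or not at all); those details are omitted here.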
-
In this paper we explore what role humans might play in designing tools for reinforcement learning (RL) agents to interact with the world. Recent work has explored RL methods that optimize a robot's morphology while learning to control it, effectively dividing an RL agent's environment into the external world and the agent's interface with the world. Taking a user-centered design (UCD) approach, we explore the potential of a human, instead of an algorithm, redesigning the agent's tool. Using UCD to design for a machine learning agent brings up several research questions, including what it means to understand an RL agent's experience, beliefs, tendencies, and goals. After discussing these questions, we then present a system we developed to study humans designing a 2D racecar for an RL autonomous driver. We conclude with findings and insights from exploratory pilot studies with twelve users of this system.
-
The paper introduces the notion of an epistemic argumentation framework (EAF) as a means to integrate the beliefs of a reasoner with argumentation. Intuitively, an EAF encodes the beliefs of an agent who reasons about arguments. Formally, an EAF is a pair of an argumentation framework and an epistemic constraint. The semantics of an EAF is defined by the notion of an ω-epistemic labelling set, where ω is complete, stable, grounded, or preferred: a set of ω-labellings that collectively satisfies the epistemic constraint of the EAF. The paper shows how an EAF can represent different views of reasoners on the same argumentation framework, and how preferences and multi-agent argumentation can be represented in EAFs. Finally, the paper discusses complexity issues and computation using epistemic logic programming.
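As a rough illustration of the definition above (an EAF as a pair of an argumentation framework and an epistemic constraint, with semantics given by sets of labellings that collectively satisfy the constraint), the Python sketch below enumerates stable labellings of a tiny framework by brute force and then filters candidate labelling sets by a constraint. The function names and the enumeration strategy are assumptions made for illustration, not the paper's formalism or code.

```python
# Illustrative sketch only: an argumentation framework, its stable labellings, and
# sets of labellings that collectively satisfy an "epistemic" constraint.
from itertools import combinations
from typing import Callable, Dict, List, Set, Tuple

AF = Tuple[Set[str], Set[Tuple[str, str]]]   # (arguments, attack relation)
Labelling = Dict[str, str]                   # argument -> "in" | "out"


def stable_labellings(af: AF) -> List[Labelling]:
    """Brute force: a set S is stable iff it is conflict-free and attacks
    every argument outside S."""
    args, attacks = af
    result = []
    for r in range(len(args) + 1):
        for combo in combinations(sorted(args), r):
            s = set(combo)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s) for b in args - s)
            if conflict_free and attacks_rest:
                result.append({a: ("in" if a in s else "out") for a in args})
    return result


def epistemic_labelling_sets(af: AF,
                             constraint: Callable[[List[Labelling]], bool]) -> List[List[Labelling]]:
    """All non-empty subsets of the stable labellings that *collectively* satisfy
    the constraint -- the role an omega-epistemic labelling set plays above."""
    labs = stable_labellings(af)
    satisfying = []
    for r in range(1, len(labs) + 1):
        for idx in combinations(range(len(labs)), r):
            candidate = [labs[i] for i in idx]
            if constraint(candidate):
                satisfying.append(candidate)
    return satisfying


# Example: a and b attack each other; the constraint reads "argument a is labelled
# 'in' in every labelling of the set", i.e., the reasoner believes a is accepted.
af = ({"a", "b"}, {("a", "b"), ("b", "a")})

def believes_a(labellings: List[Labelling]) -> bool:
    return all(lab["a"] == "in" for lab in labellings)

print(epistemic_labelling_sets(af, believes_a))   # only the set containing {a: in, b: out}
```

Other epistemic modalities (e.g., "a is possibly accepted") would simply use a different predicate over the labelling set, and other labelling semantics (complete, grounded, preferred) would replace the stable enumeration used here.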

