Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these systems evolve from reactive, i.e., acting on user input, to proactive, i.e., acting without requiring user intervention, we need to explore how explanations of their actions should evolve as well. In this work, we use participatory design methods to explore the design of explanations for a proactive auto-response messaging agent that reduces the perceived obligation and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. Qualitative analysis of the session data showed that participants' reasoning about agent actions led them to speculate heavily about its design, and these speculations significantly influenced both their desire for explanations and the controls they sought over the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent mediates human-human communication, its explanation design must also account for social norms. Finally, users' expertise in their own habits and behaviors allows the agent to learn their preferences and draw on them when justifying its actions.
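To make the agent concept concrete, here is a minimal sketch of a proactive auto-responder that replies on the user's behalf with user-controlled unavailability context. All names (UserStatus, AutoResponder, share_reason) are hypothetical illustrations; this is not the system studied in the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserStatus:
    busy: bool
    reason: str          # e.g. "in a meeting until 3pm"
    share_reason: bool   # user-controlled: how much context to disclose

class AutoResponder:
    def __init__(self, status: UserStatus):
        self.status = status

    def maybe_reply(self, sender: str, message: str) -> Optional[str]:
        """Act proactively: reply on the user's behalf only when the user
        is unavailable, attaching as much context as the user allows."""
        if not self.status.busy:
            return None  # user is available; stay out of the way
        if self.status.share_reason:
            return (f"Hi {sender}, I'm an assistant. The recipient is "
                    f"currently unavailable ({self.status.reason}) and "
                    f"will reply later.")
        return f"Hi {sender}, the recipient is unavailable and will reply later."

status = UserStatus(busy=True, reason="in a meeting until 3pm", share_reason=True)
agent = AutoResponder(status)
print(agent.maybe_reply("Sam", "Quick question about the report?"))
```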
Rebel agents that adapt to goal expectation failures
Humans and autonomous agents often have differing knowledge about the world, the goals they pursue, and the actions they perform. Given these differences, an autonomous agent should be capable of rebelling against a goal when its completion would violate the agent's preferences and motivations. Prior work on agent rebellion has examined agents that can reject actions leading to harmful consequences. Here we elaborate on a specific justification for rebellion in terms of violated goal expectations. Further, the need for rebellion is not always known in advance, so to rebel correctly and justifiably in response to unforeseen circumstances, an autonomous agent must be able to learn the reasons behind violations of its expectations. This paper provides a novel framework for rebellion within a metacognitive architecture using goal monitoring and model learning, and it includes experimental results showing the efficacy of such rebellion.
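As a concrete illustration, here is a minimal sketch of the rebellion loop the abstract describes: monitor an expectation attached to a goal, and reject the goal with a learned justification when the expectation is violated. The representation (expectations as predicates over state) and all names are assumptions for illustration, not the paper's metacognitive architecture.

```python
class RebelAgent:
    def __init__(self, expectations):
        # expectations: mapping goal -> predicate(state) that should hold
        # while the goal is being pursued
        self.expectations = expectations
        self.learned_violations = []   # reasons learned from past failures

    def check(self, goal, state):
        """Monitor the expectation attached to a goal; rebel on violation."""
        expected = self.expectations[goal]
        if not expected(state):
            reason = self.explain(goal, state)
            self.learned_violations.append(reason)
            return ("rebel", reason)    # reject the goal, with justification
        return ("pursue", None)

    def explain(self, goal, state):
        # Placeholder for model learning: diagnose which aspect of the state
        # broke the expectation. Here we simply report the offending state.
        return f"expectation for {goal!r} violated in state {state!r}"

# Example: a navigation goal whose expectation is "no civilians in the area"
agent = RebelAgent({"clear_route": lambda s: not s["civilians_present"]})
print(agent.check("clear_route", {"civilians_present": True}))
```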
- Award ID(s): 1849131
- PAR ID: 10349752
- Journal Name: Proceedings of the Integrated Execution / Goal Reasoning Workshop (held at the Thirtieth International Conference on Automated Planning and Scheduling, ICAPS-20)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Autonomous agents in a multi-agent system work with each other to achieve their goals. However, in a partially observable world, current multi-agent systems are often less effective in achieving their goals. This limitation is due to the agents' lack of reasoning about other agents and their mental states, and to their inability to share required knowledge with other agents. This paper addresses these limitations by presenting a general approach for autonomous agents to work together in a multi-agent system. In this approach, an agent applies two main concepts: goal reasoning, to determine which goals to pursue and share, and theory of mind, to select the agent(s) with which to share goals and knowledge. We evaluate the performance of our multi-agent system in a Marine Life Survey domain and compare it to another multi-agent system that randomly selects the agent(s) to which it delegates its goals.
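The two concepts can be illustrated with a short sketch: goal reasoning decides which goals to keep or delegate, and a simple theory-of-mind model picks the peer believed capable of an offloaded goal. The belief representation and all names below are assumptions for illustration, not the paper's mechanism.

```python
class Peer:
    def __init__(self, name, believed_skills):
        self.name = name
        self.believed_skills = believed_skills  # this agent's model of the peer

class Agent:
    def __init__(self, skills, peers):
        self.skills = set(skills)
        self.peers = peers

    def reason_about_goals(self, goals):
        """Goal reasoning: keep goals we can achieve, delegate the rest."""
        keep = [g for g in goals if g in self.skills]
        delegate = [g for g in goals if g not in self.skills]
        return keep, delegate

    def select_peer(self, goal):
        """Theory of mind: choose the peer we *believe* can achieve the
        goal, instead of delegating at random."""
        able = [p for p in self.peers if goal in p.believed_skills]
        return able[0].name if able else None

peers = [Peer("uv1", {"survey_reef"}), Peer("uv2", {"track_whales"})]
me = Agent(skills={"map_seafloor"}, peers=peers)
keep, delegate = me.reason_about_goals(["map_seafloor", "track_whales"])
print(keep, [(g, me.select_peer(g)) for g in delegate])
```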
- In open agent systems, the set of agents that are cooperating or competing changes over time and in ways that are nontrivial to predict. For example, if collaborative robots were tasked with fighting wildfires, they may run out of suppressants and be temporarily unavailable to assist their peers. We consider the problem of planning in these contexts with the additional challenges that the agents are unable to communicate with each other and that there are many of them. Because an agent's optimal action depends on the actions of others, each agent must not only predict the actions of its peers but, before that, reason about whether they are even present to perform an action. Addressing openness thus requires agents to model each other's presence, which becomes computationally intractable with high numbers of agents. We present a novel, principled, and scalable method in this context that enables an agent to reason about others' presence in its shared environment and about their actions. Our method extrapolates models of a few peers to the overall behavior of the many-agent system and combines this extrapolation with a generalization of Monte Carlo tree search to perform individual agent reasoning in many-agent open environments. Theoretical analyses establish the number of agents to model in order to achieve acceptable worst-case bounds on extrapolation error, as well as regret bounds on the agent's utility from modeling only some neighbors. Simulations of multiagent wildfire suppression problems demonstrate our approach's efficacy compared with alternative baselines.
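The extrapolation idea can be sketched briefly: model only a few sampled peers, extrapolate their presence probabilities to the whole population, and evaluate actions by Monte Carlo rollouts under that estimate. The toy payoff and all names below are assumptions; the paper's generalized Monte Carlo tree search is not reproduced here.

```python
import random

def estimate_presence(peer_models, n_agents):
    """Extrapolate per-peer presence probabilities (e.g. 'has suppressant
    left and is still fighting the fire') to the whole population."""
    p = sum(m["p_present"] for m in peer_models) / len(peer_models)
    return p, p * n_agents  # mean presence prob., expected active agents

def rollout_value(my_action, peer_models, n_agents, trials=1000):
    """Crude Monte Carlo evaluation of an action under uncertainty about
    how many of the other agents are present and acting."""
    p_present, _ = estimate_presence(peer_models, n_agents)
    total = 0.0
    for _ in range(trials):
        active = sum(random.random() < p_present for _ in range(n_agents))
        # Toy payoff: fighting a fire pays off only with enough helpers
        total += 1.0 if (my_action == "fight" and active >= 3) else 0.0
    return total / trials

peers = [{"p_present": 0.8}, {"p_present": 0.6}, {"p_present": 0.7}]  # model K=3
print(rollout_value("fight", peers, n_agents=20))
```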
- Autonomous agents often have sufficient resources to achieve the goals that are provided to them. However, in dynamic worlds where unexpected problems are bound to occur, an agent may formulate new goals with further resource requirements. Thus, agents should be smart enough to manage their goals and the limited resources they possess in an effective and flexible manner. We present an approach to the selection and monitoring of goals using resource estimation and goal priorities. To evaluate our approach, we designed an experiment on top of our previous work in a complex mine-clearance domain. The agent in this domain formulates its own goals by retrieving a case to explain uncovered discrepancies and generating goals from the explanation. Finally, we compare the performance of our approach to two alternatives.
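A minimal sketch of priority- and resource-aware goal selection follows: commit to goals in priority order while estimated costs fit the remaining budget, then monitor actual costs and trigger replanning on overrun. The goal fields and the greedy rule are illustrative assumptions, not the paper's selection mechanism.

```python
def select_goals(goals, budget):
    """goals: list of dicts with 'name', 'priority', 'est_cost'.
    Greedily commit to the highest-priority goals that fit the budget."""
    committed, remaining = [], budget
    for g in sorted(goals, key=lambda g: -g["priority"]):
        if g["est_cost"] <= remaining:
            committed.append(g["name"])
            remaining -= g["est_cost"]
    return committed, remaining

def monitor(committed, actual_costs, budget):
    """Re-check commitments as real costs come in; replan on overrun."""
    spent = sum(actual_costs.get(g, 0) for g in committed)
    return "replan" if spent > budget else "continue"

goals = [
    {"name": "clear_mine_A", "priority": 3, "est_cost": 40},
    {"name": "survey_area",  "priority": 2, "est_cost": 30},
    {"name": "clear_mine_B", "priority": 1, "est_cost": 50},
]
print(select_goals(goals, budget=80))  # -> (['clear_mine_A', 'survey_area'], 10)
```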
- Goal management in autonomous agents has been a problem of interest for a long time. Solving an agent goal-management problem requires multiple goal operations, including, for example, goal selection, change, formulation, delegation, and monitoring. Researchers from different fields have developed several solution approaches with an implicit or explicit focus on goal operations, for example, scheduling the agent's goals, performing cost-benefit analysis to select and organize goals, and formulating goals in unexpected situations. However, none of them explicitly sheds light on the agent's response when multiple goal operations occur simultaneously. This paper develops an algorithm to address agent goal management when multiple goal operations co-occur and shows how handling such interactions would improve agent goal management in different domains.
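To make the co-occurrence problem concrete, here is a minimal sketch that arbitrates among goal operations raised in the same decision cycle by de-duplicating them and applying a fixed precedence. The operation set and the precedence ordering are assumptions for illustration; the paper's algorithm is not reproduced here.

```python
# Assumed precedence: formulate new goals before selecting among them,
# select before delegating, delegate before monitoring.
PRECEDENCE = {"formulate": 0, "select": 1, "delegate": 2, "monitor": 3}

def arbitrate(pending_ops):
    """pending_ops: list of (operation, goal) pairs raised in the same
    decision cycle. Returns them in a consistent execution order,
    dropping exact duplicates."""
    unique = list(dict.fromkeys(pending_ops))   # de-duplicate, keep order
    return sorted(unique, key=lambda op: PRECEDENCE[op[0]])

cycle = [("monitor", "g1"), ("formulate", "g2"), ("select", "g2"),
         ("monitor", "g1")]
for op, goal in arbitrate(cycle):
    print(op, goal)
```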