Intelligent physical systems, as embodied cognitive systems, must perform high-level reasoning while concurrently managing an underlying control architecture. The link between cognition and control must handle the conversion of continuous values from the real world into symbolic representations (and back). To generate effective behaviors, reasoning must include the capacity to replan, acquire and update new information, detect and respond to anomalies, and perform various operations on system goals. These processes, however, are not independent, and their interactions warrant further exploration. This paper examines an agent’s choices when multiple goal operations co-occur and interact, and it establishes a method for choosing between them. We demonstrate the benefits, discuss the trade-offs involved, and show positive results in a dynamic marine search task.
Agent goal management using goal operations
Goal management in autonomous agents has long been a problem of interest. Solving a goal management problem requires multiple goal operations; examples include goal selection, change, formulation, delegation, and monitoring. Researchers from different fields have developed several solution approaches with an implicit or explicit focus on goal operations, such as scheduling an agent’s goals, performing cost-benefit analysis to select and organize goals, and formulating goals in unexpected situations. However, none of these approaches explicitly sheds light on an agent’s response when multiple goal operations occur simultaneously. This paper develops an algorithm for agent goal management when multiple goal operations co-occur and shows how handling such interactions improves goal management in different domains.
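To make the arbitration problem concrete, the sketch below shows one plausible way an agent might choose among co-occurring goal operations. It is a minimal illustration, not the paper’s algorithm: the `Goal` and `GoalOperation` structures, the trigger predicates, and the utility scores are all assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical goal-operation arbitration sketch. The operation names,
# scoring scheme, and Goal fields are illustrative assumptions, not the
# paper's actual algorithm.

@dataclass
class Goal:
    name: str
    priority: float          # higher = more important
    progress: float = 0.0    # fraction completed, 0.0-1.0

@dataclass
class GoalOperation:
    name: str                          # e.g. "formulate", "change", "select"
    applies: Callable[[Goal], bool]    # is this operation triggered?
    utility: Callable[[Goal], float]   # estimated benefit of applying it

def arbitrate(goal: Goal, operations: List[GoalOperation]) -> Optional[GoalOperation]:
    """When several goal operations are triggered at once, pick the one
    with the highest estimated utility for the current goal."""
    triggered = [op for op in operations if op.applies(goal)]
    if not triggered:
        return None
    return max(triggered, key=lambda op: op.utility(goal))

if __name__ == "__main__":
    ops = [
        GoalOperation("change",    lambda g: g.progress < 0.5,
                      lambda g: g.priority * (1 - g.progress)),
        GoalOperation("formulate", lambda g: True,
                      lambda g: 0.4),  # flat baseline utility
    ]
    goal = Goal("clear-mine-area", priority=0.9, progress=0.2)
    chosen = arbitrate(goal, ops)
    print(f"applying operation: {chosen.name}")  # -> change
```

The design point this sketch highlights is that the choice is a comparison over all triggered operations at once, rather than handling each trigger in isolation as it arises.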
- Award ID(s): 1849131
- Publication Date:
- NSF-PAR ID: 10352593
- Journal Name: Proceedings of the 9th Goal Reasoning Workshop
- Sponsoring Org: National Science Foundation
More Like this
Cox, Michael T. (Ed.) Goal reasoning agents can solve novel problems by detecting an anomaly between expectations and observations, generating explanations about plausible causes for the anomaly, and formulating goals to remove the cause. Yet not all anomalies represent problems. We claim that an agent that discerns the difference between benign anomalies and those that represent an actual problem will increase its performance. Furthermore, we present a new definition of the term “problem” in a goal reasoning context. This paper discusses the role of explanations and goal formulation in response to developing problems and implements that response. The paper illustrates goal formulation in a mine clearance domain and a labor relations domain. We also show the empirical difference between a standard planning agent, an agent that detects anomalies, and an agent that recognizes problems.
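As a rough illustration of the detect–explain–formulate cycle and the benign/problem distinction described above, consider the following sketch. The `Anomaly` and `Explanation` structures, the example causes, and the threatens-goal test are hypothetical stand-ins, not the authors’ implementation.

```python
# Hypothetical sketch of the detect -> explain -> formulate cycle and of
# discriminating benign anomalies from actual problems. All structures
# and rules here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Anomaly:
    expected: str
    observed: str

@dataclass
class Explanation:
    cause: str
    threatens_goal: bool   # does the cause endanger a current goal?

def explain(anomaly: Anomaly) -> Explanation:
    # Stand-in for the agent's explanation process (e.g., case retrieval).
    if anomaly.observed == "mine-detected":
        return Explanation(cause="uncharted mine", threatens_goal=True)
    return Explanation(cause="sensor noise", threatens_goal=False)

def formulate_goal(anomaly: Anomaly) -> Optional[str]:
    """Only anomalies whose explanation threatens a goal count as
    problems; benign anomalies produce no new goal."""
    explanation = explain(anomaly)
    if explanation.threatens_goal:
        return f"remove({explanation.cause})"
    return None

print(formulate_goal(Anomaly("clear-path", "mine-detected")))  # remove(uncharted mine)
print(formulate_goal(Anomaly("clear-path", "odd-ping")))       # None
```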
Barták, Roman; Bell, Eric (Eds.) Autonomous agents often have sufficient resources to achieve the goals that are provided to them. However, in dynamic worlds where unexpected problems are bound to occur, an agent may formulate new goals with further resource requirements. Thus, agents should be smart enough to manage their goals and the limited resources they possess in an effective and flexible manner. We present an approach to the selection and monitoring of goals using resource estimation and goal priorities. To evaluate our approach, we designed an experiment on top of our previous work in a complex mine-clearance domain. The agent in this domain formulates its own goals by retrieving a case to explain uncovered discrepancies and generating goals from the explanation. Finally, we compare the performance of our approach to two alternatives.
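A simple way to picture goal selection under resource limits is a priority-per-cost heuristic, sketched below. The `Goal` fields, the greedy rule, and the example budget are illustrative assumptions; the paper’s actual resource-estimation approach may differ.

```python
# Hypothetical sketch of goal selection under a resource budget, assuming
# each goal carries a priority and an estimated resource cost.
from dataclasses import dataclass
from typing import List

@dataclass
class Goal:
    name: str
    priority: float   # importance of achieving the goal
    est_cost: float   # estimated resource consumption (e.g., fuel, time)

def select_goals(goals: List[Goal], budget: float) -> List[Goal]:
    """Greedily commit to goals in order of priority per unit cost
    until the estimated resource budget is exhausted."""
    selected = []
    remaining = budget
    for g in sorted(goals, key=lambda g: g.priority / g.est_cost, reverse=True):
        if g.est_cost <= remaining:
            selected.append(g)
            remaining -= g.est_cost
    return selected

goals = [
    Goal("clear-mine-A", priority=0.9, est_cost=40.0),
    Goal("clear-mine-B", priority=0.7, est_cost=15.0),
    Goal("survey-region", priority=0.5, est_cost=60.0),
]
print([g.name for g in select_goals(goals, budget=60.0)])
# -> ['clear-mine-B', 'clear-mine-A']
```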
Humans and autonomous agents often have differing knowledge about the world, the goals they pursue, and the actions they perform. Given these differences, an autonomous agent should be capable of rebelling against a goal when its completion would violate the agent’s preferences and motivations. Prior work on agent rebellion has examined agents that can reject actions leading to harmful consequences. Here we elaborate on a specific justification for rebellion in terms of violated goal expectations. Moreover, the need for rebellion is not always known in advance, so to rebel correctly and justifiably in response to unforeseen circumstances, an autonomous agent must be able to learn the reasons behind violations of its expectations. This paper provides a novel framework for rebellion within a metacognitive architecture using goal monitoring and model learning, and it includes experimental results showing the efficacy of such rebellion.
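The following sketch illustrates one way violated goal expectations could justify rebellion: expectations are predicates over the world state, and any violation becomes an explicit, reportable reason to reject a commanded goal. The structures and the example expectations are assumptions for illustration, not the paper’s metacognitive framework.

```python
# Hypothetical sketch of rebellion triggered by violated goal expectations.
# The Expectation structure and the rebellion rule below are illustrative
# assumptions, not the framework described in the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Expectation:
    description: str
    holds: Callable[[Dict[str, float]], bool]  # predicate over world state

def should_rebel(expectations: List[Expectation],
                 state: Dict[str, float]) -> List[str]:
    """Return the expectations the current state violates; a nonempty
    result is grounds for rejecting the commanded goal and explaining why."""
    return [e.description for e in expectations if not e.holds(state)]

expectations = [
    Expectation("no civilians in blast radius",
                lambda s: s.get("civilians_nearby", 0) == 0),
    Expectation("battery sufficient for return trip",
                lambda s: s.get("battery", 0.0) >= 0.2),
]
violations = should_rebel(expectations, {"civilians_nearby": 2, "battery": 0.8})
if violations:
    print("rebelling; violated expectations:", violations)
```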