Title: The rational selection of goal operations and the integration of search strategies with goal-driven autonomy
Award ID(s):
1848945
PAR ID:
10337920
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Ninth Annual Conference on Advances in Cognitive Systems (ACS-2021)
Page Range / eLocation ID:
Paper #8 (20 pp)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The Achievement Goal Framework describes students' goal orientations as task-based (focusing on successful completion of the task), self-based (evaluating performance relative to one's own past performance), or other-based (evaluating performance relative to the performance of others). Goal orientations have been used to explain student success in a range of educational settings but have not previously been applied to post-secondary chemistry. This study describes the goal orientations of General Chemistry students and explores their relationship to success in the course. On average, students report higher task and self orientations than other orientation. Task orientation had a positive relationship with exam performance, while self orientation had a negative relationship. Clustering showed that, for the majority of students, task and self orientations moved together, and that students with low preference across all three orientations also performed lowest on exams. Finally, students in classes using Flipped-Peer Led Team Learning, a pedagogy designed to bring active learning to a large lecture class, showed higher task orientation than those in classes with lecture-based instruction.
  2. Goal management in autonomous agents has long been a problem of interest. Solving an agent goal management problem requires multiple goal operations, such as goal selection, change, formulation, delegation, and monitoring. Researchers from different fields have developed several solution approaches with an implicit or explicit focus on goal operations, including scheduling the agent's goals, performing cost-benefit analysis to select and organize goals, and formulating goals in unexpected situations. However, none of these approaches explicitly addresses the agent's response when multiple goal operations occur simultaneously. This paper develops an algorithm for agent goal management when multiple goal operations co-occur and shows how handling such interactions improves agent goal management in different domains.
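    The cost-benefit selection mentioned above can be illustrated with a minimal sketch. Note this is not the paper's actual algorithm: the operation names, benefit/cost numbers, and the net-benefit rule are all illustrative assumptions about how an agent might rationally pick one operation when several trigger at once.

    ```python
    # Hypothetical sketch: when several goal operations trigger simultaneously,
    # score each by estimated benefit minus cost and apply the best one.
    # All names and numbers below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class GoalOperation:
        name: str       # e.g., "formulation", "change", "delegation"
        benefit: float  # estimated utility of applying the operation
        cost: float     # estimated resource/time cost of applying it

    def select_operation(pending: list[GoalOperation]) -> GoalOperation:
        """Pick the pending goal operation with the highest net benefit."""
        return max(pending, key=lambda op: op.benefit - op.cost)

    pending = [
        GoalOperation("formulation", benefit=8.0, cost=3.0),  # net 5.0
        GoalOperation("change", benefit=7.0, cost=1.0),       # net 6.0
        GoalOperation("delegation", benefit=9.0, cost=5.0),   # net 4.0
    ]
    print(select_operation(pending).name)  # prints "change"
    ```

    A real agent would estimate these benefits and costs from its current state and goals rather than fixing them up front.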
  3. Intelligent physical systems, as embodied cognitive systems, must perform high-level reasoning while concurrently managing an underlying control architecture. The link between cognition and control must handle the problem of converting continuous values from the real world into symbolic representations (and back). To generate effective behaviors, reasoning must include the capacity to replan, acquire and update new information, detect and respond to anomalies, and perform various operations on system goals. These processes, however, are not independent and need further exploration. This paper examines an agent's choices when multiple goal operations co-occur and interact, and it establishes a method for choosing between them. We demonstrate the benefits, discuss the trade-offs involved, and show positive results in a dynamic marine search task.
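    The continuous-to-symbolic conversion described above can be sketched as simple threshold-based discretization. This is a toy illustration only: the predicate names, thresholds, and the marine-flavored sensors are assumptions, not the paper's architecture.

    ```python
    # Hypothetical sketch of grounding continuous sensor values into planner
    # predicates (and mapping a symbolic goal back to a control setpoint).
    # Predicate names and thresholds are illustrative assumptions.

    def to_symbols(depth_m: float, battery_pct: float) -> set[str]:
        """Discretize continuous sensor readings into symbolic predicates."""
        syms = set()
        syms.add("at_depth" if 9.0 <= depth_m <= 11.0 else "off_depth")
        syms.add("low_battery" if battery_pct < 20.0 else "battery_ok")
        return syms

    def to_setpoint(symbolic_goal: str) -> float:
        """Map a symbolic goal back to a continuous control target."""
        targets = {"at_depth": 10.0, "at_surface": 0.0}
        return targets[symbolic_goal]

    print(sorted(to_symbols(10.2, 55.0)))  # prints ['at_depth', 'battery_ok']
    ```

    In a real system the thresholds themselves become a design problem, since noisy readings near a boundary can make the symbolic state flicker.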
  4. Exploration and reward specification are fundamental and intertwined challenges for reinforcement learning. Solving sequential decision-making tasks that demand expansive exploration requires either careful design of reward functions or the use of novelty-seeking exploration bonuses. Human supervisors can provide effective guidance in the loop to direct the exploration process, but prior methods to leverage this guidance require constant synchronous high-quality human feedback, which is expensive and impractical to obtain. In this work, we present a technique called Human Guided Exploration (HuGE), which uses low-quality feedback from non-expert users that may be sporadic, asynchronous, and noisy. HuGE guides exploration for reinforcement learning not only in simulation but also in the real world, all without meticulous reward specification. The key concept involves bifurcating human feedback and policy learning: human feedback steers exploration, while self-supervised learning from the exploration data yields unbiased policies. This procedure can leverage noisy, asynchronous human feedback to learn policies with no hand-crafted reward design or exploration bonuses. HuGE is able to learn a variety of challenging multi-stage robotic navigation and manipulation tasks in simulation using crowdsourced feedback from non-expert users. Moreover, this paradigm can be scaled to learning directly on real-world robots, using occasional, asynchronous feedback from human supervisors.
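    The bifurcation idea above can be sketched in miniature. This is a toy on a 1-D state space, not the HuGE implementation: the function names, the tournament-style frontier selection, and the simulated error rate are all assumptions chosen to show why noisy feedback can steer exploration without biasing the policy's training data.

    ```python
    # Toy illustration of HuGE's key idea: human feedback only steers WHERE
    # the agent explores; the policy is later trained by self-supervised
    # relabeling of whatever data was collected, so feedback noise cannot
    # corrupt the training targets. Names and details are assumptions.
    import random

    def noisy_human_preference(state_a, state_b, goal, err=0.2):
        """Sporadic, noisy comparison: which state looks closer to the goal?
        Simulated here with error rate `err`."""
        if abs(state_a - goal) <= abs(state_b - goal):
            closer, other = state_a, state_b
        else:
            closer, other = state_b, state_a
        return closer if random.random() > err else other

    def select_frontier(visited, goal, n_queries=20, err=0.2):
        """Run a noisy tournament over visited states to pick a promising
        frontier state to explore from next. Feedback is used ONLY here."""
        best = random.choice(visited)
        for _ in range(n_queries):
            best = noisy_human_preference(best, random.choice(visited), goal, err)
        return best

    # Policy learning would then treat every visited state as a reached goal
    # (hindsight relabeling), decoupled from the comparisons above.
    ```

    Even with a substantial error rate, repeated noisy comparisons tend to concentrate exploration near the goal, while the relabeled policy data stays unbiased because it never depends on the human labels.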
  5. The study of goal-reasoning agents capable of integrated action and execution has received a great deal of attention in recent years. While practical implementations and theoretical accounts of such agents have demonstrated flexible behavior in a variety of task environments, they tend to focus on complex environments far removed from classical planning assumptions. This paper formalizes classical planning problems in which an agent can change its goal(s) during execution. We identify the minimal changes to classical planning and formalize a model that supports "classical goal reasoning."
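    The core idea of the formalization above, a goal that changes mid-execution and forces a replan, can be sketched in a trivial number-line domain. The domain, the planner, and the schedule of goal changes are all illustrative assumptions, not the paper's model.

    ```python
    # Minimal sketch of "classical goal reasoning": a plan-execute loop in
    # which the goal may change during execution, triggering a replan.
    # The number-line domain and the goal-change schedule are assumptions.

    def plan(state: int, goal: int) -> list[int]:
        """Trivial planner: a sequence of unit steps toward the goal."""
        step = 1 if goal > state else -1
        return [step] * abs(goal - state)

    def execute(state: int, goal: int, goal_changes: dict[int, int]) -> int:
        """Execute the plan; if the goal changes at step t, discard the
        remaining actions and replan from the current state."""
        t = 0
        actions = plan(state, goal)
        while True:
            if t in goal_changes:              # a goal operation fires
                goal = goal_changes[t]
                actions = plan(state, goal)    # replan for the new goal
            if state == goal:
                return state
            state += actions.pop(0)
            t += 1

    print(execute(0, 5, {2: -3}))  # goal switches to -3 after two steps; prints -3
    ```

    The interesting formal question, which the paper addresses, is how little of the classical model must change to make this kind of mid-execution goal change well-defined.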