Title: Problem recognition, explanation and goal formulation
Goal reasoning agents can solve novel problems by detecting an anomaly between expectations and observations, generating explanations about plausible causes for the anomaly, and formulating goals to remove the cause. Yet not all anomalies represent problems. We claim that an agent able to discern the difference between benign anomalies and those that represent an actual problem will perform better. Furthermore, we present a new definition of the term “problem” in a goal reasoning context. This paper discusses the role of explanations and goal formulation in response to developing problems and implements that response. The paper illustrates goal formulation in a mine clearance domain and a labor relations domain. We also show the empirical difference between a standard planning agent, an agent that detects anomalies, and an agent that recognizes problems.
Award ID(s):
1849131
NSF-PAR ID:
10347605
Author(s) / Creator(s):
; ; ; ;
Editor(s):
Cox, Michael T.
Date Published:
Journal Name:
Proceedings of the Seventh Annual Conference on Advances in Cognitive Systems
Page Range / eLocation ID:
437-452
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
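
The abstract above describes a loop of anomaly detection, explanation, problem recognition, and goal formulation. The following is a minimal sketch of how such a loop might be organized; every name in it (Anomaly, Explanation, goal_reasoning_step, and so on) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch of a goal-reasoning loop that distinguishes benign anomalies
# from actual problems before formulating a new goal. All names here are
# illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Anomaly:
    expectation: str   # what the agent expected to observe
    observation: str   # what it actually observed


@dataclass
class Explanation:
    cause: str         # a plausible cause for the anomaly
    harmful: bool      # whether the cause threatens the agent's goals


def detect_anomaly(expectation: str, observation: str) -> Optional[Anomaly]:
    """Report an anomaly whenever observations diverge from expectations."""
    if expectation != observation:
        return Anomaly(expectation, observation)
    return None


def goal_reasoning_step(expectation: str,
                        observation: str,
                        explain: Callable[[Anomaly], Explanation]) -> Optional[str]:
    """Return a new goal only when an anomaly is explained as a real problem."""
    anomaly = detect_anomaly(expectation, observation)
    if anomaly is None:
        return None                        # no anomaly: keep executing the current plan
    explanation = explain(anomaly)
    if not explanation.harmful:
        return None                        # benign anomaly: no goal change needed
    return f"remove({explanation.cause})"  # formulate a goal that removes the cause


# Example: a detected mine is explained as harmful, so a clearance goal is formulated.
if __name__ == "__main__":
    goal = goal_reasoning_step(
        expectation="area clear",
        observation="mine detected",
        explain=lambda a: Explanation(cause="mine", harmful=True),
    )
    print(goal)  # remove(mine)
```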
More Like this
  1. Goal reasoning agents can solve novel problems by detecting an anomaly between expectations and observations, generating explanations about plausible causes for the anomaly, and formulating goals to remove the cause. Yet, not all anomalies represent problems. This paper addresses discerning the difference between benign anomalies and those that represent an actual problem for an agent. Furthermore, we present a new definition of the term “problem” in a goal reasoning context. This paper discusses the role of explanations and goal formulation in response to developing problems and implements that response; the paper also illustrates these ideas in a mine clearance domain and a labor relations domain. We also show the empirical difference between a standard planning agent, an agent that detects anomalies, and an agent that recognizes problems.
  2. In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent’s model. To do so, the agent provides an explanation that can be used to update the model of the human such that the agent’s plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and given that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
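
The model reconciliation problem in the entry above asks for an explanation ε ⊆ KBa such that updating KBh with ε makes it entail the query q. The sketch below illustrates that formulation for a small propositional setting with brute-force entailment checking and explanation size as the cost; the encoding and all names are assumptions for illustration, not the paper's algorithms or cost functions.

```python
# Minimal sketch of logic-based model reconciliation: find a smallest
# explanation eps ⊆ KB_a such that KB_h ∪ eps entails query q.
# Propositional CNF only, brute-force satisfiability; illustrative only,
# not the paper's algorithms or cost functions.
from itertools import combinations, product

Clause = frozenset      # a clause is a set of literals, e.g. frozenset({"p", "~r"})


def satisfiable(clauses):
    """Truth-table check: is there an assignment satisfying every clause?"""
    symbols = sorted({lit.lstrip("~") for c in clauses for lit in c})
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(any(model[l.lstrip("~")] != l.startswith("~") for l in c) for c in clauses):
            return True
    return False


def entails(kb, q):
    """KB |= q  iff  KB ∪ {~q} is unsatisfiable (q is a single positive atom here)."""
    return not satisfiable(set(kb) | {Clause({"~" + q})})


def explain(kb_a, kb_h, q):
    """Return a minimum-cardinality eps ⊆ KB_a with (KB_h ∪ eps) |= q, if one exists."""
    assert entails(kb_a, q), "the agent's own model must entail the query"
    for size in range(len(kb_a) + 1):
        for eps in combinations(kb_a, size):
            if entails(set(kb_h) | set(eps), q):
                return set(eps)
    return None


# Tiny example: the human is missing the rule p -> valid, so that rule is the explanation.
kb_a = {Clause({"p"}), Clause({"~p", "valid"})}   # agent knows p and p -> valid
kb_h = {Clause({"p"})}                            # human only knows p
print(explain(kb_a, kb_h, "valid"))               # prints the missing clause {~p, valid}
```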
  3. Goal management in autonomous agents has been a problem of interest for a long time. Multiple goal operations are required to solve an agent goal management problem; for example, goal operations include selection, change, formulation, delegation, and monitoring. Many researchers from different fields have developed solution approaches with an implicit or explicit focus on goal operations, for example, scheduling the agents’ goals, performing cost-benefit analysis to select and organize goals, and formulating agent goals in unexpected situations. However, none of them explicitly sheds light on the agents’ response when multiple goal operations occur simultaneously. This paper develops an algorithm to address agent goal management when multiple goal operations co-occur and presents how such an interaction would improve agent goal management in different domains.
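
The entry above describes an algorithm for goal management when several goal operations co-occur, but the abstract gives no procedural detail. As a purely illustrative sketch, under the assumption that co-occurring operations can be resolved in a fixed priority order (not the paper's algorithm), a goal manager might look like this:

```python
# Illustrative sketch only: resolve co-occurring goal operations by applying
# them to the agent's goal agenda in a fixed priority order. This ordering is
# an assumption for illustration, not the algorithm from the paper.

# Higher-priority operations (lower number) are applied first when several co-occur.
PRIORITY = {"formulate": 0, "change": 1, "select": 2, "delegate": 3, "monitor": 4}


def manage_goals(agenda, pending_ops):
    """Apply a batch of co-occurring goal operations to the goal agenda.

    agenda: list of goal strings; pending_ops: list of (operation, goal) pairs.
    Returns the updated agenda and the currently selected goal (if any).
    """
    selected = None
    for op, goal in sorted(pending_ops, key=lambda p: PRIORITY[p[0]]):
        if op == "formulate" and goal not in agenda:
            agenda.append(goal)                    # add a newly formulated goal
        elif op == "change" and agenda:
            agenda[0] = goal                       # replace the current top goal
        elif op == "select" and goal in agenda:
            selected = goal                        # commit to pursuing this goal
        elif op == "delegate" and goal in agenda:
            agenda.remove(goal)                    # hand the goal off to another agent
        elif op == "monitor":
            pass                                   # placeholder: check goal progress
    return agenda, selected


# Example: a formulation and a selection arrive in the same cycle.
agenda, selected = manage_goals(
    agenda=["clear minefield"],
    pending_ops=[("select", "clear minefield"), ("formulate", "report status")],
)
print(agenda, selected)   # ['clear minefield', 'report status'] clear minefield
```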
  4. The ability to detect failures and anomalies is a fundamental requirement for building reliable systems for computer vision applications, especially safety-critical applications of semantic segmentation such as autonomous driving and medical image analysis. In this paper, we systematically study failure and anomaly detection for semantic segmentation and propose a unified framework, consisting of two modules, to address these two related problems. The first module is an image synthesis module, which generates a synthesized image from a segmentation layout map, and the second is a comparison module, which computes the difference between the synthesized image and the input image. We validate our framework on three challenging datasets and improve the state of the art by large margins, i.e., 6% AUPR-Error on Cityscapes, 7% Pearson correlation on pancreatic tumor segmentation in MSD, and 20% AUPR on StreetHazards anomaly segmentation.
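
The framework in the entry above pairs an image synthesis module (segmentation layout to synthesized image) with a comparison module (synthesized image versus input image). The sketch below shows only that comparison step in NumPy; synthesize_from_segmentation is a hypothetical stand-in for the learned synthesis model, and the per-pixel L1 difference is an illustrative scoring choice, not the paper's comparison module.

```python
# Illustrative sketch of the compare step in a synthesize-and-compare pipeline
# for segmentation failure/anomaly detection. synthesize_from_segmentation is a
# hypothetical stand-in for a learned synthesis model; the per-pixel L1 score
# is an illustrative choice, not the paper's comparison module.
import numpy as np


def synthesize_from_segmentation(seg_map: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: map each class label to a fixed RGB color.

    A real system would use a learned conditional image synthesis model here.
    """
    palette = np.array([[0, 0, 0], [128, 64, 128], [220, 20, 60]], dtype=np.float32)
    return palette[seg_map]                      # (H, W) labels -> (H, W, 3) image


def anomaly_score(image: np.ndarray, seg_map: np.ndarray) -> np.ndarray:
    """Per-pixel score: large values suggest the segmentation disagrees with the image."""
    reconstruction = synthesize_from_segmentation(seg_map)
    return np.abs(image.astype(np.float32) - reconstruction).mean(axis=-1)


# Toy example: a 2x2 image whose lower-right pixel is not explained by its label.
image = np.array([[[0, 0, 0], [128, 64, 128]],
                  [[128, 64, 128], [255, 255, 255]]], dtype=np.float32)
seg_map = np.array([[0, 1],
                    [1, 1]])
scores = anomaly_score(image, seg_map)
print(scores)            # highest score at the unexplained lower-right pixel
```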
  5. This work-in-progress paper describes ongoing work to understand the ways in which students make use of manipulatives to develop their representational competence and deepen their conceptual understanding of course content. Representational competence refers to the fluency with which a subject expert can move between different representations of a concept (e.g., mathematical, symbolic, graphical, 2D vs. 3D, pictorial) as appropriate for communication, reasoning, and problem solving. Several hands-on activities for engineering statics have been designed and implemented in face-to-face courses since fall 2016. In the transition to online learning in response to the COVID-19 pandemic, modeling kits were sent home to students so they could work on the activities at their own pace and complete the associated worksheets. An assignment following the vector activities required students to create videotaped or written reflections with annotated pictures using the models to explain their thinking around key concepts. Students made connections between abstract symbolic representations and their physical models to explain concepts such as a general 3D unit vector, the difference between spherical coordinate angles and coordinate direction angles, and the meaning of decomposing a vector into components perpendicular and parallel to a line. Thematic analysis of the video and written data was used to develop codes and identify themes in students’ use of the models as it relates to developing representational competence. The student submissions also informed the design of think-aloud exercises in one-on-one semi-structured interviews between researchers and students that are currently in progress. This paper presents initial work analyzing and discussing themes that emerged from the initial video and written analysis, along with plans for the subsequent think-aloud interviews, all focused on the specific attributes of the models that students use to make sense of course concepts. The ultimate goal of this work is to develop general guidelines for the design of manipulatives to support student learning in a variety of STEM topics.