Title: Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems
Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, have become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to their appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
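The abstract calls for explanations that track the causal, purposive, and justificatory determinants of a robot's action. As a rough illustration only (the class, field names, and example wording below are hypothetical, not the authors' architecture), a sketch of such an explanation record in Python might look like this:

    # Hypothetical sketch: one way to represent an explanation that tracks the
    # causal, purposive, and justificatory determinants of a robot action, so a
    # justification can cite the norm that actually governed the decision.
    from dataclasses import dataclass

    @dataclass
    class Explanation:
        action: str   # what the robot did
        cause: str    # causal account: the decision path that produced the action
        purpose: str  # purposive account: the goal or task the action serves
        norm: str     # justificatory account: the accepted principle invoked

        def justify(self) -> str:
            """Render a dialogue-ready justification from the stored determinants."""
            return (f"I {self.action} because {self.cause}; "
                    f"this serves the goal of {self.purpose}, "
                    f"and it is permitted by the norm that {self.norm}.")

    # Example: justifying a violation of expected conversational behavior.
    exp = Explanation(
        action="interrupted the conversation",
        cause="my planner detected an imminent safety hazard",
        purpose="keeping the user out of the forklift's path",
        norm="preventing harm overrides politeness conventions",
    )
    print(exp.justify())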
Award ID(s):
1723963
NSF-PAR ID:
10301358
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ACM Transactions on Human-Robot Interaction
Volume:
10
Issue:
3
ISSN:
2573-9522
Page Range / eLocation ID:
1 to 23
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Explanations of AI agents' actions are considered to be an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these autonomous systems evolve from reactive, i.e., acting on user input, to proactive, i.e., acting without requiring user intervention, there is a need to explore how the explanation for the actions of these agents should evolve. In this work, we explore the design of explanations through participatory design methods for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions where they reasoned about the agent's design and actions. We qualitatively analyzed the data collected through these sessions and found that participants' reasoning about agent actions led them to speculate heavily on its design. These speculations significantly influenced participants' desire for explanations and the controls they sought to inform the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, it is also necessary to account for social norms in its explanation design. Finally, users' expertise in understanding their own habits and behaviors allows the agent to learn their preferences from them when justifying its actions.
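As a minimal sketch of the proactive behavior described above (the calendar statuses, message wording, and function name are invented for illustration and are not the studied system), the agent replies on the user's behalf with unavailability-related context and a note on why it acted:

    # Illustrative sketch only: statuses and wording are hypothetical placeholders.
    from datetime import datetime
    from typing import Optional

    def auto_respond(sender: str, now: datetime, calendar_status: str) -> Optional[str]:
        """Reply without user intervention only when the user appears unavailable."""
        context = {"in_meeting": "in a meeting",
                   "driving": "driving",
                   "focus_time": "in a focus block"}.get(calendar_status)
        if context is None:
            return None  # user seems available; the proactive agent stays out of the way
        return (f"Hi {sender}, I'm currently {context} and may be slow to reply. "
                f"(Sent automatically at {now:%H:%M} because my calendar shows I'm unavailable.)")

    print(auto_respond("Ana", datetime(2021, 5, 3, 14, 5), "in_meeting"))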

     
  2. Discussion of the “right to an explanation” has been increasingly relevant because of its potential utility for auditing automated decision systems, as well as for making objections to such decisions. However, most existing work on explanations focuses on collaborative environments, where designers are motivated to implement good-faith explanations that reveal potential weaknesses of a decision system. This motivation may not hold in an auditing environment. Thus, we ask: how much could explanations be used maliciously to defend a decision system? In this paper, we demonstrate how a black-box explanation system developed to defend a black-box decision system could manipulate decision recipients or auditors into accepting an intentionally discriminatory decision model. In a case-by-case scenario where decision recipients are unable to share their cases and explanations, we find that most individual decision recipients could receive a verifiable justification, even if the decision system is intentionally discriminatory. In a system-wide scenario where every decision is shared, we find that while justifications frequently contradict each other, there is no intuitive threshold to determine whether these contradictions arise from malicious justifications or from the simplicity requirements of these justifications conflicting with model behavior. We end with a discussion of how system-wide metrics may be more useful than explanation systems for evaluating overall decision fairness, while explanations could be useful outside of fairness auditing.
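To make the case-by-case manipulation concrete, here is a toy sketch (the features, thresholds, and wording are invented for illustration and are not the paper's decision or explanation system): each recipient receives a justification that is verifiable on their own case, yet the justifications contradict one another once compared system-wide.

    # Toy sketch: a decision model that secretly thresholds a protected attribute,
    # paired with an "explanation" module that hands each recipient a case-consistent
    # rule citing only non-protected features.
    def decide(case):
        return case["protected"] < 0.5              # intentionally discriminatory decision

    def explain(case, decision):
        # State a threshold rule over a non-protected feature that matches this one case.
        for name, value in case.items():
            if name == "protected":
                continue
            if decision:
                return f"approved because {name} >= {value:.1f}"
            return f"denied because {name} < {value + 0.1:.1f}"

    cases = [
        {"protected": 0.2, "income": 30.0},
        {"protected": 0.9, "income": 80.0},         # higher income, yet denied
    ]
    for c in cases:
        d = decide(c)
        print(("APPROVED" if d else "DENIED"), "->", explain(c, d))
    # The first recipient is told that income >= 30.0 suffices; the second, with higher
    # income, is told they were denied for low income. The contradiction only becomes
    # visible when the justifications are shared.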
  3. Mobile robots must navigate efficiently, reliably, and appropriately around people when acting in shared social environments. For robots to be accepted in such environments, we explore robot navigation for the social contexts of each setting. Navigating through dynamic environments while considering only a collision-free path is a long-solved problem. In human-robot environments, the challenge is no longer about efficiently navigating from one point to another. Autonomously detecting the context and adapting to an appropriate social navigation strategy is vital for social robots’ long-term applicability in dense human environments. As complex social environments, museums are suitable for studying such behavior as they have many different navigation contexts in a small space. Our prior Socially-Aware Navigation model considered context classification, object detection, and pre-defined rules to define navigation behavior in more specific contexts, such as a hallway or queue. This work uses environmental context, object information, and more realistic interaction rules for complex social spaces. In the first part of the project, we convert real-world interactions into algorithmic rules for use in a robot’s navigation system. Moreover, we use context recognition, object detection, and scene data for context-appropriate rule selection. We introduce our methodology of studying social behaviors in complex contexts, different analyses of our text corpus for museums, and the presentation of extracted social norms. Finally, we demonstrate the application of some of these rules in scenarios in the simulation environment.
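As a rough sketch of the context-to-rule selection idea (the contexts, object labels, and rule texts below are illustrative placeholders, not the rules extracted from the museum corpus), a recognized context could be mapped to a navigation rule like this:

    # Hypothetical sketch of context-appropriate rule selection from scene context
    # and detected objects; labels and rules are illustrative only.
    CONTEXT_RULES = {
        "hallway": "keep to the right and match pedestrian flow",
        "queue":   "join at the end and maintain the spacing of the people ahead",
        "exhibit": "do not pass between a visitor and the artwork they are viewing",
        "default": "keep a comfortable distance and yield to people",
    }

    def classify_context(scene_label: str, detected_objects: set) -> str:
        """Very small stand-in for context recognition plus object detection."""
        if scene_label == "corridor":
            return "hallway"
        if "ticket_desk" in detected_objects and "line_of_people" in detected_objects:
            return "queue"
        if "painting" in detected_objects or "display_case" in detected_objects:
            return "exhibit"
        return "default"

    def select_rule(scene_label: str, detected_objects: set) -> str:
        return CONTEXT_RULES[classify_context(scene_label, detected_objects)]

    print(select_rule("gallery", {"painting", "visitor"}))
    # -> "do not pass between a visitor and the artwork they are viewing"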
  4. The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines. 
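As a greatly simplified sketch of how grammar and haptic information can be combined (the probabilities and action names are toy values; the paper's method induces the grammar from demonstrations and uses an improved Earley parser rather than this one-step lookup), the next action can be chosen by weighting a grammar prior with a haptic likelihood:

    # Simplified sketch: hand-written grammar prior fused with a toy haptic likelihood.
    GRAMMAR_PRIOR = {                       # toy P(next action | actions so far)
        ("grasp",):        {"push": 0.5, "twist": 0.5},
        ("grasp", "push"): {"twist": 0.7, "pull": 0.3},   # push-and-twist safety cap
    }

    def haptic_likelihood(action: str, downward_force: float) -> float:
        """Toy likelihood of the current force reading under each candidate action."""
        if action == "push":
            return min(1.0, downward_force / 10.0)
        return max(0.05, 1.0 - downward_force / 10.0)

    def next_action(history: tuple, downward_force: float) -> str:
        prior = GRAMMAR_PRIOR.get(history, {"push": 0.5, "twist": 0.5})
        scores = {a: p * haptic_likelihood(a, downward_force) for a, p in prior.items()}
        return max(scores, key=scores.get)

    print(next_action(("grasp",), downward_force=8.0))         # strong force -> "push"
    print(next_action(("grasp", "push"), downward_force=2.0))  # grammar now favors "twist"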
  5.
People come in different shapes and sizes, but most will perform similarly well if asked to complete a task requiring fine manual dexterity – such as holding a pen or picking up a single grape. How can different individuals, with different sized hands and muscles, produce such similar movements? One explanation is that an individual’s brain and nervous system become precisely tuned to the mechanics of the body’s muscles and skeleton. An alternative explanation is that the brain and nervous system use a more “robust” control policy that can compensate for differences in the body by relying on feedback from the senses to guide the movements. To distinguish between these two explanations, Uyanik et al. turned to weakly electric freshwater fish known as glass knifefish. These fish seek refuge within root systems, reed grass and other objects in the water. They swim backwards and forwards to stay hidden despite constantly changing currents. Each fish shuttles back and forth by moving a long ribbon-like fin on the underside of its body. Uyanik et al. measured the movements of the ribbon fin under controlled conditions in the laboratory, and then used the data to create computer models of the brain and body of each fish. The models of each fish’s brain and body were quite different. To study how the brain interacts with the body, Uyanik et al. then conducted experiments reminiscent of those described in the story of Frankenstein and transplanted the brain from each computer model into the body of a different model fish. These “brain swaps” had almost no effect on the model’s simulated swimming behavior. Instead, these “Frankenfish” used sensory feedback to compensate for any mismatch between their brain and body. This suggests that, for some behaviors, an animal’s brain does not need to be precisely tuned to the specific characteristics of its body. Instead, robust control of movement relies on many seemingly redundant systems that provide sensory feedback. This has implications for the field of robotics. It further suggests that when designing robots, engineers should prioritize enabling the robots to use sensory feedback to cope with unexpected events, a well-known idea in control engineering.
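The feedback-compensation idea can be illustrated with a toy simulation (the gains and refuge motion are invented for illustration and are not fit to the fish data): a simple proportional controller keeps tracking a swaying refuge even after its "body" gain is changed, because the sensed error itself drives the correction.

    # Toy sketch of robust control via sensory feedback; parameters are illustrative.
    import math

    def track_refuge(controller_gain: float, body_gain: float, steps: int = 200) -> float:
        """Simulate 1-D refuge tracking; return the worst tracking error after settling."""
        dt, position, worst = 0.05, 0.0, 0.0
        for k in range(steps):
            refuge = math.sin(0.5 * k * dt)   # the refuge sways back and forth
            error = refuge - position         # sensory feedback: refuge relative to the fish
            position += controller_gain * body_gain * error * dt
            if k > steps // 2:                # ignore the initial transient
                worst = max(worst, abs(error))
        return worst

    matched = track_refuge(controller_gain=4.0, body_gain=1.0)
    swapped = track_refuge(controller_gain=4.0, body_gain=2.0)  # "brain swap": same brain, different body
    print(f"worst error, matched body: {matched:.3f}; mismatched body: {swapped:.3f}")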