
Title: Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems
Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms—explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
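As a rough illustration of the causal/purposive/justificatory distinction drawn in the abstract, the minimal Python sketch below generates three kinds of answers from a single decision record annotated with the norm that licensed it. All class and function names here are hypothetical; this is not the authors' cognitive architecture, only a toy rendering of the idea that explanations should track the robot's actual decision trace and its normative grounds.

```python
# Illustrative sketch only: a toy decision record that supports causal,
# purposive, and justificatory queries. Names (Norm, Decision, explain)
# are hypothetical and not part of the architecture described in the paper.
from dataclasses import dataclass

@dataclass
class Norm:
    name: str          # e.g., "yield-right-of-way"
    statement: str     # human-readable normative principle

@dataclass
class Decision:
    action: str        # what the robot did
    cause: str         # perceptual or internal trigger
    goal: str          # purpose the action served
    norm: Norm         # principle that licensed (or constrained) the action

def explain(d: Decision, query: str) -> str:
    """Answer 'why', 'what for', and 'by what right' questions from the
    same decision trace, so the explanation tracks what actually happened."""
    if query == "causal":
        return f"I performed '{d.action}' because {d.cause}."
    if query == "purposive":
        return f"I performed '{d.action}' in order to {d.goal}."
    if query == "justificatory":
        return f"'{d.action}' is supported by the norm '{d.norm.name}': {d.norm.statement}."
    return "I cannot answer that kind of question."

if __name__ == "__main__":
    d = Decision(
        action="waited at the doorway",
        cause="a person was walking through the doorway",
        goal="avoid blocking their path",
        norm=Norm("yield-right-of-way",
                  "robots should yield to humans in shared passages"),
    )
    for q in ("causal", "purposive", "justificatory"):
        print(explain(d, q))
```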
Authors:
Award ID(s): 1723963
Publication Date:
NSF-PAR ID: 10301358
Journal Name: ACM Transactions on Human-Robot Interaction
Volume: 10
Issue: 3
Page Range or eLocation-ID: 1 to 23
ISSN: 2573-9522
Sponsoring Org: National Science Foundation
More Like this
  1. The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines.
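To make the "jointly leverage both the haptic and grammar models" idea concrete, here is a simplified sketch of scoring candidate action sequences by a grammar prior combined with a haptic likelihood. This is not the paper's improved Earley parser; the probabilities, action names, and fusion rule are invented purely for illustration.

```python
# Toy sketch of jointly scoring candidate action sequences with a symbolic
# grammar prior and a sensory (haptic) likelihood. NOT the paper's improved
# Earley parsing algorithm; all numbers and action names are made up.
import math

# Hypothetical prior over multistep action sequences induced from demonstrations.
grammar_prior = {
    ("grasp", "push-twist", "pull"): 0.6,   # child-safe cap
    ("grasp", "twist", "pull"): 0.3,        # ordinary cap
    ("grasp", "pull"): 0.1,
}

# Hypothetical per-action likelihoods of the observed haptic signal.
haptic_likelihood = {
    "grasp": 0.9, "push-twist": 0.7, "twist": 0.2, "pull": 0.8,
}

def joint_log_score(seq):
    """log P(sequence) + sum of log P(haptic signal | action): higher is better."""
    score = math.log(grammar_prior[seq])
    for action in seq:
        score += math.log(haptic_likelihood[action])
    return score

best = max(grammar_prior, key=joint_log_score)
print("best joint explanation of the observed behavior:", best)
```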
  2. People come in different shapes and sizes, but most will perform similarly well if asked to complete a task requiring fine manual dexterity – such as holding a pen or picking up a single grape. How can different individuals, with different sized hands and muscles, produce such similar movements? One explanation is that an individual’s brain and nervous system become precisely tuned to the mechanics of the body’s muscles and skeleton. An alternative explanation is that the brain and nervous system use a more “robust” control policy that can compensate for differences in the body by relying on feedback from the senses to guide the movements. To distinguish between these two explanations, Uyanik et al. turned to weakly electric freshwater fish known as glass knifefish. These fish seek refuge within root systems, reed grass and other objects in the water. They swim backwards and forwards to stay hidden despite constantly changing currents. Each fish shuttles back and forth by moving a long ribbon-like fin on the underside of its body. Uyanik et al. measured the movements of the ribbon fin under controlled conditions in the laboratory, and then used the data to create computer models of the brain and body of each fish. The models of each fish’s brain and body were quite different. To study how the brain interacts with the body, Uyanik et al. then conducted experiments reminiscent of those described in the story of Frankenstein and transplanted the brain from each computer model into the body of a different model fish. These “brain swaps” had almost no effect on the model’s simulated swimming behavior. Instead, these “Frankenfish” used sensory feedback to compensate for any mismatch between their brain and body. This suggests that, for some behaviors, an animal’s brain does not need to be precisely tuned to the specific characteristics of its body. Instead, robust control of movement relies on many seemingly redundant systems that provide sensory feedback. This has implications for the field of robotics. It further suggests that when designing robots, engineers should prioritize enabling the robots to use sensory feedback to cope with unexpected events, a well-known idea in control engineering.
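The control-engineering point at the end of that digest, that feedback compensates for a mismatched body, can be shown with a very small simulation. The plant model, gains, and time step below are invented for illustration and have nothing to do with the fish models in the study.

```python
# Illustrative sketch: with sensory feedback, the same controller tracks a
# target reasonably well even when paired with a differently tuned "body".
# Plant model, gains, and time step are invented for this toy example.
def simulate(body_gain, feedback_gain=2.0, target=1.0, dt=0.01, steps=500):
    """First-order plant x' = body_gain * u, driven by proportional feedback
    u = feedback_gain * (target - x). Returns the final tracking error."""
    x = 0.0
    for _ in range(steps):
        u = feedback_gain * (target - x)   # sensory feedback of position error
        x += body_gain * u * dt
    return abs(target - x)

# "Brain swap": the identical feedback controller on two different bodies.
for body_gain in (0.5, 1.5):
    print(f"body gain {body_gain}: final tracking error {simulate(body_gain):.4f}")
```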
  3. In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent’s model. To do so, the agent provides an explanation that can be used to update the model of the human such that the agent’s plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and where KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
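The explanation-as-update formulation above (find ε ⊆ KBa so that KBh updated with ε entails q) can be illustrated with a brute-force propositional sketch. This is our own toy instance, not the paper's logic-based framework: it uses truth-table entailment over a handful of atoms and minimizes explanation size only, ignoring the richer cost functions and hybrid planning settings the paper covers.

```python
# Brute-force propositional sketch of model reconciliation: find a smallest
# explanation eps ⊆ KBa such that KBh ∪ eps entails q. Toy example only;
# formulas and atoms are invented for illustration.
from itertools import combinations, product

ATOMS = ["has_fuel", "road_open", "plan_valid"]

def entails(kb, query):
    """KB |= query iff every truth assignment satisfying KB also satisfies query."""
    for values in product([True, False], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, values))
        if all(f(m) for f in kb) and not query(m):
            return False
    return True

# Agent's knowledge base KBa (entails the query) -- illustrative formulas only.
KBa = [
    lambda m: m["has_fuel"],
    lambda m: m["road_open"],
    lambda m: (not (m["has_fuel"] and m["road_open"])) or m["plan_valid"],
]
# Human's knowledge base KBh: missing the facts needed to see the plan is valid.
KBh = [
    lambda m: (not (m["has_fuel"] and m["road_open"])) or m["plan_valid"],
]
query = lambda m: m["plan_valid"]

def smallest_explanation(kb_a, kb_h, q):
    """Return a minimum-cardinality eps ⊆ KBa whose addition makes KBh entail q."""
    for k in range(len(kb_a) + 1):
        for eps in combinations(kb_a, k):
            if entails(kb_h + list(eps), q):
                return eps
    return None

eps = smallest_explanation(KBa, KBh, query)
print("explanation size:", len(eps))  # here: the two facts has_fuel and road_open
```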
  4. A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice. Through in-depth, semi-structured interviews of experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. In particular, they highlight concerns about potential loss of professional agency and skill, limited understanding and thereby both over- and under-reliance on decision-support systems, and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics. Based on our findings, we conclude that predictive coding tools—and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice—challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals—judges, law firms, and lawyers—to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, third-party vendors and their technical experts. This system for choosing technical systems upon which lawyers rely to make professional decisions—e.g., whether documents are responsive, or whether the standard of proportionality has been met—is no longer sufficient.
As the tools of medicine are reviewed by appropriate experts before they are put out for consideration and adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers’ decision making. Relatedly, because predictive coding systems are used to produce lawyers’ professional judgment, we argue they must be designed for contestability—providing greater transparency, interaction, and configurability around embedded choices to ensure decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.
  5. Teachable agents are pedagogical agents that employ the ‘learning-by-teaching’ strategy, which facilitates learning by encouraging students to construct explanations, reflect on misconceptions, and elaborate on what they know. Teachable agents present unique opportunities to maximize the benefits of a ‘learning-by-teaching’ experience. For example, teachable agents can provide socio-emotional support to learners, influencing learner self-efficacy and motivation, and increasing learning. Prior work has found that a teachable agent which engages learners socially through social dialogue and paraverbal adaptation on pitch can have positive effects on rapport and learning. In this work, we introduce Emma, a teachable robotic agent that can speak socially and adapt on both pitch and loudness. Based on the phenomenon of entrainment, multi-feature adaptation on tone and loudness has been found in human-human interactions to be highly correlated with learning and social engagement. In a study with 48 middle school participants, we performed a novel exploration of how multi-feature adaptation can influence learner rapport and learning as an independent social behavior and combined with social dialogue. We found significantly more rapport for Emma when the robot both adapted and spoke socially than when Emma only adapted, and indications of a similar trend for learning. Additionally, it appears that an individual’s initial comfort level with robots may influence how they respond to such behavior, suggesting that for individuals who are more comfortable interacting with robots, social behavior may have a more positive influence.
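As a rough illustration of multi-feature paraverbal adaptation of the kind described above, the sketch below nudges an agent's pitch and loudness part-way toward the learner's most recently observed values over successive dialogue turns. The update rule and constants are invented for illustration and are not Emma's adaptation algorithm.

```python
# Toy sketch of entrainment-style multi-feature adaptation: the agent shifts
# its speaking pitch and loudness a fraction of the way toward the learner's
# observed values. Update rule and constants are illustrative only.
def adapt(robot, learner, rate=0.3):
    """Move each prosodic feature a fraction `rate` toward the learner's value."""
    return {
        feature: robot[feature] + rate * (learner[feature] - robot[feature])
        for feature in ("pitch_hz", "loudness_db")
    }

robot_prosody = {"pitch_hz": 180.0, "loudness_db": 60.0}
learner_prosody = {"pitch_hz": 220.0, "loudness_db": 66.0}

for turn in range(3):  # adapt over a few dialogue turns
    robot_prosody = adapt(robot_prosody, learner_prosody)
    print(f"turn {turn + 1}: {robot_prosody}")
```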