Title: When Humans Aren't Optimal: Robots that Collaborate with Risk-Aware Humans
Award ID(s):
1941722
NSF-PAR ID:
10208440
Author(s) / Creator(s):
Date Published:
Journal Name:
Human-Robot Interaction
Page Range / eLocation ID:
43 to 52
Sponsoring Org:
National Science Foundation
More Like this
  1. A hallmark of human intelligence is the ability to understand and influence other minds. Humans engage in inferential social learning (ISL) by using commonsense psychology to learn from others and help others learn. Recent advances in artificial intelligence (AI) are raising new questions about the feasibility of human–machine interactions that support such powerful modes of social learning. Here, we envision what it means to develop socially intelligent machines that can learn, teach, and communicate in ways that are characteristic of ISL. Rather than machines that simply predict human behaviours or recapitulate superficial aspects of human sociality (e.g. smiling, imitating), we should aim to build machines that can learn from human inputs and generate outputs for humans by proactively considering human values, intentions and beliefs. While such machines can inspire next-generation AI systems that learn more effectively from humans (as learners) and even help humans acquire new knowledge (as teachers), achieving these goals will also require scientific studies of their counterpart: how humans reason about machine minds and behaviours. We close by discussing the need for closer collaborations between the AI/ML and cognitive science communities to advance a science of both natural and artificial intelligence. This article is part of a discussion meeting issue ‘Cognitive artificial intelligence’.
  2. As robots become prevalent, merely thinking of their existence may affect how people behave. When interacting with a robot, people have been shown to conform to the robot's answers rather than keep their own initial responses [1]. In this study, we examined how robots affect conformity to other humans. We primed participants to think of different experiences: Humans (an experience with a human stranger), Robots (an experience with a robot), or Neutral (daily life). We then measured whether participants conformed to other humans in their survey answers. Results indicated that people conformed more when thinking of Humans or Robots than when thinking of Neutral events, implying that robots affect human conformity to other humans much as human strangers do.
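The between-subjects logic of this design (three priming conditions, one conformity measure) can be made concrete with a small analysis sketch. All data below are simulated purely for illustration; the group sizes, effect sizes, and variable names are hypothetical and are not taken from the study.

```python
# Hypothetical sketch of the between-subjects comparison described above.
# Every number here is simulated for illustration; nothing reproduces the
# study's actual data or results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated conformity scores (e.g., how many survey answers a participant
# changed toward the group's answers) per priming condition.
humans  = rng.normal(loc=3.0, scale=1.0, size=40)   # primed: human stranger
robots  = rng.normal(loc=2.9, scale=1.0, size=40)   # primed: robot
neutral = rng.normal(loc=2.2, scale=1.0, size=40)   # primed: daily life

# Omnibus test: do the three priming conditions differ at all?
f_stat, p_val = stats.f_oneway(humans, robots, neutral)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise follow-ups mirroring the reported pattern: each social prime
# (Humans, Robots) is compared against the Neutral baseline.
for name, group in (("Humans", humans), ("Robots", robots)):
    t, p = stats.ttest_ind(group, neutral)
    print(f"{name} vs. Neutral: t = {t:.2f}, p = {p:.4f}")
```

The reported finding corresponds to the pairwise contrasts coming out significant (both primes raise conformity over Neutral) while the two social primes themselves remain comparable.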
  3. Metahuman systems are new, emergent sociotechnical systems in which machines that learn join human learning and create original systemic capabilities. Metahuman systems will change many facets of how we think about organizations and work, and they will push information systems research in new directions that may involve revising the field's research goals, methods, and theorizing. Information systems researchers can look beyond the capabilities and constraints of human learning toward hybrid human/machine learning systems that exhibit major differences in scale, scope, and speed. We review how these changes influence organization design and goals, and we identify four generic organization-level functions critical to organizing metahuman systems properly: delegating, monitoring, cultivating, and reflecting. We show how each function raises new research questions for the field. We conclude by noting that improved understanding of metahuman systems will come primarily from learning-by-doing, as information systems scholars try out new forms of hybrid learning in multiple settings to generate novel, generalizable, impactful designs. This need for large-scale experimentation will push many scholars out of their comfort zones, because it calls for revitalizing the action research programs that informed the first wave of sociotechnical research at the dawn of the automation of work systems.
  5. The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent perspective about the local explanation as a valuable and indispensable envision in building CNNs versus the process that exhausts them due to the heuristic nature of detecting vulnerability. Moreover, steering the CNNs based on the vulnerability learned from the diagnosis seemed highly challenging. To mitigate the gap, we designed DeepFuse, the first interactive design that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations. DeepFuse helps CNN engineers to systemically search unreasonable local explanations and annotate the new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotation such that the model doesn't introduce similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants made a more accurate and reasonable model than the current state-of-the-art. Also, participants found the way DeepFuse guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.
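The abstract does not specify which local-explanation technique DeepFuse builds on; as one concrete illustration of the kind of per-image CNN heatmap described above, here is a minimal Grad-CAM-style sketch in PyTorch. The model, layer choice, and function name are assumptions for the sketch, not part of DeepFuse.

```python
# A minimal Grad-CAM-style local explanation for a CNN, sketched in PyTorch.
# Grad-CAM is one standard heatmap method; treat this as illustrative only,
# since the abstract does not name the technique DeepFuse uses.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

# Hook the last convolutional block: its spatial activations and gradients
# determine which image regions drove the class score.
model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(value=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0].detach()))

def grad_cam(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a [0, 1] heatmap over `image` (a normalized (1, 3, H, W)
    tensor) showing which regions drove the score for `target_class`."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    acts, grads = activations["value"], gradients["value"]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # pool grads per channel
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()
```

Overlaying the returned heatmap on the input image is what lets an engineer judge whether an explanation is "reasonable"; per the abstract, DeepFuse's contribution is making that judge-and-correct loop systematic and labor-efficient, not producing the heatmaps themselves.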

     