Title: LIMIT: Learning Interfaces to Maximize Information Transfer
Robots can use auditory, visual, or haptic interfaces to convey information to human users. The way these interfaces select signals is typically pre-defined by the designer: for instance, a haptic wristband might vibrate when the robot is moving and squeeze when the robot stops. But different people interpret the same signals in different ways, so that what makes sense to one person might be confusing or unintuitive to another. In this paper we introduce a unified algorithmic formalism for learning co-adaptive interfaces from scratch. Our method does not need to know the human’s task (i.e., what the human is using these signals for). Instead, our insight is that interpretable interfaces should select signals that maximize correlation between the human’s actions and the information the interface is trying to convey. Applying this insight we develop LIMIT: Learning Interfaces to Maximize Information Transfer. LIMIT optimizes a tractable, real-time proxy of information gain in continuous spaces. The first time a person works with our system the signals may appear random; but over repeated interactions the interface learns a one-to-one mapping between displayed signals and human responses. Our resulting approach is personalized to the current user yet not tied to any specific interface modality. We compare LIMIT to state-of-the-art baselines across controlled simulations, an online survey, and an in-person user study with auditory, visual, and haptic interfaces. Overall, our results suggest that LIMIT learns interfaces that enable users to complete the task more quickly and efficiently, and that users subjectively prefer LIMIT to the alternatives. See videos here: https://youtu.be/IvQ3TM1_2fA.
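To make the core idea concrete, here is a minimal sketch, not the authors' implementation: it assumes the interface must convey a small set of discrete states, fits a simple least-squares model of how the current user responds to signals, and greedily assigns each state the signal whose predicted response is farthest from the responses already assigned (a crude stand-in for the information-gain proxy the paper optimizes).

import numpy as np

def fit_response_model(signals, actions):
    """Least-squares map from displayed signal to observed human action."""
    X = np.hstack([signals, np.ones((len(signals), 1))])  # affine features
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return lambda s: np.append(s, 1.0) @ W

def assign_signals(states, candidate_signals, predict):
    """Greedily give each state the signal whose predicted human response is
    farthest from the responses already assigned to other states."""
    chosen_responses, mapping = [], {}
    remaining = list(candidate_signals)
    for state in states:
        def separation(s):
            a = predict(s)
            return min((np.linalg.norm(a - c) for c in chosen_responses),
                       default=np.inf)
        best = max(remaining, key=separation)
        chosen_responses.append(predict(best))
        mapping[state] = best
        remaining = [s for s in remaining if s is not best]
    return mapping

# Example: 2-D haptic signals and the 1-D responses a user has given so far.
signals = [np.array([0.0, 0.0]), np.array([0.0, 1.0]),
           np.array([1.0, 0.0]), np.array([1.0, 1.0])]
actions = np.array([[0.1], [0.9], [0.4], [1.2]])
predict = fit_response_model(np.vstack(signals), actions)
print(assign_signals(["moving", "stopped"], signals, predict))

In a co-adaptive loop, the response model would be refit after every interaction and the assignment re-run, which is how a one-to-one mapping between signals and responses can emerge over repeated use.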
Award ID(s):
2129155 2129201
PAR ID:
10543643
Author(s) / Creator(s):
Publisher / Repository:
ACM Transactions on Human-Robot Interaction
Date Published:
Journal Name:
ACM Transactions on Human-Robot Interaction
ISSN:
2573-9522
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Robots should personalize how they perform tasks to match the needs of individual human users. Today’s robots achieve this personalization by asking for the human’s feedback in the task space. For example, an autonomous car might show the human two different ways to decelerate at stoplights, and ask the human which of these motions they prefer. This current approach to personalization is indirect: Based on the behaviors the human selects (e.g., decelerating slowly), the robot tries to infer their underlying preference (e.g., defensive driving). By contrast, our article develops a learning and interface-based approach that enables humans to directly indicate their desired style. We do this by learning an abstract, low-dimensional, and continuous canonical space from human demonstration data. Each point in the canonical space corresponds to a different style (e.g., defensive or aggressive driving), and users can directly personalize the robot’s behavior by simply clicking on a point. Given the human’s selection, the robot then decodes this canonical style across each task in the dataset: e.g., if the human selects a defensive style, the autonomous car personalizes its behavior to drive defensively when decelerating, passing other cars, or merging onto highways. We refer to our resulting approach as PECAN: Personalizing Robot Behaviors through a Learned Canonical Space. Our simulations and user studies suggest that humans prefer using PECAN to directly personalize robot behavior (particularly when those users become familiar with PECAN), and that users find the learned canonical space to be intuitive and consistent. See videos here: https://youtu.be/wRJpyr23PKI.
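To sketch what such a canonical space could look like in code (an assumed architecture, not the PECAN implementation; all dimensions are placeholders), a conditional autoencoder suffices: the 2-D latent point is the style the user clicks on, and the decoder applies that style to any task context.

import torch
import torch.nn as nn

class CanonicalSpace(nn.Module):
    """Encoder compresses a demonstration into a 2-D style point; the decoder
    reproduces that style for an arbitrary task context."""
    def __init__(self, traj_dim=20, ctx_dim=4, z_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(traj_dim + ctx_dim, 64),
                                     nn.ReLU(), nn.Linear(64, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim + ctx_dim, 64),
                                     nn.ReLU(), nn.Linear(64, traj_dim))

    def forward(self, traj, ctx):
        z = self.encoder(torch.cat([traj, ctx], dim=-1))   # style point
        recon = self.decoder(torch.cat([z, ctx], dim=-1))  # style applied to task
        return recon, z

model = CanonicalSpace()
# After training on demonstrations, a user's click becomes a latent point and
# the robot decodes it for the task at hand:
z_click = torch.tensor([[0.2, -0.7]])   # e.g., the "defensive" corner of the space
ctx = torch.zeros(1, 4)                 # placeholder task context
behavior = model.decoder(torch.cat([z_click, ctx], dim=-1))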
  2. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning is transparent to the human teacher. Visual displays that show the robot’s uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw 
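For a sense of scale, a Weber fraction of 11.4% means the just-noticeable difference grows with the reference stimulus, so reliably distinguishable inflation levels form a geometric series; the baseline value below is an arbitrary assumption, and only the ratio comes from the study.

# Spacing inflation levels by the reported Weber fraction of 11.4%.
weber = 0.114
level, levels = 10.0, [10.0]   # assumed baseline inflation (arbitrary units)
for _ in range(5):
    level *= 1 + weber         # next level a user can reliably tell apart
    levels.append(round(level, 2))
print(levels)                  # [10.0, 11.14, 12.41, 13.82, 15.4, 17.16]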
  3. Assistive robot arms can help humans by partially automating their desired tasks. Consider an adult with motor impairments controlling an assistive robot arm to eat dinner. The robot can reduce the number of human inputs — and how precise those inputs need to be — by recognizing what the human wants (e.g., a fork) and assisting for that task (e.g., moving towards the fork). Prior research has largely focused on learning the human’s task and providing meaningful assistance. But as the robot learns and assists, we also need to ensure that the human understands the robot’s intent (e.g., does the human know the robot is reaching for a fork?). In this paper, we study the effects of communicating learned assistance from the robot back to the human operator. We do not focus on the specific interfaces used for communication. Instead, we develop experimental and theoretical models of a) how communication changes the way humans interact with assistive robot arms, and b) how robots can harness these changes to better align with the human’s intent. We first conduct online and in-person user studies where participants operate robots that provide partial assistance, and we measure how the human’s inputs change with and without communication. With communication, we find that humans are more likely to intervene when the robot incorrectly predicts their intent, and more likely to release control when the robot correctly understands their task. We then use these findings to modify an established robot learning algorithm so that the robot can correctly interpret the human’s inputs when communication is present. Our results from a second in-person user study suggest that this combination of communication and learning outperforms assistive systems that isolate either learning or communication. See videos here: https://youtu.be/BET9yuVTVU4 
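A minimal sketch of how such findings could be folded into goal inference (the probabilities and update rule here are illustrative assumptions, not the modified learning algorithm from the paper): when the robot communicates its prediction, an intervention is strong evidence the predicted goal is wrong, while silence mildly supports it.

import numpy as np

def update_belief(belief, goals, predicted, human_input, comm_on,
                  p_intervene_wrong=0.8, p_intervene_right=0.1):
    """Bayes update over candidate goals from one timestep of interaction."""
    if not comm_on:
        return belief  # without communication, silence is treated as uninformative
    likelihood = np.empty(len(goals))
    for i, g in enumerate(goals):
        p_act = p_intervene_right if g == predicted else p_intervene_wrong
        likelihood[i] = p_act if human_input else (1 - p_act)
    posterior = belief * likelihood
    return posterior / posterior.sum()

goals = ["fork", "cup", "plate"]
b = np.ones(3) / 3
b = update_belief(b, goals, predicted="fork", human_input=False, comm_on=True)
print(dict(zip(goals, b.round(3))))  # silence raises P(fork)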
  4. Humans can leverage physical interaction to teach robot arms. This physical interaction takes multiple forms depending on the task, the user, and what the robot has learned so far. State-of-the-art approaches focus on learning from a single modality, or combine some interaction types. Some methods do so by assuming that the robot has prior information about the features of the task and the reward structure. By contrast, in this article, we introduce an algorithmic formalism that unites learning from demonstrations, corrections, and preferences. Our approach makes no assumptions about the tasks the human wants to teach the robot; instead, we learn a reward model from scratch by comparing the human’s input to nearby alternatives, i.e., trajectories close to the human’s feedback. We first derive a loss function that trains an ensemble of reward models to match the human’s demonstrations, corrections, and preferences. The type and order of feedback is up to the human teacher: We enable the robot to collect this feedback passively or actively. We then apply constrained optimization to convert our learned reward into a desired robot trajectory. Through simulations and a user study, we demonstrate that our proposed approach more accurately learns manipulation tasks from physical human interaction than existing baselines, particularly when the robot is faced with new or unexpected objectives. Videos of our user study are available at https://youtu.be/FSUJsTYvEKU 
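A minimal sketch of that unifying loss, under several assumptions (a single reward network where the article trains an ensemble, random vectors in place of trajectory features, and a simple pairwise logistic likelihood): the human's demonstration or correction is treated as implicitly preferred over nearby perturbed alternatives.

import torch
import torch.nn as nn

reward = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward.parameters(), lr=1e-3)

def loss_from_feedback(human_traj, n_alternatives=16, noise=0.1):
    """-log P(human's input beats each nearby perturbed alternative)."""
    alts = human_traj + noise * torch.randn(n_alternatives, human_traj.numel())
    r_h = reward(human_traj)       # scalar reward of the human's input
    r_a = reward(alts).squeeze(-1) # rewards of the nearby alternatives
    return -torch.log(torch.sigmoid(r_h - r_a)).mean()

traj = torch.randn(10)  # stand-in for trajectory features
for _ in range(100):
    opt.zero_grad()
    loss_from_feedback(traj).backward()
    opt.step()

The same loss applies whether the input arrived as a demonstration, a correction, or one side of a preference query, which is what lets the three feedback types share one reward model.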
  5. This paper introduces a streamlined, bicycle-like robot designed to inspect a wide range of ferromagnetic structures, including those with intricate shapes. The key strengths of the robot are its mechanical simplicity and agility. Its locomotion strategy hinges on two magnetic wheels arranged in a bicycle-like configuration, augmented by two independent steering actuators, which grants the robot the ability to move in multiple directions. Moreover, the robot employs a reciprocating mechanism that allows it to alter its shape to surmount obstacles, and a dynamic joint gives it an innate adaptability to uneven and intricate surfaces on steel structures. To underscore its practicality, the robot's application is demonstrated with an ultrasonic sensor for gauging steel thickness, coupled with a pragmatic deployment mechanism. By integrating a deep-learning defect detection model, the robot can automatically identify and localize areas of rust on steel surfaces. The paper undertakes a thorough analysis encompassing robot kinematics, adhesive force, potential sliding and turn-over scenarios, and motor power requirements; these analyses collectively validate the stability and robustness of the proposed design. Notably, the theoretical calculations established in this study serve as a blueprint for developing future robots for climbing steel structures. To enhance its inspection capabilities, the robot is equipped with a camera that employs deep learning algorithms to detect rust visually. The paper substantiates its claims with results from extensive experiments and real-world deployments on diverse steel bridges in Nevada and Georgia; these tests affirm the robot's proficiency in adhering to surfaces, navigating challenging terrains, and executing thorough inspections. Videos of the robot's trials and field deployments are available at https://youtu.be/Qdh1oz_oxiQ and https://youtu.be/vFFq79O49dM.
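The flavor of those stability analyses can be shown with a back-of-the-envelope static check; every number below is a made-up assumption rather than a specification of this robot.

G = 9.81            # m/s^2
mass = 5.0          # kg, assumed robot mass
mu = 0.5            # assumed wheel-steel friction coefficient
f_magnet = 120.0    # N, assumed magnetic adhesion per wheel
n_wheels = 2
h = 0.08            # m, assumed center-of-mass standoff from the wall
wheelbase = 0.30    # m, assumed distance between the two wheels

weight = mass * G
# Sliding: friction generated by the magnetic normal force must carry the weight.
sliding_safety = mu * n_wheels * f_magnet / weight
# Turn-over about the lower wheel: gravity's moment (weight * standoff) versus
# the upper magnet's restoring moment (adhesion * wheelbase).
turnover_safety = f_magnet * wheelbase / (weight * h)
print(f"sliding: {sliding_safety:.2f}, turn-over: {turnover_safety:.2f}")
# sliding: 2.45, turn-over: 9.17

A safety factor above 1 in both checks is the minimal condition for the robot to stay attached; the paper's full analysis additionally covers kinematics and motor power.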