Robots can use auditory, visual, or haptic interfaces to convey information to human users. The way these interfaces select signals is typically pre-defined by the designer: for instance, a haptic wristband might vibrate when the robot is moving and squeeze when the robot stops. But different people interpret the same signals in different ways, so what makes sense to one person might be confusing or unintuitive to another. In this paper we introduce a unified algorithmic formalism for learning co-adaptive interfaces from scratch. Our method does not need to know the human’s task (i.e., what the human is using these signals for). Instead, our insight is that interpretable interfaces should select signals that maximize correlation between the human’s actions and the information the interface is trying to convey. Applying this insight, we develop LIMIT: Learning Interfaces to Maximize Information Transfer. LIMIT optimizes a tractable, real-time proxy of information gain in continuous spaces. The first time a person works with our system the signals may appear random, but over repeated interactions the interface learns a one-to-one mapping between displayed signals and human responses. Our resulting approach is personalized to the current user and not tied to any specific interface modality. We compare LIMIT to state-of-the-art baselines across controlled simulations, an online survey, and an in-person user study with auditory, visual, and haptic interfaces. Overall, our results suggest that LIMIT learns interfaces that enable users to complete the task more quickly and efficiently, and users subjectively prefer LIMIT to the alternatives. See videos here: https://youtu.be/IvQ3TM1_2fA.
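The correlation insight above can be made concrete with a toy discrete example. The sketch below is hypothetical and not the authors' implementation (LIMIT itself optimizes a real-time proxy in continuous spaces); it only illustrates why a consistent one-to-one signal-to-response mapping maximizes information transfer, measured here as empirical mutual information.

```python
import math
from collections import Counter

def empirical_mutual_information(pairs):
    """Estimate I(signal; response) in bits from observed (signal, response)
    pairs. Higher values mean the interface's signals and the human's
    responses are more strongly correlated."""
    n = len(pairs)
    joint = Counter(pairs)
    signals = Counter(s for s, _ in pairs)
    responses = Counter(r for _, r in pairs)
    mi = 0.0
    for (s, r), count in joint.items():
        # p(s, r) * log2( p(s, r) / (p(s) * p(r)) )
        mi += (count / n) * math.log2(count * n / (signals[s] * responses[r]))
    return mi

# A consistent one-to-one interface (vibrate -> stop, squeeze -> go)
# transfers one full bit per interaction ...
consistent = [("vibrate", "stop"), ("squeeze", "go")] * 5
# ... while signals the user responds to inconsistently transfer none.
inconsistent = [("vibrate", "stop"), ("vibrate", "go"),
                ("squeeze", "stop"), ("squeeze", "go")]
```

In this toy setting, `empirical_mutual_information(consistent)` is 1.0 bit while the inconsistent history yields 0.0, mirroring the paper's claim that interpretable interfaces emerge from maximizing correlation between signals and responses.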
This content will become publicly available on December 31, 2026.

PECAN: Personalizing Robot Behaviors through a Learned Canonical Space
Robots should personalize how they perform tasks to match the needs of individual human users. Today’s robots achieve this personalization by asking for the human’s feedback in the task space. For example, an autonomous car might show the human two different ways to decelerate at stoplights, and ask the human which of these motions they prefer. This current approach to personalization is indirect: based on the behaviors the human selects (e.g., decelerating slowly), the robot tries to infer their underlying preference (e.g., defensive driving). By contrast, our article develops a learning and interface-based approach that enables humans to directly indicate their desired style. We do this by learning an abstract, low-dimensional, and continuous canonical space from human demonstration data. Each point in the canonical space corresponds to a different style (e.g., defensive or aggressive driving), and users can directly personalize the robot’s behavior by simply clicking on a point. Given the human’s selection, the robot then decodes this canonical style across each task in the dataset: e.g., if the human selects a defensive style, the autonomous car personalizes its behavior to drive defensively when decelerating, passing other cars, or merging onto highways. We refer to our resulting approach as PECAN: Personalizing Robot Behaviors through a Learned Canonical Space. Our simulations and user studies suggest that humans prefer using PECAN to directly personalize robot behavior (particularly when those users become familiar with PECAN), and that users find the learned canonical space to be intuitive and consistent. See videos here: https://youtu.be/wRJpyr23PKI.
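The decoding step described above can be illustrated with a minimal sketch. Everything here is hypothetical (the task names, parameter values, and a 1-D style coordinate are invented for illustration); PECAN learns its canonical space from demonstration data, whereas this toy simply interpolates between two demonstrated styles per task.

```python
# Hypothetical demonstrations: each task stores the behavior parameter
# demonstrated at style z = 0 (defensive) and z = 1 (aggressive).
DEMOS = {
    "decelerate": (2.0, 6.0),   # braking rate, m/s^2
    "merge": (30.0, 10.0),      # gap accepted before merging, m
}

def decode(z, task):
    """Map a canonical style point z in [0, 1] to a behavior parameter
    for the given task by linear interpolation between the two styles."""
    defensive, aggressive = DEMOS[task]
    return defensive + z * (aggressive - defensive)
```

The key property this sketch shares with the paper's approach is that a single click (one value of `z`) personalizes every task at once: `decode(0.0, "decelerate")` brakes gently while `decode(0.0, "merge")` waits for a large gap, both corresponding to the same defensive style.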
- Award ID(s): 2246446
- PAR ID: 10631122
- Publisher / Repository: ACM Transactions on Human-Robot Interaction
- Date Published:
- Journal Name: ACM Transactions on Human-Robot Interaction
- Volume: 14
- Issue: 4
- ISSN: 2573-9522
- Page Range / eLocation ID: 1 to 33
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Reguera, Gemma (Ed.) Rhodopseudomonas palustris is renowned for its metabolic versatility, supported by a genome that contains pathways for numerous processes. In a recent study, Oda et al. (Appl Environ Microbiol 91:e02056-24, 2025, https://doi.org/10.1128/aem.02056-24) examine R. palustris DSM127, a strain with a dramatically reduced gene inventory, to study how environmental pressures have influenced gene loss. The loss of more than a quarter of its genome reduces the strain's versatility, but may improve its efficiency under the conditions in which it still thrives, enhancing its aptitude as a chassis for biotechnology.
- 
Assistive robot arms can help humans by partially automating their desired tasks. Consider an adult with motor impairments controlling an assistive robot arm to eat dinner. The robot can reduce the number of human inputs (and how precise those inputs need to be) by recognizing what the human wants (e.g., a fork) and assisting for that task (e.g., moving towards the fork). Prior research has largely focused on learning the human’s task and providing meaningful assistance. But as the robot learns and assists, we also need to ensure that the human understands the robot’s intent (e.g., does the human know the robot is reaching for a fork?). In this paper, we study the effects of communicating learned assistance from the robot back to the human operator. We do not focus on the specific interfaces used for communication. Instead, we develop experimental and theoretical models of a) how communication changes the way humans interact with assistive robot arms, and b) how robots can harness these changes to better align with the human’s intent. We first conduct online and in-person user studies where participants operate robots that provide partial assistance, and we measure how the human’s inputs change with and without communication. With communication, we find that humans are more likely to intervene when the robot incorrectly predicts their intent, and more likely to release control when the robot correctly understands their task. We then use these findings to modify an established robot learning algorithm so that the robot can correctly interpret the human’s inputs when communication is present. Our results from a second in-person user study suggest that this combination of communication and learning outperforms assistive systems that isolate either learning or communication. See videos here: https://youtu.be/BET9yuVTVU4.
- 
We propose a new synthesis algorithm that can efficiently search programs with local variables (e.g., those introduced by lambdas). Prior bottom-up synthesis algorithms are not able to evaluate programs with free local variables, and therefore cannot effectively reduce the search space of such programs (e.g., using standard observational equivalence reduction techniques), making synthesis slow. Our algorithm can reduce the space of programs with local variables. The key idea, dubbed lifted interpretation, is to lift up the program interpretation process from evaluating one program at a time to simultaneously evaluating all programs from a grammar. Lifted interpretation provides a mechanism to systematically enumerate all binding contexts for local variables, thereby enabling us to evaluate and reduce the space of programs with local variables. Our ideas are instantiated in the domain of web automation. The resulting tool, Arborist, can automate a significantly broader range of challenging tasks more efficiently than state-of-the-art techniques including WebRobot and Helena.
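The observational-equivalence idea referenced above can be sketched in miniature. This toy is an assumption-laden illustration, not the Arborist implementation: it handles program bodies with a single free local variable `x`, and merges two bodies only when they agree on every candidate binding of `x` (the essence of enumerating binding contexts).

```python
def reduce_by_observation(bodies, bindings):
    """Observational-equivalence reduction for bodies with one free local
    variable: two bodies are merged when they produce identical outputs
    across every candidate binding in `bindings`."""
    seen = {}
    for name, fn in bodies:
        # The body's "signature" is its output vector over all bindings.
        key = tuple(fn(x) for x in bindings)
        seen.setdefault(key, (name, fn))  # keep the first representative
    return list(seen.values())

# "x+x" and "2*x" agree on every binding, so only one survives;
# "x*x" behaves differently and is kept as a distinct class.
bodies = [("x+x", lambda x: x + x),
          ("2*x", lambda x: 2 * x),
          ("x*x", lambda x: x * x)]
distinct = reduce_by_observation(bodies, bindings=range(4))
```

Without a way to evaluate under all bindings, a bottom-up enumerator must treat `x+x` and `2*x` as distinct candidates; pruning them to one representative is what shrinks the search space.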
- 
This paper introduces an innovative and streamlined design of a robot, resembling a bicycle, created to effectively inspect a wide range of ferromagnetic structures, even those with intricate shapes. The key highlight of this robot lies in its mechanical simplicity coupled with remarkable agility. The locomotion strategy hinges on the arrangement of two magnetic wheels in a configuration akin to a bicycle, augmented by two independent steering actuators. This configuration grants the robot the exceptional ability to move in multiple directions. Moreover, the robot employs a reciprocating mechanism that allows it to alter its shape, thereby surmounting obstacles effortlessly. An inherent trait of the robot is its innate adaptability to uneven and intricate surfaces on steel structures, facilitated by a dynamic joint. To underscore its practicality, the robot's application is demonstrated through the utilization of an ultrasonic sensor for gauging steel thickness, coupled with a pragmatic deployment mechanism. By integrating a camera and a defect detection model based on deep learning, the robot showcases its proficiency in automatically identifying and pinpointing areas of rust on steel surfaces. The paper undertakes a thorough analysis, encompassing robot kinematics, adhesive force, potential sliding and turn‐over scenarios, and motor power requirements. These analyses collectively validate the stability and robustness of the proposed design. Notably, the theoretical calculations established in this study serve as a valuable blueprint for developing future robots tailored for climbing steel structures. The paper substantiates its claims with empirical evidence, sharing results from extensive experiments and real‐world deployments on diverse steel bridges, situated in both Nevada and Georgia.
These tests comprehensively affirm the robot's proficiency in adhering to surfaces, navigating challenging terrains, and executing thorough inspections. A comprehensive visual representation of the robot's trials and field deployments is presented in videos accessible at the following links: https://youtu.be/Qdh1oz_oxiQ and https://youtu.be/vFFq79O49dM.
 An official website of the United States government