Teleoperation enables controlling complex robot systems remotely, providing the ability to impart human expertise from a distance. However, these interfaces can be complicated to use, as it is difficult to contextualize information about robot motion in the workspace from limited camera feedback. It is therefore necessary to study how assistance can best be provided to the operator so as to reduce interface complexity and the effort required for teleoperation. Techniques that assist the operator during freeform teleoperation include: (1) perception augmentation, such as augmented reality visual cues and additional camera angles, which increases the information available to the operator; and (2) action augmentation, such as assistive autonomy and control augmentation, which is optimized to reduce the effort the operator must exert while teleoperating. In this article, we investigate: (1) which aspects of dexterous telemanipulation require assistance; (2) the impact of perception and action augmentation on teleoperation performance; and (3) which factors affect the use of assistance and how these interfaces can be tailored to operators' needs and characteristics. The findings from this user study and the resulting post-study surveys will help identify task-based and user-preferred perception and action augmentation features for teleoperation assistance.
                            Ergodicity reveals assistance and learning from physical human-robot interaction
            This paper applies information theoretic principles to the investigation of physical human-robot interaction. Drawing from the study of human perception and neural encoding, information theoretic approaches offer a perspective that enables quantitatively interpreting the body as an information channel and bodily motion as an information-carrying signal. We show that ergodicity, which can be interpreted as the degree to which a trajectory encodes information about a task, correctly predicts changes due to reduction of a person’s existing deficit or the addition of algorithmic assistance. The measure also captures changes from training with robotic assistance. Other common measures for assessment failed to capture at least one of these effects. This information-based interpretation of motion can be applied broadly, in the evaluation and design of human-machine interactions, in learning by demonstration paradigms, or in human motion analysis. 
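The ergodic measure described above is commonly computed, following Mathew and Mezić, by comparing Fourier-basis coefficients of a trajectory's time-averaged statistics against those of a task distribution. The sketch below is a minimal 1-D illustration of that idea, not the paper's implementation; the cosine basis, weights, domain, and all function and variable names are assumptions chosen for clarity.

```python
import numpy as np

def ergodic_metric(traj, task_pdf, domain=(0.0, 1.0), n_coeffs=15):
    """Toy 1-D ergodic metric: distance between the time-averaged
    statistics of a trajectory and a task distribution, computed in a
    normalized cosine basis. Smaller values mean the trajectory spends
    time in proportion to the task distribution (is 'more ergodic')."""
    lo, hi = domain
    L = hi - lo
    grid = np.linspace(lo, hi, task_pdf.size)
    phi = task_pdf / np.trapz(task_pdf, grid)        # normalize the task pdf

    metric = 0.0
    for k in range(n_coeffs):
        # Cosine basis function on the domain, normalized to unit L2 norm
        hk = np.sqrt(L) if k == 0 else np.sqrt(L / 2.0)
        basis = lambda x: np.cos(k * np.pi * (x - lo) / L) / hk

        c_k = basis(traj).mean()                     # time-average along the trajectory
        phi_k = np.trapz(phi * basis(grid), grid)    # spatial average over the task pdf
        lam_k = 1.0 / (1.0 + k ** 2)                 # weight emphasizing low spatial frequencies
        metric += lam_k * (c_k - phi_k) ** 2
    return metric

# Toy check: a trajectory that dwells where the task pdf has mass scores
# lower (more ergodic) than one that lingers far from it.
x = np.linspace(0.0, 1.0, 200)
phi = np.exp(-((x - 0.5) ** 2) / (2 * 0.05 ** 2))
matched = np.clip(0.5 + 0.05 * np.random.randn(1000), 0.0, 1.0)
mismatched = np.clip(0.1 + 0.05 * np.random.randn(1000), 0.0, 1.0)
print(ergodic_metric(matched, phi), ergodic_metric(mismatched, phi))
```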
- Award ID(s): 1637764
- PAR ID: 10176439
- Date Published:
- Journal Name: Science Robotics
- Volume: 4
- Issue: 29
- ISSN: 2470-9476
- Page Range / eLocation ID: eaav6079
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Humans have an astonishing ability to extract hidden information from the movements of others. For example, even with limited kinematic information, humans can distinguish between biological and nonbiological motion, identify the age and gender of a human demonstrator, and recognize what action a human demonstrator is performing. It is unknown, however, whether they can also estimate hidden mechanical properties of another’s limbs simply by observing their motions. Strictly speaking, identifying an object’s mechanical properties, such as stiffness, requires contact. With only motion information, unambiguous measurements of stiffness are fundamentally impossible, since the same limb motion can be generated with an infinite number of stiffness values. However, we show that humans can readily estimate the stiffness of a simulated limb from its motion. In three experiments, we found that participants linearly increased their rating of arm stiffness as joint stiffness parameters in the arm controller increased. This was remarkable since there was no physical contact with the simulated limb. Moreover, participants had no explicit knowledge of how the simulated arm was controlled. To successfully map nontrivial changes in multijoint motion to changes in arm stiffness, participants likely drew on prior knowledge of human neuromotor control. Having an internal representation consistent with the behavior of the controller used to drive the simulated arm implies that this control policy competently captures key features of veridical biological control. Finding that humans can extract latent features of neuromotor control from kinematics also provides new insight into how humans interpret the motor actions of others. NEW & NOTEWORTHY: Humans can visually perceive another’s overt motion, but it is unknown whether they can also perceive the hidden dynamic properties of another’s limbs from their motions. Here, we show that humans can correctly infer changes in limb stiffness from nontrivial changes in multijoint limb motion without force information or explicit knowledge of the underlying limb controller. Our findings suggest that humans presume others control motor behavior in such a way that limb stiffness influences motion.
- Collaborative robots that provide anticipatory assistance are able to help people complete tasks more quickly. Because anticipatory assistance is provided before help is explicitly requested, there is a chance that this action itself will influence the person’s future decisions in the task. In this work, we investigate whether a robot’s anticipatory assistance can drive people to make choices different from those they would otherwise make. Such a study requires measuring intent, but the measurement itself could modify intent, resulting in an observer paradox. To combat this, we carefully designed an experiment to avoid this effect, considering several mitigations such as the choice of which human behavioral signals to use to measure intent and unobtrusive ways to obtain those signals. We conducted a user study (N = 99) in which participants completed a collaborative object retrieval task: users selected an object and a robot arm retrieved it for them. The robot predicted the user’s object selection from eye gaze in advance of their explicit selection, and then provided either collaborative anticipation (moving toward the predicted object), adversarial anticipation (moving away from the predicted object), or no anticipation (no movement, control condition). We found trends and participant comments suggesting that people’s decision-making changes in the presence of robot anticipatory motion, and that this change differs depending on the robot’s anticipation strategy.
- Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of either the unassisted human or the AI in isolation) must be understood. This requires that humans be able to recognize situations in which the AI should be leveraged, and that new AI systems be developed that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
- This paper proposes a novel stochastic-skill-level-based shared control framework to assist human novices in emulating human experts in complex dynamic control tasks. The proposed framework aims to infer the stochastic skill levels (SSLs) of human novices and provide personalized assistance based on the inferred SSLs. An SSL can be assessed as a stochastic variable that denotes the probability that the novice will behave similarly to experts. We propose a data-driven method that characterizes novice demonstrations as a novice model and expert demonstrations as an expert model. Our SSL inference approach then uses the novice and expert models to assess the SSL of a novice in complex dynamic control tasks. The shared control scheme is designed to dynamically adjust the level of assistance based on the inferred SSL, to prevent the frustration or tedium during human training that poorly imposed assistance can cause. The proposed framework is demonstrated in a human subject experiment in a training scenario for a remotely piloted urban air mobility (UAM) vehicle. The results show that the framework can assess the SSL and tailor the assistance for an individual in real time. The proposed framework is compared to practice-only training (no assistance) and a baseline shared control approach to test human learning rates in the designed training scenario. A subjective survey also examines the user experience of the proposed framework.
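As a rough illustration of the stochastic-skill-level idea in the last abstract above, and not the paper's actual models or control law, the sketch below fits one Gaussian to expert demonstration features and one to novice features, treats the probability that observed behavior is expert-like as the SSL, and scales the assistance inversely with it. The class, feature representation, and blending rule are all hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

class SSLSharedControl:
    """Toy stochastic-skill-level (SSL) shared control.

    Hypothetical Gaussian models stand in for the paper's data-driven
    novice and expert demonstration models.
    """

    def __init__(self, expert_demos, novice_demos):
        # demos: (num_samples, num_features) arrays of behavior features
        self.expert = multivariate_normal(expert_demos.mean(axis=0),
                                          np.cov(expert_demos, rowvar=False))
        self.novice = multivariate_normal(novice_demos.mean(axis=0),
                                          np.cov(novice_demos, rowvar=False))

    def skill_level(self, features):
        """Probability that the observed behavior looks expert-like."""
        p_expert = self.expert.pdf(features)
        p_novice = self.novice.pdf(features)
        return p_expert / (p_expert + p_novice + 1e-12)

    def blend(self, u_human, u_assist, features):
        """Scale the assistance inversely with the inferred skill level."""
        ssl = self.skill_level(features)
        alpha = 1.0 - ssl               # low skill -> more assistance
        return alpha * u_assist + (1.0 - alpha) * u_human
```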