In this paper, a hybrid shared controller is proposed for assisting novice human users in emulating expert human users within a human-automation interaction framework. This work is motivated by the goal of letting novices learn the skills of experts using automation as a medium. The automation interacts with human users in two ways: it learns how to optimally control the system from expert demonstrations through offline computation, and it assists the novice in real time, without excessive intervention, based on an inference of the novice's skill level within the proposed shared controller. The automation takes more control authority when the novice's skill level is poor, and it cedes control authority when the novice's skill level is close to the expert's so that the novice can learn from his/her own control experience. The proposed scheme is shown to improve system performance while minimizing intervention from the automation, as demonstrated via an illustrative human-in-the-loop application example.
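The abstract does not give the arbitration law itself; the following minimal Python sketch shows one common convex-blending form of skill-based authority allocation. The function and parameter names (blended_command, skill_level) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def blended_command(u_novice, u_auto, skill_level):
    """Blend the novice's and the automation's commands.

    Assumption: skill_level lies in [0, 1], where 1 means the novice already
    behaves like the expert. The automation's share of authority shrinks as
    skill rises, so a proficient novice keeps control and can keep learning
    from his/her own actions.
    """
    alpha = float(np.clip(skill_level, 0.0, 1.0))   # novice's control authority
    return alpha * np.asarray(u_novice) + (1.0 - alpha) * np.asarray(u_auto)

# Example: a low-skill novice receives mostly automated input.
u = blended_command(u_novice=[0.2, -0.1], u_auto=[0.8, 0.0], skill_level=0.3)
print(u)  # ≈ [0.62, -0.03]
```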
Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario
This paper proposes a novel stochastic-skill-level-based shared control framework to assist human novices in emulating human experts in complex dynamic control tasks. The framework infers the stochastic skill levels (SSLs) of human novices and provides personalized assistance based on the inferred SSLs. The SSL is a stochastic variable denoting the probability that the novice will behave similarly to the experts. We propose a data-driven method that characterizes novice demonstrations as a novice model and expert demonstrations as an expert model; our SSL inference approach then uses the two models to assess a novice's SSL in complex dynamic control tasks. The shared control scheme dynamically adjusts the level of assistance based on the inferred SSL to prevent the frustration or tedium that poorly imposed assistance can cause during human training. The framework is demonstrated through a human subject experiment in a training scenario for a remotely piloted urban air mobility (UAM) vehicle. The results show that the framework can assess the SSL and tailor the assistance for an individual in real time. The framework is compared against practice-only training (no assistance) and a baseline shared control approach to evaluate human learning rates in the designed training scenario with human subjects. A subjective survey is also analyzed to monitor the user experience of the proposed framework.
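The abstract describes the SSL inference only at a high level; the sketch below is a rough illustration in which expert and novice demonstrations are each modeled as a multivariate Gaussian over state-action samples, and the SSL of a new sample is the posterior probability of the expert model under equal priors. The class name and the Gaussian modeling choice are assumptions, not the paper's actual method.

```python
import numpy as np
from scipy.stats import multivariate_normal

class SSLEstimator:
    """Toy stochastic-skill-level estimator: fit expert and novice models to
    demonstration data, then score new behavior by the probability that it
    came from the expert model rather than the novice model."""

    def fit(self, expert_data, novice_data):
        # expert_data, novice_data: (N, d) arrays of state-action samples,
        # with N large enough for well-conditioned covariance estimates.
        self.expert = multivariate_normal(expert_data.mean(axis=0),
                                          np.cov(expert_data, rowvar=False))
        self.novice = multivariate_normal(novice_data.mean(axis=0),
                                          np.cov(novice_data, rowvar=False))
        return self

    def ssl(self, sample):
        # Posterior probability (equal priors) that the sample is expert-like.
        pe, pn = self.expert.pdf(sample), self.novice.pdf(sample)
        return pe / (pe + pn + 1e-12)
```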
- Award ID(s): 1836952
- PAR ID: 10519273
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Human-Robot Interaction
- ISSN: 2573-9522
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
In this paper, we propose a human-automation interaction scheme to improve the task performance of novice human users with different skill levels. The proposed scheme includes two interaction modes: a "learn from experts" mode and an "assist novices" mode. In the "learn from experts" mode, the automation learns from an expert human user so that an awareness of the task objective is obtained. Based on the learned task objective, in the "assist novices" mode, the automation customizes its control parameters to assist a novice human user towards emulating the performance of the expert human user. We experimentally test the proposed scheme in a quadrotor simulation environment, and the results show that the approach is capable of adapting to and assisting the novice human user to achieve performance that emulates the expert human user.
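Purely as an illustration of the two-mode idea, the sketch below uses a least-squares fit of a linear expert policy as a stand-in for the "learn from experts" mode and a convex blend of novice and automation commands as a stand-in for the "assist novices" mode. Both choices and all names are assumptions; the paper's actual learning and assistance laws are not given in this abstract.

```python
import numpy as np

def learn_expert_policy(states, actions):
    """Hypothetical "learn from experts" step: fit a linear feedback policy
    u = -K x from expert state-action logs by least squares (a simple
    behavioral-cloning stand-in for learning the expert's task objective)."""
    # states: (N, n), actions: (N, m); solve actions = states @ W, then K = -W.T
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return -W.T

def assist_novice(x, u_novice, K, skill_level):
    """Hypothetical "assist novices" step: blend the novice's command with the
    learned policy's command, giving the novice more authority as his/her
    skill level (assumed in [0, 1]) approaches the expert's."""
    u_auto = -K @ x
    return skill_level * np.asarray(u_novice) + (1.0 - skill_level) * u_auto
```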
- 
Human-centered environments provide affordances for and require the use of two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches have not accommodated the diverse, dynamic, and intricate coordinations between two arms needed to complete bimanual tasks. In this work, we enabled robots to perform bimanual tasks more effectively by introducing a bimanual shared-control method. The control method moves the robot's arms to mimic the operator's arm movements but provides on-the-fly assistance to help the user complete tasks more easily. Our method used a bimanual action vocabulary, constructed by analyzing how people perform two-hand manipulations, as the core abstraction level for reasoning about how to assist in bimanual shared autonomy. The method inferred which individual action from the bimanual action vocabulary was occurring using a sequence-to-sequence recurrent neural network architecture and turned on a corresponding assistance mode, i.e., signals introduced into the shared-control loop designed to make the performance of a particular bimanual action easier or more efficient. We demonstrate the effectiveness of our method through two user studies that show that novice users could control a robot to complete a range of complex manipulation tasks more successfully using our method compared with alternative approaches. We discuss the implications of our findings for real-world robot control scenarios.
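A minimal PyTorch sketch of the kind of per-timestep recurrent action classifier the abstract describes; the input dimension, action-vocabulary size, and class name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BimanualActionClassifier(nn.Module):
    """Recurrent per-timestep classifier over a hypothetical bimanual action
    vocabulary, driven by the operator's two-hand motion stream."""

    def __init__(self, input_dim=14, hidden_dim=64, num_actions=7):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, motion_seq):            # (batch, time, input_dim)
        features, _ = self.rnn(motion_seq)    # (batch, time, hidden_dim)
        return self.head(features)            # per-timestep action logits

# At run time, the most likely action at the latest timestep would select a
# corresponding assistance mode (names and sizes here are illustrative only).
model = BimanualActionClassifier()
logits = model(torch.randn(1, 50, 14))
current_action = logits[0, -1].argmax().item()
```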
- 
The mental demands associated with operating complex whole-body powered exoskeletons are poorly understood. This study aimed to explore the overall workload associated with using a powered whole-body exoskeleton among expert and novice users, as well as the changes in workload resulting from novices adapting to exoskeleton use over time. We used eye-tracking measures to quantify the differences in workload of six novices and five experts while they performed a level-walking task, with and without wearing a whole-body powered exoskeleton. We found that only novices' pupil dilation (PD) increased, while experts showed a greater proportion of downward-directed path fixations (PF) compared to novices while wearing the exoskeleton. These results indicate that novices' mental demands were higher and that experts and novices exhibited distinct visuomotor strategies. Eye-tracking measures may potentially be used to detect differences in workload and skill level associated with using exoskeletons, and may also be considered as inputs for future adaptive exoskeleton control algorithms.
- 
Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors in two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts, and the similarity to these coordinations can act as a measure of learning over the course of training for novices. The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
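As a hedged illustration of using similarity to expert coordinations as a learning measure, the sketch below summarizes a trial's joint coordination as the correlation matrix of joint velocities and compares a novice trial to an expert trial by cosine similarity. Both the signature and the metric are assumptions, not the study's actual analysis.

```python
import numpy as np

def coordination_signature(joint_angles):
    """Summarize a trial's joint coordination as the correlation matrix of
    joint velocities (joint_angles: (T, n_joints) array, T > 2)."""
    velocities = np.diff(joint_angles, axis=0)     # (T-1, n_joints)
    return np.corrcoef(velocities, rowvar=False)   # (n_joints, n_joints)

def coordination_similarity(novice_trial, expert_trial):
    """Cosine similarity between vectorized coordination signatures; values
    near 1 suggest the novice is converging to the expert's coordination."""
    a = coordination_signature(novice_trial).ravel()
    b = coordination_signature(expert_trial).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```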