Title: Exploring Mixed Reality Robot Communication Under Different Types of Mental Workload
This paper explores the tradeoffs between different types of mixed reality robot communication under different levels of user workload. We present the results of a within-subjects experiment in which we systematically and jointly vary robot communication style alongside the level and type of cognitive load, and measure the subsequent impacts on accuracy, reaction time, and perceived workload and effectiveness. Our preliminary results suggest that although users may not notice differences, the type of load a user is under and the communication style used by the robot they interact with do in fact interact to determine task effectiveness.
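Illustrative sketch (not part of the published record): one way the crossed within-subjects conditions described above (communication style × load type × load level) could be enumerated and counterbalanced per participant. The factor names and levels are hypothetical placeholders, not the study's actual conditions.

```python
# Hypothetical sketch of a crossed within-subjects design: every participant
# experiences every combination of robot communication style and cognitive-load
# manipulation. All factor names and levels below are placeholders.
import itertools
import random

COMM_STYLES = ["speech_only", "ar_gesture", "speech_plus_ar"]  # assumed levels
LOAD_TYPES = ["visual", "auditory"]                            # assumed levels
LOAD_LEVELS = ["low", "high"]

def participant_schedule(participant_id: int, seed: int = 0) -> list[tuple]:
    """Return a per-participant ordering of all factor combinations,
    shuffled with a participant-specific seed as a simple counterbalance."""
    conditions = list(itertools.product(COMM_STYLES, LOAD_TYPES, LOAD_LEVELS))
    rng = random.Random(seed + participant_id)
    rng.shuffle(conditions)
    return conditions

if __name__ == "__main__":
    for trial, condition in enumerate(participant_schedule(participant_id=1), start=1):
        print(trial, condition)
```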
Award ID(s):
1909864, 1823245
NSF-PAR ID:
10155101
Author(s) / Creator(s):
Date Published:
Journal Name:
International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction
Volume:
3
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Despite the phenomenal advances in the computational power and functionality of electronic systems, human-machine interaction has largely been limited to simple control panels, keyboards, mice, and displays. Consequently, these systems either rely critically on close human guidance or operate almost independently of the user. An exemplar technology integrated tightly into our lives is the smartphone. However, the term “smart” is a misnomer, since it fundamentally has no intelligence with which to understand its user. Users still have to type, touch, or speak (to some extent) to express their intentions in a form accessible to the phone. Hence, intelligent decision making is still almost entirely a human task. A life-changing experience can be achieved by transforming machines from passive tools into agents capable of understanding human physiology and what their user wants [1]. This can advance human capabilities in unimagined ways by building a symbiotic relationship to solve real-world problems cooperatively. One of the high-impact application areas of this approach is assistive internet of things (IoT) technologies for physically challenged individuals. The Annual World Report on Disability reveals that 15% of the world population lives with disability, while 110 to 190 million of these people have difficulty in functioning [1]. Quality of life for this population can improve significantly if we can provide accessibility to smart devices, which provide sensory inputs and assist with everyday tasks. This work demonstrates that smart IoT devices open up the possibility of alleviating the burden on the user by equipping everyday objects, such as a wheelchair, with decision-making capabilities. Moving part of the intelligent decision making to smart IoT objects requires a robust mechanism for human-machine communication (HMC). To address this challenge, we present examples of multimodal HMC mechanisms, where the modalities are electroencephalogram (EEG), speech commands, and motion sensing. We also introduce an IoT co-simulation framework developed using a network simulator (OMNeT++) and the Virtual Robot Experimentation Platform (V-REP) robot simulator. We show how this framework is used to evaluate the effectiveness of different HMC strategies using automated indoor navigation as a driver application.
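A minimal, hypothetical sketch (not the paper's implementation or its OMNeT++/V-REP framework) of how confidence-weighted votes from the three modalities named above, EEG, speech, and motion sensing, could be fused into a single navigation command. The modality weights, command names, and confidences are assumptions.

```python
# Hypothetical fusion of multimodal human-machine communication (HMC) inputs
# into one navigation command for a smart wheelchair. Illustrative only.
from collections import defaultdict

# Assumed per-modality reliability weights (EEG is typically the noisiest).
MODALITY_WEIGHTS = {"eeg": 0.5, "speech": 1.0, "motion": 0.8}

def fuse_commands(observations: list[dict]) -> str:
    """Each observation: {"modality": ..., "command": ..., "confidence": 0..1}.
    Returns the command with the highest weighted confidence mass."""
    scores = defaultdict(float)
    for obs in observations:
        weight = MODALITY_WEIGHTS.get(obs["modality"], 0.0)
        scores[obs["command"]] += weight * obs["confidence"]
    return max(scores, key=scores.get)

if __name__ == "__main__":
    votes = [
        {"modality": "eeg", "command": "forward", "confidence": 0.6},
        {"modality": "speech", "command": "turn_left", "confidence": 0.4},
        {"modality": "motion", "command": "forward", "confidence": 0.7},
    ]
    print(fuse_commands(votes))  # -> "forward"
```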
  2. While High Performance Computing systems are increasingly based on heterogeneous cores, their effectiveness depends on how well the scheduler can allocate workloads onto appropriate computing devices and how communication and computation can be overlapped. With different types of resources integrated into one system, the complexity of the scheduler correspondingly increases. Moreover, for applications with varying problem sizes on different heterogeneous resources, the optimal scheduling approach may vary accordingly. We thus present PDAWL, an event-driven, profile-based Iterative Dynamic Adaptive Work-Load balance scheduling approach to dynamically and adaptively adjust workload to efficiently utilize heterogeneous resources. It combines online scheduling (DAWL), which can adaptively adjust workload based on available real-time heterogeneous resources, with offline machine learning (a profile-based estimation model), which can build a device-specific communication-computation estimation model. Our scheduling approach is tested on control-regular applications, a Stencil kernel (based on a Jacobi algorithm) and Sparse Matrix-Vector Multiplication (SpMV), in an event-driven runtime system. Experimental results show that PDAWL is either on par with or far outperforms whichever single device (CPU or GPU) yields the best results.
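An illustrative sketch of the core idea, not PDAWL itself: split a workload between CPU and GPU using a per-device execution-time estimate standing in for the profile-based estimation model. The linear cost models and their constants are assumptions.

```python
# Illustrative CPU/GPU work partitioning in the spirit of profile-based
# adaptive load balancing. The linear cost models below are stand-ins for
# PDAWL's learned, device-specific communication/computation estimates.

def estimated_time(rows: int, setup_cost: float, per_row_cost: float) -> float:
    """Simple linear model: fixed setup/transfer cost plus per-row compute."""
    return 0.0 if rows == 0 else setup_cost + per_row_cost * rows

def split_work(total_rows: int, cpu=(0.0, 2.0e-4), gpu=(5.0, 2.0e-5)) -> int:
    """Return the number of rows to give the CPU so that the predicted
    makespan max(cpu_time, gpu_time) is minimized (brute force over splits)."""
    best_rows, best_makespan = 0, float("inf")
    for cpu_rows in range(total_rows + 1):
        gpu_rows = total_rows - cpu_rows
        makespan = max(estimated_time(cpu_rows, *cpu),
                       estimated_time(gpu_rows, *gpu))
        if makespan < best_makespan:
            best_rows, best_makespan = cpu_rows, makespan
    return best_rows

if __name__ == "__main__":
    rows = 100_000
    cpu_rows = split_work(rows)
    print(f"CPU rows: {cpu_rows}, GPU rows: {rows - cpu_rows}")
```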
  3. Graph-based namespaces are increasingly used to represent the organization of complex and ever-growing information ecosystems and individual user roles. Timely and accurate information dissemination requires an architecture with appropriate naming frameworks, adaptable to changing roles, focused on content rather than network addresses. Today's complex information organization structures make such dissemination very challenging. To address this, we propose POISE, a name-based publish/subscribe architecture for efficient topic-based and recipient-based content dissemination. POISE proposes an information layer, improving on state-of-the-art Information-Centric Networking solutions in two major ways: 1) support for complex graph-based namespaces, and 2) automatic name-based load-splitting. POISE supports in-network graph-based naming, leveraged in a dissemination protocol that exploits information-layer rendezvous points (RPs) that perform name expansions. For improved robustness and scalability, POISE supports adaptive load-sharing via multiple RPs, each managing a dynamically chosen subset of the namespace graph. Excessive workload may turn one RP into a "hot spot", impeding performance and reliability. To eliminate such traffic concentration, we propose an automated load-splitting mechanism, consisting of enhanced namespace-graph partitioning complemented by a seamless, lossless core migration procedure. Due to the nature of our graph partitioning and its complex objectives, off-the-shelf graph partitioners, e.g., METIS, are inadequate. We propose a hybrid, iterative bi-partitioning solution, consisting of an initial phase and a refinement phase. We also implemented POISE on a DPDK-based platform. Using the important application of emergency response, our experimental results show that POISE outperforms state-of-the-art solutions, demonstrating its effectiveness in timely delivery and load-sharing.
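A toy sketch, not POISE's hybrid iterative bi-partitioner, of the basic idea of splitting a namespace between two rendezvous points by balancing per-topic load. It ignores the namespace graph's edges, which POISE's partitioning also considers; the topics and load numbers are made up.

```python
# Toy bi-partitioning of namespace topics between two rendezvous points (RPs).
# A greedy load-balancing split stands in for POISE's hybrid iterative
# bi-partitioning; topics and per-topic loads are hypothetical.

def bipartition(load_per_topic: dict[str, float]) -> tuple[set, set]:
    """Greedily assign each topic (heaviest first) to the lighter RP,
    approximating an even split of total subscriber/traffic load."""
    rp_a, rp_b = set(), set()
    load_a = load_b = 0.0
    for topic, load in sorted(load_per_topic.items(), key=lambda kv: -kv[1]):
        if load_a <= load_b:
            rp_a.add(topic)
            load_a += load
        else:
            rp_b.add(topic)
            load_b += load
    return rp_a, rp_b

if __name__ == "__main__":
    # Hypothetical emergency-response namespace with per-topic message rates.
    namespace_load = {"/incident/fire": 40, "/incident/medical": 25,
                      "/responders/police": 20, "/responders/ems": 15,
                      "/public/alerts": 30}
    rp_a, rp_b = bipartition(namespace_load)
    print("RP-A handles:", sorted(rp_a))
    print("RP-B handles:", sorted(rp_b))
```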
  4. The recent development of Robot-Assisted Minimally Invasive Surgery (RAMIS) has brought much benefit, easing the performance of complex Minimally Invasive Surgery (MIS) tasks and leading to better clinical outcomes. Compared to direct master-slave manipulation, semi-autonomous control of the surgical robot can enhance the efficiency of the operation, particularly for repetitive tasks. However, operating in a highly dynamic in-vivo environment is complex. Supervisory control functions should be included to ensure flexibility and safety during the autonomous control phase. This paper presents a haptic rendering interface that enables supervised semi-autonomous control of a surgical robot. Bayesian optimization is used to tune user-specific parameters during the surgical training process. User studies were conducted on a customized simulator for validation. Detailed comparisons are made between operation with and without the supervised semi-autonomous control mode in terms of the number of clutching events, task completion time, master robot end-effector trajectory, and average control speed of the slave robot. The effectiveness of the Bayesian optimization is also evaluated, demonstrating that the optimized parameters can significantly improve users' performance. Results indicate that the proposed control method can reduce the operator's workload and enhance operation efficiency.
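A minimal sketch of a Bayesian-optimization tuning loop in the spirit of the user-specific parameter tuning described above. It uses scikit-optimize's gp_minimize and a synthetic objective; the "guidance gain" parameter and the cost function are assumptions, not the authors' simulator or cost.

```python
# Minimal Bayesian-optimization loop for tuning a hypothetical user-specific
# control parameter (a haptic guidance gain). The objective is a synthetic
# stand-in for a user's measured task completion time, not the paper's setup.
# Requires scikit-optimize: pip install scikit-optimize
from skopt import gp_minimize

def simulated_task_cost(params):
    """Pretend cost: task completion time (seconds) as a function of the
    guidance gain, with an optimum near gain = 0.6 (purely illustrative)."""
    gain = params[0]
    return (gain - 0.6) ** 2 * 50 + 20

result = gp_minimize(
    simulated_task_cost,
    dimensions=[(0.0, 1.0)],   # search range for the hypothetical gain
    n_calls=20,
    random_state=0,
)
print("Best gain:", result.x[0], "predicted completion time:", result.fun)
```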
  5. Motion tracking interfaces are intuitive for free-form teleoperation tasks. However, efficient manipulation control can be difficult with such interfaces because of issues such as interference from unintended motions and the limited precision of human motion control. This limited control efficiency reduces the operator's performance and increases their workload and frustration during robot teleoperation. To improve efficiency, we proposed separating the controlled degrees of freedom (DoFs) and adjusting the motion scaling ratio of a motion tracking interface. The motion tracking of handheld controllers from a Virtual Reality system was used for the interface. We separated translational and rotational control into: 1) two controllers held in the dominant and non-dominant hands and 2) hand pose tracking and trackpad inputs of a single controller. We scaled the control mapping ratio based on 1) the environmental constraints and 2) the teleoperator's control speed. We further conducted a user study to investigate the effectiveness of the proposed methods in increasing efficiency. Our results show that separating position and orientation control across two controllers and the environment-based scaling method perform better than their alternatives.
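A simple sketch, not the paper's interface code, of the speed-based scaling idea: slow, careful hand motion maps to fine robot motion, while fast motion maps closer to 1:1. The thresholds and ratio bounds are invented values.

```python
# Illustrative speed-based motion scaling for teleoperation: slower operator
# hand motion gets a smaller mapping ratio (finer robot motion), faster motion
# gets a ratio closer to 1:1. All thresholds and ratios are made-up values.

def scale_ratio(hand_speed_m_s: float,
                slow_thresh: float = 0.05,
                fast_thresh: float = 0.30,
                min_ratio: float = 0.2,
                max_ratio: float = 1.0) -> float:
    """Linearly interpolate the position-mapping ratio from the hand speed."""
    if hand_speed_m_s <= slow_thresh:
        return min_ratio
    if hand_speed_m_s >= fast_thresh:
        return max_ratio
    t = (hand_speed_m_s - slow_thresh) / (fast_thresh - slow_thresh)
    return min_ratio + t * (max_ratio - min_ratio)

def robot_delta(hand_delta_m: float, hand_speed_m_s: float) -> float:
    """Scale a measured hand displacement into a commanded robot displacement."""
    return scale_ratio(hand_speed_m_s) * hand_delta_m

if __name__ == "__main__":
    for speed in (0.02, 0.10, 0.40):
        print(f"hand speed {speed:.2f} m/s -> ratio {scale_ratio(speed):.2f}")
```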