

Title: Disassociation of Visual-Proprioception Feedback to Enhance Endotracheal Intubation
This paper discusses the key elements of a research study focused on training novice students in an important procedure called “endotracheal intubation.” This procedure is a vital part of treating patients infected with the COVID-19 virus. A virtual reality environment was created on the HTC Vive platform to facilitate the training of novice nurses (or nurse trainees). The primary means of interacting with virtual objects inside this simulation-based training environment was the hand controller. However, the small mouth of the virtual patient, together with the need to pick up the laryngoscope and endotracheal tube with both hands at the same time during training, led to collisions between the hand controllers and hampered the participants’ immersive experience. To address this problem, we proposed an approach based on the notion of multi-sensory conflict and applied a “haptic retargeting” method, comparing its results against a reference condition. Initial results (gathered through a questionnaire) suggest that the haptic retargeting approach increases participants’ sense of presence in the virtual environment.
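The body-warping idea behind haptic retargeting can be illustrated with a short sketch: the rendered (virtual) hand is offset from the physical hand by an amount that grows with reach progress, so the physical hand lands on one physical prop while the virtual hand lands on a differently placed virtual target. The abstract gives no implementation details, so all function and parameter names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def retargeted_hand(physical_hand, start, physical_target, virtual_target):
    """Body-warping haptic retargeting sketch (illustrative): blend in the
    offset between the virtual and physical targets as the physical hand
    travels from `start` toward `physical_target`. All arguments are 3-vectors."""
    total = np.linalg.norm(physical_target - start)
    travelled = np.linalg.norm(physical_hand - start)
    # progress ramps 0 -> 1 as the physical hand approaches its target
    alpha = np.clip(travelled / total, 0.0, 1.0) if total > 0 else 1.0
    offset = virtual_target - physical_target
    return physical_hand + alpha * offset
```

At the start of the reach the rendered hand coincides with the physical hand; at the end it coincides with the virtual target, so the user feels the physical prop exactly when the virtual grasp completes.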
Award ID(s):
2106901
NSF-PAR ID:
10435921
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
2022 International Conference on Digital Transformation and Intelligence
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Technological advancements and increased access have prompted the adoption of head-mounted display based virtual reality (VR) for neuroscientific research, manual skill training, and neurological rehabilitation. Applications that focus on manual interaction within the virtual environment (VE), especially haptic-free VR, critically depend on virtual hand-object collision detection. Knowledge about how multisensory integration related to hand-object collisions affects perception-action dynamics and reach-to-grasp coordination is needed to enhance the immersiveness of interactive VR. Here, we explored whether and to what extent sensory substitution for haptic feedback of hand-object collision (visual, audio, or audiovisual) and collider size (size of spherical pointers representing the fingertips) influences reach-to-grasp kinematics. In Study 1, visual, auditory, or combined feedback were compared as sensory substitutes to indicate the successful grasp of a virtual object during reach-to-grasp actions. In Study 2, participants reached to grasp virtual objects using spherical colliders of different diameters to test if virtual collider size impacts reach-to-grasp. Our data indicate that collider size but not sensory feedback modality significantly affected the kinematics of grasping. Larger colliders led to a smaller size-normalized peak aperture. We discuss this finding in the context of a possible influence of spherical collider size on the perception of the virtual object’s size and hence effects on motor planning of reach-to-grasp. Critically, reach-to-grasp spatiotemporal coordination patterns were robust to manipulations of sensory feedback modality and spherical collider size, suggesting that the nervous system adjusted the reach (transport) component commensurately to the changes in the grasp (aperture) component. These results have important implications for research, commercial, industrial, and clinical applications of VR.
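The hand-object collision detection that this abstract centers on reduces, for spherical fingertip colliders and a spherical object, to a distance test. The sketch below is a minimal assumed form (names and radii are illustrative, not from the paper); it also makes concrete why a larger collider radius triggers contact at a wider finger separation, consistent with the reported smaller normalized peak aperture.

```python
import numpy as np

def fingertip_contact(fingertip_center, collider_radius, object_center, object_radius):
    """Sphere-sphere collision test (illustrative): a fingertip collider is in
    contact with a spherical virtual object when the distance between centers
    does not exceed the sum of the two radii."""
    gap = np.linalg.norm(fingertip_center - object_center)
    return gap <= collider_radius + object_radius
```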
  2.
    An important problem in designing human-robot systems is the integration of human intent and performance in the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop), than when the ends of the suture are pulled apart. In this study, we develop an online recognition method of bimanual coordination modes (i.e., the directions and symmetries of right and left hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System’s surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, making a C-loop, and suture pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, and thus inspire future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
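A minimal geometric sketch of the together/parallel/away distinction the abstract mentions: compare the two hand velocities directly, and otherwise look at the rate of change of the inter-hand distance. This is an assumed, simplified descriptor for illustration only; the paper's actual geometric descriptors and thresholds are not given here.

```python
import numpy as np

def classify_direction(p_left, v_left, p_right, v_right, cos_thresh=0.9):
    """Illustrative bimanual direction classifier: 'parallel' when both hand
    velocities point roughly the same way; otherwise 'together' or 'away'
    depending on whether the inter-hand distance is shrinking or growing."""
    cos = np.dot(v_left, v_right) / (np.linalg.norm(v_left) * np.linalg.norm(v_right))
    if cos > cos_thresh:
        return "parallel"
    axis = p_right - p_left
    axis = axis / np.linalg.norm(axis)
    # positive when the right hand moves away from the left faster than
    # the left hand follows, i.e., the hands are separating
    separation_rate = np.dot(v_right - v_left, axis)
    return "away" if separation_rate > 0 else "together"
```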
  3.

    Physical human–robot interactions (pHRI) often provide mechanical force and power to aid walking without requiring voluntary effort from the human. Alternatively, principles of physical human–human interactions (pHHI) can inspire pHRI that aids walking by engaging human sensorimotor processes. We hypothesize that low-force pHHI can intuitively induce a person to alter their walking through haptic communication. In our experiment, an expert partner dancer influenced novice participants to alter step frequency solely through hand interactions. Without prior instruction, training, or knowledge of the expert’s goal, novices decreased step frequency 29% and increased step frequency 18% based on low forces (< 20 N) at the hand. Power transfer at the hands was 3–700 × smaller than what is necessary to propel locomotion, suggesting that hand interactions did not mechanically constrain the novice’s gait. Instead, the sign/direction of hand forces and power may communicate information about how to alter walking. Finally, the expert modulated her arm effective dynamics to match that of each novice, suggesting a bidirectional haptic communication strategy for pHRI that adapts to the human. Our results provide a framework for developing pHRI at the hand that may be applicable to assistive technology and physical rehabilitation, human-robot manufacturing, physical education, and recreation.
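The power-transfer comparison in this abstract rests on the standard definition of instantaneous mechanical power at a contact point, the dot product of interaction force and velocity. A one-line sketch (function name is illustrative) makes the sign convention concrete: the sign indicates the direction of energy flow between partners, which is part of what the abstract suggests carries the communicated information.

```python
import numpy as np

def interaction_power(force, hand_velocity):
    """Instantaneous mechanical power (W) transferred through the hand
    contact: dot product of interaction force (N) and hand velocity (m/s).
    Positive and negative values correspond to opposite directions of
    energy flow between the two partners."""
    return float(np.dot(force, hand_velocity))
```

For example, a 10 N force along the walking direction with the hand moving at 0.2 m/s transfers only 2 W, which is far below what is needed to propel locomotion, consistent with the abstract's argument that the interaction is communicative rather than mechanical.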

     
  4. Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing feedback of multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and these studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they can vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In our preliminary study using this system, participants complete a peg transfer task using a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight the capability of our system in running systematic comparisons between various single and dual-modality haptic feedback approaches. 
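One piece of a multi-modality haptic feedback pipeline like the one described is the mapping from sensed tool-interaction signals (applied force, contact acceleration) to a drive level for a wrist-worn haptic device. The sketch below is a plain-Python illustration of one plausible mapping; the scaling limits, the max-combination rule, and all names are assumptions, not the ROS framework the abstract describes.

```python
def vibration_command(force_n=None, accel_ms2=None,
                      force_max=5.0, accel_max=50.0):
    """Map sensed tool-interaction signals to a normalized vibration
    amplitude in [0, 1] for a wrist-worn haptic device (illustrative).
    Either modality may be absent; when both are present, the stronger
    cue dominates."""
    cues = []
    if force_n is not None:
        cues.append(min(abs(force_n) / force_max, 1.0))
    if accel_ms2 is not None:
        cues.append(min(abs(accel_ms2) / accel_max, 1.0))
    return max(cues) if cues else 0.0
```

In a ROS-based system such a function would sit in a subscriber callback, converting each incoming sensor message into a command published to the haptic device driver.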
  5. Exoskeletons, as a human augmentation technology, have shown great potential for transforming future civil engineering operations. However, inappropriate use of an exoskeleton could cause injuries and damage if the user is not well trained. Effective procedural and operational training makes users more aware of the capabilities, restrictions, and risks associated with exoskeletons in civil engineering operations. At present, the low availability and high cost of exoskeleton systems make hands-on training less feasible. In addition, different exoskeleton designs correspond to different activation procedures, muscular engagement, and motion boundaries, posing further challenges to exoskeleton training. We propose a “sensation transfer” approach that migrates the physical experience of wearing a real exoskeleton system to first-time users via a passive haptic system in an immersive virtual environment. The body motion and muscular engagement data of 15 experienced exoskeleton users were recorded and replayed in a virtual reality environment. A set of haptic devices on key parts of the body (shoulders, elbows, hands, and waist) then generates different patterns of haptic cues depending on the trainees’ accuracy in mimicking the actions. The sensation transfer method will enhance the haptic learning experience and therefore accelerate training.
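One way to realize cues that depend on the trainee's accuracy in mimicking the replayed expert motion is to scale each body-site cue by the tracking error, silent within a tolerance band and saturating at large deviations. The sketch below is an assumed illustration; the tolerance and saturation values and all names are hypothetical, not taken from the paper.

```python
import numpy as np

def cue_intensity(trainee_pose, expert_pose, tolerance=0.05, saturation=0.30):
    """Scale a corrective haptic cue for one body site (e.g., the elbow) by
    how far the trainee's tracked position (m) deviates from the recorded
    expert motion: zero inside the tolerance band, rising linearly to a
    maximum of 1.0 at the saturation error (illustrative values)."""
    error = np.linalg.norm(np.asarray(trainee_pose) - np.asarray(expert_pose))
    if error <= tolerance:
        return 0.0
    return min((error - tolerance) / (saturation - tolerance), 1.0)
```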