Title: Disassociation of Visual-Proprioception Feedback to Enhance Endotracheal Intubation
This paper discusses the key elements of a research study focused on training novice students in an important procedure called endotracheal intubation, a vital part of treating patients infected with COVID-19. A virtual reality environment was created on the HTC Vive platform to facilitate the training of novice nurses (or nurse trainees). The primary interaction with the virtual objects inside this simulation-based training environment was through the hand controllers. However, the small mouth of the virtual patient and the need to use both hands to pick up the laryngoscope and endotracheal tube at the same time during training led to collisions between the hand controllers and hampered the participants' immersive experience. To address this problem, we proposed an approach based on the notion of multi-sensory conflict and applied a haptic retargeting method, comparing it against a reference condition. Initial results from a questionnaire suggest that the haptic retargeting approach increases the participants' sense of presence in the virtual environment.
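The abstract does not spell out how the retargeting was implemented, so the sketch below only illustrates the general body-warping idea behind haptic retargeting: the rendered hand is gradually offset from the tracked controller so that reaches toward one physical location can land on separate virtual targets. The function and parameter names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def retarget_hand(controller_pos, reach_start, physical_target, virtual_target):
    """Body-warping haptic retargeting sketch: as the tracked controller moves
    from reach_start toward the physical target, blend in the offset between
    the physical and virtual targets so the rendered hand arrives at the
    virtual target exactly when the real hand reaches the physical one."""
    total_dist = np.linalg.norm(physical_target - reach_start) + 1e-9
    remaining = np.linalg.norm(physical_target - controller_pos)
    alpha = np.clip(1.0 - remaining / total_dist, 0.0, 1.0)  # 0 at reach start, 1 at contact
    offset = virtual_target - physical_target
    return controller_pos + alpha * offset  # position used to render the virtual hand

# Example: halfway through the reach the rendered hand carries half the offset.
hand = retarget_hand(np.array([0.0, 0.0, 0.25]), np.array([0.0, 0.0, 0.5]),
                     np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0]))
# -> approximately [0.05, 0.0, 0.25]
```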
Award ID(s):
2106901
PAR ID:
10435921
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 International Conference on Digital Transformation and Intelligence
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Physical human–robot interactions (pHRI) often provide mechanical force and power to aid walking without requiring voluntary effort from the human. Alternatively, principles of physical human–human interactions (pHHI) can inspire pHRI that aids walking by engaging human sensorimotor processes. We hypothesize that low-force pHHI can intuitively induce a person to alter their walking through haptic communication. In our experiment, an expert partner dancer influenced novice participants to alter step frequency solely through hand interactions. Without prior instruction, training, or knowledge of the expert's goal, novices decreased step frequency 29% and increased step frequency 18% based on low forces (< 20 N) at the hand. Power transfer at the hands was 3–700× smaller than what is necessary to propel locomotion, suggesting that hand interactions did not mechanically constrain the novice's gait. Instead, the sign/direction of hand forces and power may communicate information about how to alter walking. Finally, the expert modulated her arm effective dynamics to match that of each novice, suggesting a bidirectional haptic communication strategy for pHRI that adapts to the human. Our results provide a framework for developing pHRI at the hand that may be applicable to assistive technology and physical rehabilitation, human-robot manufacturing, physical education, and recreation.
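To make the power-transfer comparison in the entry above concrete, here is a toy calculation of mechanical power at the hand as the dot product of force and hand velocity; the numbers are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical instantaneous hand force (N) and hand velocity (m/s); the
# values are made up, chosen only to stay under the ~20 N ceiling reported above.
force = np.array([12.0, 3.0, -5.0])
velocity = np.array([0.15, 0.02, 0.05])

hand_power = float(np.dot(force, velocity))   # mechanical power at the hand, in watts
locomotion_power = 150.0                      # assumed order of magnitude needed to propel walking

print(f"hand power ~ {hand_power:.2f} W, "
      f"about {locomotion_power / hand_power:.0f}x smaller than locomotion power")
```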
  2. In this study, we developed a new haptic–mixed reality intravenous (HMR-IV) needle insertion simulation system, providing a bimanual haptic interface integrated into a mixed reality system with programmable variabilities considering real clinical environments. The system was designed for nursing students or healthcare professionals to practice IV needle insertion into a virtual arm with unlimited attempts under various changing insertion conditions (e.g., skin: color, texture, stiffness, friction; vein: size, shape, location depth, stiffness, friction). To achieve accurate hand–eye coordination under dynamic mixed reality scenarios, two different haptic devices (Dexmo and Geomagic Touch) and a standalone mixed reality system (HoloLens 2) were integrated and synchronized through multistep calibration for different coordinate systems (real world, virtual world, mixed reality world, haptic interface world, HoloLens camera). In addition, force-profile-based haptic rendering proposed in this study was able to successfully mimic the real tactile feeling of IV needle insertion. Further, a global hand-tracking method, combining two depth sensors (HoloLens and Leap Motion), was developed to accurately track a haptic glove and simulate grasping a virtual hand with force feedback. We conducted an evaluation study with 20 participants (9 experts and 11 novices) to measure the usability of the HMR-IV simulation system with user performance under various insertion conditions. The quantitative results from our own metric and qualitative results from the NASA Task Load Index demonstrate the usability of our system. 
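The force-profile-based haptic rendering mentioned in the entry above is not specified in detail here; the sketch below is a generic piecewise needle-insertion force profile (force rising until skin puncture, dropping, rising again through tissue, then dropping inside the vein), with all depths and stiffness values made up for illustration.

```python
def needle_force(depth_mm,
                 skin_depth=2.0, vein_depth=8.0,     # assumed breakpoints (mm)
                 k_skin=0.6, k_tissue=0.25,          # assumed stiffness values (N/mm)
                 post_puncture=0.5):                 # assumed residual force (N)
    """Toy piecewise force-vs-depth profile for IV needle insertion."""
    if depth_mm < skin_depth:                        # loading the skin surface
        return k_skin * depth_mm
    if depth_mm < vein_depth:                        # after skin puncture, through tissue
        return post_puncture + k_tissue * (depth_mm - skin_depth)
    return 0.5 * post_puncture                       # inside the vein: force drops again

# A haptic loop would sample this at the device rate, e.g. needle_force(5.0) -> 1.25 N.
```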
  3. Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing feedback of multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and these studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they can vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In our preliminary study using this system, participants complete a peg transfer task using a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight the capability of our system in running systematic comparisons between various single and dual-modality haptic feedback approaches. 
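The entry above names ROS as the backbone of the feedback framework; the sketch below is a minimal rospy node of that general shape, subscribing to a force stream and a contact-acceleration stream and republishing scaled drive commands for wrist-worn devices. All topic names, message types, and gains are assumptions for illustration, not the authors' interfaces.

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) haptic relay: scale incoming force and contact
# acceleration magnitudes into normalized drive commands for wrist-worn devices.
import rospy
from geometry_msgs.msg import WrenchStamped, Vector3Stamped
from std_msgs.msg import Float32

FORCE_GAIN = 0.05   # N -> normalized squeeze command (assumed gain)
ACCEL_GAIN = 0.01   # m/s^2 -> normalized vibration command (assumed gain)

def on_force(msg, pub):
    f = msg.wrench.force
    magnitude = (f.x**2 + f.y**2 + f.z**2) ** 0.5
    pub.publish(Float32(min(1.0, FORCE_GAIN * magnitude)))

def on_accel(msg, pub):
    a = msg.vector
    magnitude = (a.x**2 + a.y**2 + a.z**2) ** 0.5
    pub.publish(Float32(min(1.0, ACCEL_GAIN * magnitude)))

if __name__ == "__main__":
    rospy.init_node("haptic_relay")
    squeeze_pub = rospy.Publisher("/wrist_device/squeeze", Float32, queue_size=1)
    vibrate_pub = rospy.Publisher("/wrist_device/vibrate", Float32, queue_size=1)
    rospy.Subscriber("/tool/force", WrenchStamped, on_force, callback_args=squeeze_pub)
    rospy.Subscriber("/tool/contact_accel", Vector3Stamped, on_accel, callback_args=vibrate_pub)
    rospy.spin()
```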
  4. Technological advancements and increased access have prompted the adoption of head-mounted display based virtual reality (VR) for neuroscientific research, manual skill training, and neurological rehabilitation. Applications that focus on manual interaction within the virtual environment (VE), especially haptic-free VR, critically depend on virtual hand-object collision detection. Knowledge about how multisensory integration related to hand-object collisions affects perception-action dynamics and reach-to-grasp coordination is needed to enhance the immersiveness of interactive VR. Here, we explored whether and to what extent sensory substitution for haptic feedback of hand-object collision (visual, audio, or audiovisual) and collider size (size of spherical pointers representing the fingertips) influences reach-to-grasp kinematics. In Study 1, visual, auditory, or combined feedback were compared as sensory substitutes to indicate the successful grasp of a virtual object during reach-to-grasp actions. In Study 2, participants reached to grasp virtual objects using spherical colliders of different diameters to test if virtual collider size impacts reach-to-grasp. Our data indicate that collider size but not sensory feedback modality significantly affected the kinematics of grasping. Larger colliders led to a smaller size-normalized peak aperture. We discuss this finding in the context of a possible influence of spherical collider size on the perception of the virtual object's size and hence effects on motor planning of reach-to-grasp. Critically, reach-to-grasp spatiotemporal coordination patterns were robust to manipulations of sensory feedback modality and spherical collider size, suggesting that the nervous system adjusted the reach (transport) component commensurately to the changes in the grasp (aperture) component. These results have important implications for research, commercial, industrial, and clinical applications of VR.
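As a concrete illustration of the hand-object collision detection the entry above depends on, here is a simple sphere-versus-axis-aligned-box test for a spherical fingertip collider; the object bounds and radii are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def fingertip_collides(fingertip_center, collider_radius, box_min, box_max):
    """Sphere-vs-AABB test: clamp the fingertip center to the box to find the
    closest point on the object, then compare that distance with the collider
    radius (the spherical pointer size manipulated in Study 2)."""
    closest = np.clip(fingertip_center, box_min, box_max)
    return np.linalg.norm(fingertip_center - closest) <= collider_radius

# A larger collider registers contact from farther away, e.g.:
tip = np.array([0.0, 0.0, 0.06])
box_min, box_max = np.array([-0.05, -0.05, -0.05]), np.array([0.05, 0.05, 0.05])
print(fingertip_collides(tip, 0.005, box_min, box_max))  # False with a 5 mm collider
print(fingertip_collides(tip, 0.015, box_min, box_max))  # True with a 15 mm collider
```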
  5. Best Paper Award. When estimating the distance or size of an object in the real world, we often use our own body as a metric; this strategy is called body-based scaling. However, object size estimation in a virtual environment presented via a head-mounted display differs from the physical world due to technical limitations such as narrow field of view and low fidelity of the virtual body when compared to one's real body. In this paper, we focus on increasing the fidelity of a participant's body representation in virtual environments with a personalized hand using personalized characteristics and a visually faithful augmented virtuality approach. To investigate the impact of the personalized hand, we compared it against a generic virtual hand and measured effects on virtual body ownership, spatial presence, and object size estimation. Specifically, we asked participants to perform a perceptual matching task that was based on scaling a virtual box on a table in front of them. Our results show that the personalized hand not only increased virtual body ownership and spatial presence, but also supported participants in correctly estimating the size of a virtual object in the proximity of their hand. 
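As a small illustration of the perceptual matching measure implied by the box-scaling task above, the snippet below computes a per-trial size-estimation ratio and averages it; the variable names and sample values are hypothetical, not data from the study.

```python
def estimation_ratio(adjusted_size, true_size):
    """Ratio > 1 means the virtual box was over-scaled, < 1 means under-scaled."""
    return adjusted_size / true_size

# Hypothetical trials as (adjusted width, true width) in centimeters.
trials = [(11.5, 10.0), (9.2, 10.0), (10.4, 10.0)]
ratios = [estimation_ratio(a, t) for a, t in trials]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean estimation ratio = {mean_ratio:.3f}")   # ~1.037 -> slight overestimation
```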