- Award ID(s):
- 1830163
- NSF-PAR ID:
- 10387801
- Date Published:
- Journal Name:
- IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- Page Range / eLocation ID:
- 628 to 633
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Despite non-co-location, haptic stimulation at the wrist can potentially provide feedback regarding interactions at the fingertips without encumbering the user’s hand. Here we investigate how two types of skin deformation at the wrist (normal and shear) relate to the perception of the mechanical properties of virtual objects. We hypothesized that a congruent mapping (i.e., when the most relevant interaction forces during a virtual interaction spatially match the haptic feedback at the wrist) would result in better perception than other mappings. We performed an experiment in which haptic devices at the wrist rendered either normal or shear feedback during manipulation of virtual objects with varying stiffness, mass, or friction properties. Perception of mechanical properties was more accurate with congruent skin stimulation than with non-congruent stimulation. In addition, discrimination performance and subjective reports were positively influenced by congruence. This study demonstrates that users can perceive mechanical properties via haptic feedback provided at the wrist given a consistent mapping between haptic feedback and interaction forces at the fingertips, regardless of congruence.
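To make the congruent/non-congruent distinction concrete, here is a minimal sketch, assuming a hypothetical device command format and gain that the abstract does not specify: under a congruent mapping, normal fingertip forces (e.g., squeezing) drive normal indentation at the wrist, while tangential forces (e.g., weight and friction during lifting) drive shear.

```python
# Minimal sketch of a congruent force-to-wrist mapping (function name,
# units, and gain are hypothetical; the paper only states the concept).
def congruent_wrist_commands(normal_force_n: float,
                             tangential_force_n: float,
                             gain: float = 0.2) -> dict:
    """Map fingertip interaction forces (N) to wrist-device commands."""
    return {
        "normal_indentation_mm": gain * normal_force_n,      # stiffness cues
        "shear_displacement_mm": gain * tangential_force_n,  # mass/friction cues
    }

# Swapping the two outputs would yield a non-congruent mapping.
print(congruent_wrist_commands(5.0, 2.0))
# {'normal_indentation_mm': 1.0, 'shear_displacement_mm': 0.4}
```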
-
Recent advances in extended reality (XR) technologies make seeing and hearing virtual objects commonplace, yet strategies for synthesizing haptic interactions with virtual objects remain limited. Two design principles govern the rendering of believable and intuitive haptic feedback: movement through open space must feel “free,” while contact with virtual objects must feel stiff. Herein, a novel multisensory approach is presented that conveys proprioception and effort through illusory visual feedback and refers discrete and continuous interaction forces, which would otherwise occur at the hands and fingertips, to the wrist via a bracelet interface. Results demonstrate that users reliably discriminate the stiffness of virtual buttons when provided with multisensory pseudohaptic feedback, comprising tactile pseudohaptic feedback (discrete vibrotactile feedback and continuous squeeze cues in a bracelet interface) and visual pseudohaptic illusions of touch interactions. Compared to the use of tactile or visual pseudohaptic feedback alone, multisensory pseudohaptic feedback expands the range of physical stiffnesses that are intuitively associated with the rendered virtual interactions and reduces individual differences in physical-to-virtual stiffness mappings. This multisensory approach, which leaves users' hands unencumbered, provides a flexible framework for synthesizing a wide array of touch-enabled interactions in XR, with great potential for enhancing user experiences.
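The visual half of this approach follows the standard pseudohaptic pattern: stiffer virtual buttons are rendered by displacing the button less on screen per unit of physical press. Below is a minimal sketch of that control/display scaling; the gain law, reference stiffness, and function name are assumptions, not the article's implementation.

```python
# Pseudohaptic stiffness sketch (gain law and constants are assumed, not
# taken from the article): visual displacement = physical press depth
# scaled by a control/display gain that shrinks as virtual stiffness grows.
REFERENCE_STIFFNESS_N_PER_M = 200.0  # assumed baseline mapping to gain 1.0

def visual_press_depth(physical_depth_m: float,
                       virtual_stiffness_n_per_m: float) -> float:
    """Return the on-screen button displacement for a measured press."""
    gain = REFERENCE_STIFFNESS_N_PER_M / max(virtual_stiffness_n_per_m, 1e-6)
    return physical_depth_m * gain

# A button twice as stiff as the reference visually travels half as far:
print(visual_press_depth(0.010, 400.0))  # 0.005
```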
-
Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing feedback of multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and these studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they can vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In our preliminary study using this system, participants complete a peg transfer task using a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight the capability of our system to run systematic comparisons between various single- and dual-modality haptic feedback approaches.
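As an illustration of what such a ROS-based relay might look like, here is a minimal rospy sketch: it subscribes to force and contact-acceleration streams and republishes scaled commands for wrist-worn actuators. The topic names, message choices, and gains are assumptions for illustration, not the authors' framework.

```python
# Hypothetical ROS node sketching the modular relay described above:
# force and contact-acceleration streams in, wrist actuator commands out.
import rospy
from geometry_msgs.msg import WrenchStamped, Vector3Stamped
from std_msgs.msg import Float32

FORCE_GAIN = 0.5   # assumed scaling from sensed force (N) to squeeze command
ACCEL_GAIN = 0.1   # assumed scaling from acceleration (m/s^2) to vibration

def on_force(msg: WrenchStamped) -> None:
    # Continuous squeeze cue proportional to the applied tool force.
    squeeze_pub.publish(Float32(FORCE_GAIN * abs(msg.wrench.force.z)))

def on_accel(msg: Vector3Stamped) -> None:
    # Transient vibrotactile cue driven by contact accelerations.
    vib_pub.publish(Float32(ACCEL_GAIN * abs(msg.vector.z)))

if __name__ == "__main__":
    rospy.init_node("haptic_relay")
    squeeze_pub = rospy.Publisher("/wrist/squeeze_cmd", Float32, queue_size=1)
    vib_pub = rospy.Publisher("/wrist/vibration_cmd", Float32, queue_size=1)
    rospy.Subscriber("/tool/force", WrenchStamped, on_force)
    rospy.Subscriber("/tool/contact_accel", Vector3Stamped, on_accel)
    rospy.spin()
```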
-
People use their hands for intricate tasks like playing musical instruments, employing myriad touch sensations to inform motor control. In contrast, current prosthetic hands lack comprehensive haptic feedback and exhibit rudimentary multitasking functionality. Limited research has explored the potential of upper limb amputees to feel, perceive, and respond to multiple channels of simultaneously activated haptic feedback to concurrently control the individual fingers of dexterous prosthetic hands. This study introduces a novel control architecture for three amputees and nine additional subjects to concurrently control individual fingers of an artificial hand using two channels of context-specific haptic feedback. Artificial neural networks (ANNs) recognize subjects’ electromyogram (EMG) patterns governing the artificial hand controller. ANNs also classify the directions objects slip across tactile sensors on the robotic fingertips, which are encoded via the vibration frequency of wearable vibrotactile actuators. Subjects implement control strategies with each finger simultaneously to prevent or permit slip as desired, achieving a 94.49% ± 8.79% overall success rate. Although no statistically significant difference exists between amputees’ and non-amputees’ success rates, amputees require more time to respond to simultaneous haptic feedback signals, suggesting a higher cognitive load. Nevertheless, amputees can accurately interpret multiple channels of nuanced haptic feedback to concurrently control individual robotic fingers, addressing the challenge of multitasking with dexterous prosthetic hands.
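The haptic encoding described here, slip direction mapped to vibration frequency, can be sketched in a few lines. The direction labels and frequency values below are illustrative assumptions; the study's actual classes and actuator parameters are not given in the abstract.

```python
# Illustrative frequency encoding of a classified slip direction (labels
# and frequency values are assumptions, not the study's parameters).
SLIP_FREQUENCY_HZ = {
    "none": 0.0,       # stable grasp: actuator off
    "distal": 80.0,    # object sliding out of the fingers
    "proximal": 180.0, # object sliding toward the palm
}

def vibrotactile_command(slip_direction: str) -> float:
    """Return the drive frequency (Hz) for the wearable actuator."""
    return SLIP_FREQUENCY_HZ.get(slip_direction, 0.0)

print(vibrotactile_command("distal"))  # 80.0
```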
-
In this work, we investigate the influence of different visualizations on a manipulation task in virtual reality (VR). Without the haptic feedback of the real world, grasping in VR might result in intersections with virtual objects. As people are highly sensitive when it comes to perceiving collisions, it might look more appealing to avoid intersections and visualize non-colliding hand motions. However, correcting the position of the hand or fingers results in a visual-proprioceptive discrepancy and must be used with caution. Furthermore, the lack of haptic feedback in the virtual world might result in slower actions, as a user might not know exactly when a grasp has occurred. This reduced performance could be remediated with adequate visual feedback. In this study, we analyze the performance, level of ownership, and user preference of eight different visual feedback techniques for virtual grasping. Three techniques show the tracked hand (with or without grasping feedback), even if it intersects with the grasped object. Another three techniques display a hand without intersections with the object, called the outer hand, simulating the look of a real-world interaction. One visualization is a compromise between the two groups, showing both a primary outer hand and a secondary tracked hand. Finally, in the last visualization the hand disappears during the grasping activity. In an experiment, users perform a pick-and-place task for each feedback technique. We use high-fidelity marker-based hand tracking to control the virtual hands in real time. We found that the tracked hand visualizations resulted in better performance; however, the outer hand visualizations were preferred. We also found indications that ownership is higher with the outer hand visualizations.