Title: A Haptic Interface for the Teleoperation of Extensible Continuum Manipulators
We describe a novel haptic interface designed specifically for the teleoperation of extensible continuum manipulators. The proposed device is based on, and extends to the haptic domain, a kinematically similar input device for continuum manipulators called the MiniOct. This letter describes the physical design of the new device, the method of creating impedance-type haptic feedback for users, and some of the requirements for implementing this device in a bilateral teleoperation scheme for continuum robots. We report a series of initial experiments to validate the operation of the system under both simulated and real-time conditions. The experimental results show that a user can identify the direction of planar obstacles from the feedback in both virtual and physical environments. Finally, we discuss the challenges of providing feedback to an operator about the state of a teleoperated continuum manipulator.
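The impedance-type feedback described above can be illustrated with a generic virtual spring-damper rendering law. This is a minimal sketch, not the device's actual controller; the gains `K` and `B` and the function name are placeholder assumptions.

```python
import numpy as np

def impedance_feedback(x, x_ref, v, K=50.0, B=1.0):
    """Render an impedance-type feedback force F = K*(x_ref - x) - B*v.

    A minimal virtual spring-damper sketch: the rendered force pulls the
    user's hand toward a reference configuration x_ref while damping
    velocity v. K and B are placeholder gains, not those of the device
    described in the abstract.
    """
    return K * (np.asarray(x_ref) - np.asarray(x)) - B * np.asarray(v)
```

For example, a 0.1 m displacement from the reference with zero velocity yields a 5 N restoring force with the placeholder gains above.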
Authors:
Award ID(s):
1718075
Publication Date:
NSF-PAR ID:
10169069
Journal Name:
IEEE Robotics and Automation Letters
Volume:
5
Issue:
2
Page Range or eLocation-ID:
1875-1882
ISSN:
2377-3766
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a novel haptic teleoperation approach that considers not only the safety but also the stability of a teleoperation system. Specifically, we build upon previous work on haptic shared control, which generates a reference haptic feedback that helps the human operator safely navigate the robot without taking away their control authority. Crucially, in this approach the force rendered to the user is not directly reflected in the motion of the robot (which is still directly controlled by the user); however, previous work in the area neglected to consider the possible instabilities in the feedback loop generated by a user who over-responds to the haptic force. In this paper we introduce a differential constraint on the rendered force that makes the system finite-gain L2 stable; the constraint results in a Quadratically Constrained Quadratic Program (QCQP), for which we provide a closed-form solution. Our constraint is related to, but less restrictive than, the typical passivity constraint used in previous literature. We conducted an experimental simulation in which a human operator flies a UAV near an obstacle to evaluate the proposed method.
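The single-constraint QCQP mentioned above can be sketched in its simplest isotropic form: project a desired rendered force onto a norm ball. This is a hedged illustration only; the paper's actual differential constraint is more general, and the function name and bound `c` are hypothetical.

```python
import numpy as np

def constrain_rendered_force(f_des, c):
    """Closed-form solution of the isotropic QCQP
        min ||f - f_des||^2  s.t.  f^T f <= c.

    When the constraint is inactive the desired force is returned
    unchanged; otherwise f_des is radially scaled back onto the
    constraint boundary. A simplified stand-in for the paper's
    stability constraint, not its actual formulation.
    """
    norm = np.linalg.norm(f_des)
    if norm ** 2 <= c:
        return np.array(f_des, dtype=float)      # constraint inactive
    return np.asarray(f_des) * (np.sqrt(c) / norm)  # scale onto boundary
```

The closed form exists because the objective is a Euclidean projection and the single quadratic constraint defines a ball.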
  2. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning is transparent to the human teacher. Visual displays that show the robot’s uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw
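The Weber fraction reported above is the standard psychophysical ratio of just-noticeable difference (JND) to base stimulus intensity, averaged across tested levels. A minimal sketch, with illustrative numbers unrelated to the study's data:

```python
def mean_weber_fraction(base_levels, jnds):
    """Average Weber fraction dI/I across stimulus levels.

    base_levels: base stimulus intensities I tested.
    jnds: the just-noticeable difference dI measured at each level.
    Returns the mean of dI/I. Example values below are illustrative,
    not the study's measurements.
    """
    fractions = [d / i for i, d in zip(base_levels, jnds)]
    return sum(fractions) / len(fractions)
```

A Weber fraction of 11.4% means users could reliably distinguish inflation levels differing by roughly 11% of the base level.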
  3. A bilateral teleoperated rehabilitation cycling system is developed for people with movement impairments due to various neurological disorders. A master hand-cycling device is used by the operator to set the desired position and cadence of a lower-body functional electrical stimulation (FES) controlled and motor-assisted recumbent cycle. The master device also uses kinematic haptic feedback to reflect the lower-body cycle's dynamic response to the operator. To accommodate the unknown nonlinear dynamics inherent to physical human-machine interaction (pHMI), admittance controllers were developed to indirectly track desired interaction torques for both the haptic feedback device and the lower-body cycle. A robust position and cadence controller, which is only active within the regions of the crank cycle where FES produces sufficient torque, was used to determine the FES intensity. A Lyapunov analysis is used to prove that the robust FES controller yields global exponential tracking of the desired position and cadence set by the master device within the FES stimulation regions. Outside of the FES regions, the admittance controllers at the hands and legs work in conjunction to produce the desired performance. Both admittance controllers were analyzed over the entire crank cycle and found to be input/output strictly passive and globally exponentially stable in the absence of human effort, despite the uncertain nonlinear dynamics.
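The admittance control idea above maps a measured interaction torque to a commanded motion through a virtual mass-damper model, so the device yields compliantly to the operator. A minimal one-degree-of-freedom Euler-integration sketch, with placeholder parameters rather than the cited system's values:

```python
def admittance_step(x, v, tau_ext, M=1.0, B=2.0, dt=0.01):
    """One Euler step of the 1-DoF admittance model M*a + B*v = tau_ext.

    Solves for acceleration from the measured interaction torque
    tau_ext, then integrates to produce the commanded velocity and
    position. M, B, and dt are placeholder values for illustration.
    """
    a = (tau_ext - B * v) / M   # virtual-dynamics acceleration
    v_new = v + a * dt          # commanded velocity
    x_new = x + v_new * dt      # commanded position
    return x_new, v_new
```

Under a constant applied torque the commanded velocity rises toward the steady-state value tau_ext / B, which is the compliant behavior an admittance controller is meant to produce.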
  4. In this paper, we design and evaluate a novel form of visually-simulated haptic feedback cue for communicating weight in robot teleoperation. We propose that a visuo-proprioceptive cue results from inconsistencies created between the user's visual and proprioceptive senses when the robot's movement differs from the movement of the user's input. In a user study where participants teleoperate a six-DoF robot arm, we demonstrate the feasibility of using such a cue for communicating weight in four telemanipulation tasks to enhance user experience and task performance.
  5. The goal of this article is to enable robots to perform robust task execution following human instructions in partially observable environments. A robot’s ability to interpret and execute commands is fundamentally tied to its semantic world knowledge. Commonly, robots use exteroceptive sensors, such as cameras or LiDAR, to detect entities in the workspace and infer their visual properties and spatial relationships. However, semantic world properties are often visually imperceptible. We posit the use of non-exteroceptive modalities including physical proprioception, factual descriptions, and domain knowledge as mechanisms for inferring semantic properties of objects. We introduce a probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes to infer the meaning of instructions and execute the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence. In addition, we provide a method that allows the robot to communicate knowledge dissonance back to the human as a means of correcting errors in the operator’s world model. Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding and generation. We present experiments on manipulators for tasks that require inference over partially observed semantic properties, and evaluate our framework’s ability to exploit expressed information and knowledge bases to facilitate convergence, and generate statements to correct declared facts that were observed to be inconsistent with the robot’s estimate of object properties.
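The multimodal belief fusion described above can be sketched, in its simplest form, as a Bayesian update over a discrete latent attribute, multiplying a prior by independent per-modality likelihoods and renormalizing. This is a generic stand-in, not the paper's model; the state names and numbers are hypothetical.

```python
def fuse_observations(prior, likelihoods):
    """Fuse independent observation likelihoods into a posterior belief
    over a discrete latent attribute (e.g. a container being 'full' or
    'empty', which vision alone may not resolve).

    prior: dict mapping state -> prior probability.
    likelihoods: list of dicts mapping state -> P(obs | state), one per
    modality (vision, haptics, language). Assumes conditional
    independence of modalities given the state.
    """
    post = dict(prior)
    for lik in likelihoods:
        for s in post:
            post[s] *= lik[s]
    z = sum(post.values())          # normalizing constant
    return {s: p / z for s, p in post.items()}
```

Contradictory evidence simply down-weights the posterior rather than breaking the update, which is the robustness property the abstract highlights.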