
Title: Supporting Perception of Weight through Motion-induced Sensory Conflicts in Robot Teleoperation
In this paper, we design and evaluate a novel form of visually simulated haptic feedback for communicating weight in robot teleoperation. We propose a visuo-proprioceptive cue that results from the inconsistency created between the user's visual and proprioceptive senses when the robot's movement differs from the movement of the user's input. In a user study where participants teleoperate a six-DoF robot arm, we demonstrate the feasibility of using such a cue to communicate weight in four telemanipulation tasks, enhancing user experience and task performance.
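To make the idea concrete, here is a minimal sketch of how such a cue could be rendered, assuming (our assumption, not a detail stated in the abstract) that the sensory conflict is created by attenuating the robot's motion relative to the user's input in proportion to the simulated weight:

```python
# Hypothetical sketch of a motion-induced sensory-conflict weight cue.
# Assumption (not from the abstract): the cue is rendered by attenuating
# the mapping from user input motion to robot motion in proportion to the
# simulated weight of the grasped object.

def attenuated_robot_delta(user_delta, simulated_weight_kg,
                           max_weight_kg=5.0, min_gain=0.3):
    """Scale the commanded robot displacement so that heavier objects make
    the robot lag farther behind the user's hand, creating a
    visuo-proprioceptive conflict that can be perceived as weight."""
    # Clamp the weight into the supported range, then map it linearly to a
    # motion gain between 1.0 (weightless) and min_gain (heaviest object).
    w = max(0.0, min(simulated_weight_kg, max_weight_kg))
    gain = 1.0 - (1.0 - min_gain) * (w / max_weight_kg)
    return [gain * d for d in user_delta]

# Example: a 10 cm upward hand motion while "holding" a 4 kg object
# commands only ~4.4 cm of robot motion, so vision reports less movement
# than proprioception expects.
print(attenuated_robot_delta([0.0, 0.0, 0.10], simulated_weight_kg=4.0))
```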
Authors:
Award ID(s):
1830242
Publication Date:
NSF-PAR ID:
10172191
Journal Name:
HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
Page Range or eLocation-ID:
509 to 517
Sponsoring Org:
National Science Foundation
More Like This
  1. Proprioception, or body awareness, is an essential sense that aids in the neural control of movement. Proprioceptive impairments are commonly found in people with neurological conditions such as stroke and Parkinson's disease, and such impairments are known to impact the patient's quality of life. Robot-aided proprioceptive training has been proposed and tested to improve sensorimotor performance. However, such robot-aided exercises are implemented much like many physical rehabilitation exercises, requiring task-specific and repetitive movements from patients. The monotonous nature of such repetitive exercises can reduce patient motivation, thereby impacting treatment adherence and therapy gains. Gamification of exercises can make physical rehabilitation more engaging and rewarding. In this work, we discuss our ongoing efforts to develop a game that can accompany a robot-aided wrist proprioceptive training exercise.
  2. Understanding the human motor control strategy during physical interaction tasks is crucial for developing future robots for physical human–robot interaction (pHRI). In physical human–human interaction (pHHI), small interaction forces are known to convey intent between partners for effective motor communication. The aim of this work is to investigate what affects a human's sensitivity to externally applied interaction forces. The hypothesis is that one way small interaction forces are sensed is through the movement of the arm and the resulting proprioceptive signals. A pHRI setup was used to apply small interaction forces to the hand of seated participants in one of four directions, while the participants were asked to identify the direction of the push while blindfolded. The results show that participants' ability to correctly report the direction of the interaction force was lower with low interaction force as well as with high muscle contraction. Sensitivity to the interaction-force direction increased with the radial displacement of the participant's hand from the initial position: the further they moved, the more correct their responses were. It was also observed that the estimated stiffness of the arm varies with the level of muscle contraction and robot interaction force.
  3. For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on a user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object's center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (N=9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla environment and 8% in the cluttered one), and recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they knew it could be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces. (A sketch of the directional-feedback idea appears after this list.)
  4. Smart mobile devices have become an integral part of people's lives, and users often input sensitive information on these devices. However, various side-channel attacks against mobile devices pose a plethora of serious threats to user security and privacy. To mitigate these attacks, we present a novel secure Back-of-Device (BoD) input system, SecTap, for mobile devices. To use SecTap, a user tilts her mobile device to move a cursor on the keyboard and taps the back of the device to secretly input data. We design a tap detection method that processes the stream of accelerometer readings to identify the user's taps in real time. The orientation sensor of the mobile device is used to control the direction and the speed of cursor movement. We also propose an obfuscation technique to randomly and effectively accelerate the cursor movement. This technique not only preserves input performance but also keeps an adversary from inferring the tapped keys. Extensive empirical experiments were conducted on different smartphones to demonstrate the usability and security of SecTap on both Android and iOS platforms. (A sketch of the tap-detection idea appears after this list.)
  5. Research on robotic wheelchairs covers a broad range from complete autonomy to shared autonomy to manual navigation with a joystick or other means. Shared autonomy is valuable because it allows the user and the robot to complement each other, to correct each other's mistakes, and to avoid collisions. In this paper, we present an approach that can learn to replicate path selection according to the wheelchair user's individual, often subjective, criteria in order to reduce the number of times the user has to intervene during automatic navigation. This is achieved by learning to rank paths using a support vector machine trained on selections made by the user in a simulator. If the classifier's confidence in the top-ranked path is high, the path is executed without requesting confirmation from the user; otherwise, the choice is deferred to the user. Simulations and laboratory experiments using two path generation strategies demonstrate the effectiveness of our approach. (A sketch of the confidence-based deferral appears after this list.)
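For item 3, the following is a minimal sketch of turning a predicted object center into directional guidance; the function name, tolerance, and hint wording are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: map a model's predicted object center to simple
# directional feedback so the user can re-center the object in the frame.

def direction_feedback(center_xy, frame_wh, tolerance=0.10):
    """Map a predicted object center (pixels) to a spoken/haptic hint.
    tolerance: fraction of the frame treated as already centered."""
    x, y = center_xy
    w, h = frame_wh
    dx = x / w - 0.5   # negative: object is left of the frame center
    dy = y / h - 0.5   # negative: object is above the frame center
    hints = []
    if abs(dx) > tolerance:
        hints.append("pan left" if dx < 0 else "pan right")
    if abs(dy) > tolerance:
        hints.append("tilt up" if dy < 0 else "tilt down")
    return " and ".join(hints) if hints else "hold steady"

# Example: an object detected at (120, 400) in a 640x480 frame is to the
# lower left, so the user hears "pan left and tilt down".
print(direction_feedback((120, 400), (640, 480)))
```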
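For item 4, here is a minimal sketch of spike-based tap detection over an accelerometer stream; the threshold, baseline window, and refractory period are assumptions rather than SecTap's actual parameters.

```python
# Hypothetical sketch: a back-of-device tap appears as a short spike in
# the accelerometer magnitude relative to a running baseline.

from collections import deque

class TapDetector:
    def __init__(self, threshold=2.5, refractory=5):
        self.threshold = threshold      # spike height above baseline (m/s^2)
        self.refractory = refractory    # samples to ignore after a detection
        self.window = deque(maxlen=20)  # recent magnitudes for the baseline
        self.cooldown = 0

    def update(self, ax, ay, az):
        """Feed one accelerometer sample; return True when a tap is detected."""
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        baseline = sum(self.window) / len(self.window) if self.window else mag
        self.window.append(mag)
        if self.cooldown > 0:
            self.cooldown -= 1
            return False
        if mag - baseline > self.threshold:
            self.cooldown = self.refractory
            return True
        return False

# Example: a stream of quiet samples with one sharp spike yields one tap.
det = TapDetector()
samples = [(0, 0, 9.8)] * 10 + [(0, 0, 14.0)] + [(0, 0, 9.8)] * 5
print([det.update(*s) for s in samples].count(True))  # -> 1
```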
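For item 5, here is a minimal sketch of confidence-based deferral on top of a learned path score; the paper trains a ranking SVM on the user's selections, while the weight vector and margin rule below are placeholders.

```python
# Hypothetical sketch: rank candidate paths by a learned linear score and
# defer to the user when the top path's margin over the runner-up is small.

import numpy as np

# Placeholder weights standing in for a ranking SVM trained on the user's
# path selections in the simulator (assumption: 3 features per path).
w = np.array([0.8, -0.5, 0.3])

def choose_path(path_features, margin=0.5):
    """path_features: one feature row per candidate path (n_paths x n_features).
    Returns (index_of_best_path, needs_user_confirmation)."""
    scores = path_features @ w            # higher score = more preferred path
    order = np.argsort(scores)[::-1]
    best, runner_up = order[0], order[1]
    # A small gap between the top two scores means low confidence, so the
    # choice is deferred to the wheelchair user instead of auto-executing.
    defer = (scores[best] - scores[runner_up]) < margin
    return int(best), bool(defer)

paths = np.array([[1.0, 0.2, 0.5],      # candidate path feature vectors
                  [0.9, 0.8, 0.1],
                  [0.2, 0.1, 0.9]])
print(choose_path(paths))  # -> (0, True): a close call, so ask the user
```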