

Title: Hand pose selection in a bimanual fine-manipulation task
Many daily tasks involve the collaboration of both hands. Humans dexterously adjust hand poses and modulate the forces exerted by the fingers in response to task demands. Hand pose selection has been intensively studied in unimanual tasks, but little work has investigated bimanual tasks. This work examines hand pose selection in a bimanual high-precision screwing task taken from watchmaking. Twenty right-handed subjects unscrewed a screw from the watch face with a screwdriver in two conditions. Results showed that although subjects used similar hand poses across steps within the same experimental condition, the hand poses differed significantly between the two conditions. In the free-base condition, subjects needed to stabilize the watch face on the table. The role distribution across hands was strongly influenced by hand dominance: the dominant hand manipulated the tool, whereas the nondominant hand controlled the additional degrees of freedom that might impair performance. In contrast, in the fixed-base condition, the watch face was stationary. Subjects used both hands even though a single hand would have been sufficient. Importantly, hand poses decoupled the control of task-demanded force and torque across hands through virtual fingers that grouped multiple fingers into functional units. This preference for a bimanual over a unimanual control strategy could be an effort to reduce variability caused by mechanical couplings and to alleviate intrinsic sensorimotor processing burdens. To afford analysis of this variety of observations, a novel graphical matrix-based representation of the distribution of hand pose combinations was developed. Atypical hand poses that are not documented in extant hand taxonomies are also included.

NEW & NOTEWORTHY We study hand pose selection in bimanual fine motor skills. To understand how roles and control variables are distributed across the hands and fingers, we compared two conditions when unscrewing a screw from a watch face. When the watch face needed positioning, role distribution was strongly influenced by hand dominance; when the watch face was stationary, a variety of hand pose combinations emerged. Control of independent task demands is distributed either across hands or across distinct groups of fingers.
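The matrix-based representation mentioned in the abstract lends itself to a simple computational illustration. Below is a minimal, hypothetical Python sketch of one way such a matrix could be built: pose labels, trial observations, and plotting choices are illustrative assumptions, not the paper's taxonomy or data. Each cell tallies how often a given (dominant-hand pose, nondominant-hand pose) combination was observed.

```python
# Hypothetical illustration: tally how often each (dominant-hand pose,
# nondominant-hand pose) combination is observed, as a matrix heat map.
import numpy as np
import matplotlib.pyplot as plt

# Illustrative pose labels; the paper's taxonomy differs.
dominant_poses = ["precision grip", "tripod", "lateral pinch"]
nondominant_poses = ["palm support", "frame pinch", "idle"]

# Illustrative observations: one (dominant, nondominant) pair per trial.
observations = [
    ("precision grip", "frame pinch"),
    ("precision grip", "frame pinch"),
    ("tripod", "palm support"),
    ("lateral pinch", "idle"),
]

counts = np.zeros((len(dominant_poses), len(nondominant_poses)), dtype=int)
for d, nd in observations:
    counts[dominant_poses.index(d), nondominant_poses.index(nd)] += 1

fig, ax = plt.subplots()
ax.imshow(counts, cmap="Blues")
ax.set_xticks(range(len(nondominant_poses)))
ax.set_xticklabels(nondominant_poses, rotation=45)
ax.set_yticks(range(len(dominant_poses)))
ax.set_yticklabels(dominant_poses)
ax.set_xlabel("Nondominant-hand pose")
ax.set_ylabel("Dominant-hand pose")
for i in range(counts.shape[0]):
    for j in range(counts.shape[1]):
        ax.text(j, i, counts[i, j], ha="center", va="center")
plt.tight_layout()
plt.show()
```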
Award ID(s):
1825942
NSF-PAR ID:
10293815
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Neurophysiology
Volume:
126
Issue:
1
ISSN:
0022-3077
Page Range / eLocation ID:
195 to 212
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The extent to which hand dominance may influence how each agent contributes to inter-personal coordination remains unknown. In the present study, right-handed human participants performed object balancing tasks either in dyadic conditions with each agent using one hand (left or right), or in bimanual conditions where each agent performed the task individually with both hands. We found that object load was shared between two hands more asymmetrically in dyadic than single-agent conditions. However, hand dominance did not influence how two hands shared the object load. In contrast, hand dominance was a major factor in modulating hand vertical movement speed. Furthermore, the magnitude of internal force produced by two hands against each other correlated with the synchrony between the two hands’ movement in dyads. This finding supports the important role of internal force in haptic communication. Importantly, both internal force and movement synchrony were affected by hand dominance of the paired participants. Overall, these results demonstrate, for the first time, that pairing of one dominant and one non-dominant hand may promote asymmetrical roles within a dyad during joint physical interactions. This appears to enable the agent using the dominant hand to actively maintain effective haptic communication and task performance.
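For concreteness, here is a minimal Python sketch of the kinds of dyadic measures this abstract refers to. The definitions of load-sharing asymmetry, internal force, and movement synchrony below are generic assumptions for two opposing contacts, not the study's exact computations.

```python
# A minimal sketch (assumed definitions, not the study's analysis):
# load sharing, internal force, and synchrony from per-hand time series.
import numpy as np

def load_share_asymmetry(fz_left, fz_right):
    """Asymmetry of vertical load sharing: 0 = equal, 1 = one hand only."""
    total = fz_left + fz_right
    return np.abs(fz_left - fz_right) / np.where(total == 0, np.nan, total)

def internal_force(fx_left, fx_right):
    """Compressive force the hands exert against each other along the
    object axis: the magnitude of the opposing components that cancel."""
    # fx_left > 0 pushes toward the right hand; fx_right > 0 pushes left.
    return np.minimum(np.clip(fx_left, 0, None), np.clip(fx_right, 0, None))

def movement_synchrony(vz_left, vz_right):
    """Zero-lag correlation of the two hands' vertical velocities."""
    return np.corrcoef(vz_left, vz_right)[0, 1]

# Illustrative signals: two nearly in-phase vertical velocity traces.
t = np.linspace(0, 5, 500)
vz_l, vz_r = np.sin(2 * np.pi * t), np.sin(2 * np.pi * t + 0.2)
print(movement_synchrony(vz_l, vz_r))  # close to 1 = highly synchronous
```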
  2. An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right- and left-hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System’s surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, and thus inspire future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
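As an illustration of geometric descriptors of this kind, the following Python sketch classifies the direction mode (together/parallel/away) and symmetry mode (mirror/point) from instantaneous hand positions and velocities. The descriptor definitions and thresholds are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (assumed descriptors, not the paper's exact method):
# classify bimanual direction and symmetry from hand velocities.
import numpy as np

def direction_mode(p_left, p_right, v_left, v_right, eps=1e-3):
    """Together / away / parallel, from the rate of change of the
    inter-hand distance."""
    r = p_right - p_left
    d_dist = np.dot(r, v_right - v_left) / (np.linalg.norm(r) + 1e-9)
    if d_dist > eps:
        return "away"
    if d_dist < -eps:
        return "together"
    return "parallel"

def symmetry_mode(v_left, v_right, axis=0):
    """Mirror symmetry: the velocity component along the mirror axis is
    opposite, the rest equal. Point symmetry: velocities fully opposite."""
    v_mirror = v_right.copy()
    v_mirror[axis] = -v_mirror[axis]
    if np.allclose(v_left, v_mirror, atol=1e-2):
        return "mirror"
    if np.allclose(v_left, -v_right, atol=1e-2):
        return "point"
    return "asymmetric"

# Example: hands approaching along x while mirrored about the y-z plane.
pl, pr = np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
vl, vr = np.array([0.05, 0.02, 0.0]), np.array([-0.05, 0.02, 0.0])
print(direction_mode(pl, pr, vl, vr), symmetry_mode(vl, vr))
# -> together mirror
```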
  3. Many manipulation tasks, such as placement or within-hand manipulation, require knowledge of the object’s pose relative to a robot hand. The task is difficult when the hand significantly occludes the object. It is especially hard for adaptive hands, for which it is not easy to detect the fingers’ configuration. In addition, RGB-only approaches face issues with texture-less objects or when the hand and the object look similar. This paper presents a depth-based framework that aims for robust pose estimation and short response times. The approach detects the adaptive hand’s state via an efficient parallel search for the highest overlap between the hand’s model and the point cloud. The hand’s point cloud is pruned, and robust global registration is performed to generate object pose hypotheses, which are then clustered. False hypotheses are pruned via physical reasoning. The quality of the remaining poses is evaluated by their agreement with the observed data. Extensive evaluation on synthetic and real data demonstrates the accuracy and computational efficiency of the framework when applied to challenging, highly occluded scenarios for different object types. An ablation study identifies how the framework’s components contribute to performance. This work also provides a dataset for in-hand 6D object pose estimation. Code and dataset are available at: https://github.com/wenbowen123/icra20-hand-object-pose
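The final hypothesis-scoring step lends itself to a compact illustration. The following Python sketch ranks candidate object poses by how well the transformed model agrees with the observed point cloud; it is a generic agreement score under assumed inputs (Nx3 point arrays in a common frame), not the paper's implementation.

```python
# A minimal sketch (assumptions noted above, not the paper's code):
# rank object-pose hypotheses by agreement with observed depth points.
import numpy as np
from scipy.spatial import cKDTree

def score_pose(model_pts, scene_pts, R, t, inlier_dist=0.005):
    """Fraction of transformed model points that have a scene point
    within inlier_dist (meters); higher means better agreement."""
    transformed = model_pts @ R.T + t
    dists, _ = cKDTree(scene_pts).query(transformed)
    return np.mean(dists < inlier_dist)

def best_hypothesis(model_pts, scene_pts, hypotheses):
    """Pick the (R, t) hypothesis with the highest agreement score."""
    return max(hypotheses, key=lambda h: score_pose(model_pts, scene_pts, *h))

# Illustrative usage with synthetic data.
rng = np.random.default_rng(0)
model = rng.uniform(-0.05, 0.05, size=(200, 3))
scene = model + np.array([0.1, 0.0, 0.0])  # object translated 10 cm in x
hyps = [(np.eye(3), np.array([0.1, 0.0, 0.0])),
        (np.eye(3), np.array([0.0, 0.1, 0.0]))]
R, t = best_hypothesis(model, scene, hyps)
print(t)  # -> [0.1 0.  0. ]
```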
  4. Parkinson’s disease (PD) is a progressive neurological movement disorder affecting more than 10 million people globally. PD demands longitudinal assessment of symptoms to monitor disease progression and manage treatment. Existing assessment methods require patients with PD (PwPD) to visit a clinic every 3–6 months to perform movement assessments conducted by trained clinicians. However, periodic visits pose barriers, as PwPD have limited mobility and healthcare costs increase. Hence, there is a strong demand for telemedicine technologies for assessing PwPD in remote settings. In this work, we present an in-home telemedicine kit, named iTex (intelligent Textile), a patient-centered design for accessible tele-assessment of movement symptoms in people with PD. iTex is composed of a pair of smart textile gloves connected to a customized embedded tablet. The iTex gloves integrate flex sensors on the fingers and an inertial measurement unit (IMU), and have an onboard microcontroller unit with IoT (Internet of Things) capabilities, including data storage and wireless communication. The gloves acquire sensor data and transmit it wirelessly to monitor various hand movements, such as finger tapping, hand opening and closing, and other movement tasks. The gloves connect to a customized tablet computer acting as an IoT device, configured to host a wireless access point, an MQTT broker, and a time-series database server. The tablet also provides a patient-centered interface to guide PwPD through the movement exam protocol. The system was deployed with four PwPD, who used iTex at home independently for a week, performing the test before and after medication intake. We then analyzed the in-home data and created a feature set. The study found that the iTex gloves were capable of collecting movement-related data and distinguishing between pre-medication and post-medication cases in a majority of the participants. The IoT infrastructure performed robustly in home settings and posed minimal barriers to the assessment exams and data communication with a remote server. In the post-study survey, all four participants said that the system was easy to use and posed minimal barriers to performing the test independently. The present findings indicate that the iTex glove system has the potential for periodic and objective assessment of PD motor symptoms in remote settings.
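As a concrete illustration of the IoT data path this abstract describes, the following Python sketch publishes one glove sample as JSON over MQTT using the paho-mqtt library (version 2.0 or later). The broker address, topic name, and payload fields are hypothetical, not the iTex firmware.

```python
# A minimal sketch (hypothetical topic and payload; assumes paho-mqtt >= 2.0
# and an MQTT broker on the tablet access point at 192.168.4.1): publish
# one sample of glove flex/IMU data as an iTex-like glove node might.
import json
import time
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.4.1"   # assumed tablet access-point address
TOPIC = "itex/glove/right"    # hypothetical topic

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect(BROKER_HOST, 1883)
client.loop_start()

sample = {
    "t": time.time(),
    "flex": [512, 498, 505, 520, 515],   # one ADC reading per finger
    "accel": [0.01, -0.02, 9.79],        # m/s^2 from the IMU
    "gyro": [0.001, 0.000, -0.002],      # rad/s from the IMU
}
client.publish(TOPIC, json.dumps(sample), qos=1)

client.loop_stop()
client.disconnect()
```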
  5. Objective: Robust neural decoding of intended motor output is crucial to enable intuitive control of assistive devices, such as robotic hands, to perform daily tasks. Few existing neural decoders can predict kinetic and kinematic variables simultaneously. The current study developed a continuous neural decoding approach that can concurrently predict fingertip forces and joint angles of multiple fingers. Methods: We obtained motoneuron firing activities by decomposing high-density electromyogram (HD EMG) signals of the extrinsic finger muscles. The identified motoneurons were first grouped and then refined specific to each finger (index or middle) and task (finger force and dynamic movement) combination. The refined motoneuron groups (separation matrix) were then applied directly to new EMG data in real time involving both finger force and dynamic movement tasks produced by both fingers. EMG-amplitude-based prediction was also performed as a comparison. Results: We found that the newly developed decoding approach outperformed the EMG-amplitude method for both finger force and joint angle estimation, with a lower prediction error (force: 3.47±0.43 vs. 6.64±0.69% MVC; joint angle: 5.40±0.50° vs. 12.8±0.65°) and a higher correlation (force: 0.75±0.02 vs. 0.66±0.05; joint angle: 0.94±0.01 vs. 0.5±0.05) between the estimated and recorded motor output. The performance was also consistent for both fingers. Conclusion: The developed neural decoding algorithm allowed us to accurately and concurrently predict finger forces and joint angles of multiple fingers in real time. Significance: Our approach can enable intuitive interactions with assistive robotic hands and allow the performance of dexterous hand skills involving both force control and dynamic movement control tasks.
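The decoding pipeline can be illustrated with a small sketch. The following Python code applies a previously identified separation matrix to new HD EMG, smooths the resulting source activity into a neural drive, and maps it to force with a generic linear readout. The array shapes, smoothing window, and readout are assumptions for illustration, not the authors' algorithm.

```python
# A minimal sketch (assumed shapes and a generic linear readout, not the
# authors' exact pipeline): separation matrix -> source activity ->
# smoothed neural drive -> fingertip force estimate.
import numpy as np

def decode_sources(emg, separation_matrix):
    """emg: (channels, samples); separation_matrix: (units, channels).
    Returns per-unit source activity, shape (units, samples)."""
    return separation_matrix @ emg

def smoothed_drive(sources, win=400):
    """Moving-average neural drive per unit (box window of `win` samples)."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, "same"), 1, sources)

def predict_force(drive, weights, bias):
    """Generic linear readout from per-unit drive to force (%MVC)."""
    return weights @ drive + bias

# Illustrative shapes: 64 channels, 2048 samples, 10 motor units.
rng = np.random.default_rng(0)
emg = rng.standard_normal((64, 2048))
B = rng.standard_normal((10, 64))          # separation matrix from training
drive = smoothed_drive(decode_sources(emg, B))
force = predict_force(drive, rng.standard_normal(10) * 0.1, 5.0)
print(force.shape)  # -> (2048,), one force estimate per sample
```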