Title: A kinematic and EMG dataset of online adjustment of reach-to-grasp movements to visual perturbations
Control of reach-to-grasp movements for deft and robust interactions with objects requires rapid sensorimotor updating that enables online adjustments to changing external goals (e.g., perturbations or instability of objects we interact with). Rarely do we appreciate the remarkable coordination in reach-to-grasp until control becomes impaired by neurological injuries such as stroke, neurodegenerative diseases, or even aging. Modeling online control of human reach-to-grasp movements is a challenging problem but fundamental to several domains, including behavioral and computational neuroscience, neurorehabilitation, neural prostheses, and robotics. Currently, there are no publicly available datasets that include online adjustment of reach-to-grasp movements to object perturbations. This work aims to advance modeling efforts of reach-to-grasp movements by making publicly available a large kinematic and EMG dataset of online adjustment of reach-to-grasp movements to instantaneous perturbations of object size and distance, performed in an immersive haptic-free virtual environment (hf-VE). The dataset comprises a large number of perturbation types (10 each for object size and distance), applied at three different latencies after movement onset.
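
As a concrete illustration, here is a minimal Python sketch of how trials from such a dataset might be loaded and grouped by the experimental factors described above (perturbation type and perturbation latency). The file layout and column names ("perturbation_type", "latency_ms") are assumptions for illustration, not the dataset's actual schema; consult the published data descriptor for the real layout.

```python
# Minimal sketch of loading and summarizing trials from a dataset like
# this one. File layout and column names ("perturbation_type",
# "latency_ms") are illustrative assumptions, not the published schema.
import pandas as pd

def load_trial(path: str) -> pd.DataFrame:
    """Load one reach-to-grasp trial (kinematic + EMG channels) from CSV."""
    return pd.read_csv(path)

def summarize(trials: pd.DataFrame) -> pd.DataFrame:
    """Count trials per perturbation type and perturbation latency."""
    return (
        trials.groupby(["perturbation_type", "latency_ms"])  # assumed columns
        .size()
        .rename("n_trials")
        .reset_index()
    )
```
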
Award ID(s):
1804550 1935337
PAR ID:
10357670
Author(s) / Creator(s):
Date Published:
Journal Name:
Scientific Data
Volume:
9
Issue:
23
ISSN:
2052-4463
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Technological advancements and increased access have prompted the adoption of head-mounted-display-based virtual reality (VR) for neuroscientific research, manual skill training, and neurological rehabilitation. Applications that focus on manual interaction within the virtual environment (VE), especially haptic-free VR, critically depend on virtual hand-object collision detection. Knowledge about how multisensory integration related to hand-object collisions affects perception-action dynamics and reach-to-grasp coordination is needed to enhance the immersiveness of interactive VR. Here, we explored whether and to what extent sensory substitution for haptic feedback of hand-object collision (visual, audio, or audiovisual) and collider size (the size of the spherical pointers representing the fingertips) influence reach-to-grasp kinematics. In Study 1, visual, auditory, and combined feedback were compared as sensory substitutes to indicate the successful grasp of a virtual object during reach-to-grasp actions. In Study 2, participants reached to grasp virtual objects using spherical colliders of different diameters to test whether virtual collider size impacts reach-to-grasp. Our data indicate that collider size, but not sensory feedback modality, significantly affected grasping kinematics: larger colliders led to a smaller size-normalized peak aperture. We discuss this finding in the context of a possible influence of spherical collider size on the perception of the virtual object's size and hence on motor planning of reach-to-grasp. Critically, reach-to-grasp spatiotemporal coordination patterns were robust to manipulations of sensory feedback modality and spherical collider size, suggesting that the nervous system adjusted the reach (transport) component commensurately with changes in the grasp (aperture) component. These results have important implications for research, commercial, industrial, and clinical applications of VR.
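
As a rough illustration of the study's key measure, the following Python sketch computes size-normalized peak grip aperture from fingertip trajectories. The array shapes and the normalization by object size are assumptions for illustration, not the authors' analysis code.

```python
# Sketch: size-normalized peak grip aperture, the measure the study
# reports as sensitive to collider size. Input shapes and the
# normalization are illustrative assumptions, not the authors' code.
import numpy as np

def peak_aperture(thumb_xyz: np.ndarray, index_xyz: np.ndarray,
                  object_size: float) -> float:
    """thumb_xyz, index_xyz: (T, 3) fingertip trajectories in the same
    units as object_size. Returns peak thumb-index distance divided by
    object size."""
    aperture = np.linalg.norm(index_xyz - thumb_xyz, axis=1)  # (T,)
    return aperture.max() / object_size
```
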
  2. Orienting a food item held in the hand to withdraw it and optimally place it in the mouth for eating (withdraw-to-eat) is mediated by vision in catarrhine anthropoids and by nonvisual strategies in strepsirrhine primates. The present study asks whether vision contributes to withdraw-to-eat movements in a platyrrhine anthropoid, Cebus imitator, a member of a monophyletic primate suborder whose stem group diverged from catarrhines about 40 million years ago. Cebus imitator's gaze and hand use while foraging for fruit were examined in its fine-branch niche, the terminal branches of trees. Video of reach, grasp, and withdraw-to-eat movements with associated gaze was examined frame by frame to assess food manipulation and its sensory control. Cebus imitator uses vision and touch to reach for and grasp food items with precision and whole-hand grasps. It uses vision to orient food items held in hand into a precision grip, and its withdraw-to-eat is assisted by a vertically oriented hand. The conjoint use of vision, a precision grasp, and hand posture, together with a central representation of object control, likely originated in stem anthropoids and derived from the staged evolution of the visual manipulation of food and other objects.

     
  3. This paper presents an online data collection method that captures human intuition about which grasp types are preferred for different fundamental object shapes and sizes. Survey questions are based on an adopted taxonomy that combines grasp pre-shape, approach, wrist orientation, and object shape, orientation, and size, covering a large swathe of common grasps. For example, the survey identifies at what object height or width (normalized by robot hand size) a human prefers a two-finger precision grasp versus a three-finger power grasp. This information is represented as a confidence-interval-based polytope in the object shape space. The result is a database that can be used to quickly find potential pre-grasps that are likely to work, given an estimate of the object's shape and size.
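
To make the lookup idea concrete, here is a hedged Python sketch of how such a database might be queried: object dimensions, normalized by hand size, map to a preferred grasp type. The thresholds and grasp names below are invented placeholders, not the survey's actual results.

```python
# Sketch of querying a grasp-preference database: map an object's
# normalized dimensions to the grasp whose preference region contains
# them. Thresholds and names are placeholders, not the paper's data.
def preferred_grasp(width: float, height: float, hand_size: float) -> str:
    w = width / hand_size   # normalize object width by robot hand size
    h = height / hand_size  # normalize object height by robot hand size
    if w < 0.3 and h < 0.5:      # small objects: two-finger precision
        return "two_finger_precision"
    if w < 0.8:                  # mid-size objects: three-finger power
        return "three_finger_power"
    return "full_hand_power"     # large objects: whole-hand power grasp
```
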
  4. Hasegawa, Yasuhisa (Ed.)
    Advancing robotic grasping and manipulation requires the ability to test algorithms and/or train learning models on large numbers of grasps. Toward the goal of more advanced grasping, we present the Grasp Reset Mechanism (GRM), a fully automated apparatus for conducting large-scale grasping trials. The GRM automates the process of resetting a grasping environment, repeatably placing an object at a fixed location and a controllable 1-D orientation. It also collects data and swaps between multiple objects, enabling robust dataset collection with no human intervention. We also present a standardized state-machine interface for control, which allows most manipulators to be integrated with minimal effort. In addition to the physical design and corresponding software, we include a dataset of 1,020 grasps, created with a Kinova Gen3 robot arm and a Robotiq 2F-85 Adaptive Gripper, both to enable training of learning models and to demonstrate the capabilities of the GRM. The dataset spans four objects and a variety of orientations; manipulator states, object pose, video, and grasp success data are provided for every trial.
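
As an illustration of the state-machine control pattern described above, here is a minimal Python sketch of a GRM-style reset cycle. The state names and transitions are assumptions for illustration; the actual interface is defined by the GRM software.

```python
# Sketch of a state-machine control loop in the spirit of the GRM's
# standardized interface. States and transitions are illustrative
# assumptions, not the GRM's actual protocol.
from enum import Enum, auto

class GRMState(Enum):
    RESET_OBJECT = auto()  # place object at a fixed pose / 1-D orientation
    READY = auto()         # signal the manipulator that the scene is set
    GRASP_TRIAL = auto()   # manipulator executes the grasp attempt
    RECORD = auto()        # store manipulator states, pose, video, success

def step(state: GRMState) -> GRMState:
    """Advance the reset cycle by one transition."""
    transitions = {
        GRMState.RESET_OBJECT: GRMState.READY,
        GRMState.READY: GRMState.GRASP_TRIAL,
        GRMState.GRASP_TRIAL: GRMState.RECORD,
        GRMState.RECORD: GRMState.RESET_OBJECT,  # loop for the next trial
    }
    return transitions[state]
```
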