Title: The Platyrrhine Primate Cebus imitator Uses Gaze to Manipulate and Withdraw Food to the Mouth
Orienting a food item held in the hand to withdraw and optimally place it in the mouth for eating (withdraw-to-eat) is mediated by vision in catarrhine anthropoids and by nonvisual strategies in strepsirrhine primates. The present study asks whether vision contributes to withdraw-to-eat movements in a platyrrhine anthropoid, Cebus imitator, a member of a monophyletic primate suborder whose stem group diverged from catarrhines about 40 million years ago. Cebus imitator's gaze and hand use while foraging for fruit were examined in its fine-branch niche, the terminal branches of trees. Video of reach, grasp, and withdraw-to-eat movements, with associated gaze, was examined frame by frame to assess food manipulation and its sensory control. Cebus imitator uses vision and touch to reach for and grasp food items with precision and whole-hand grasps. It uses vision to orient food items held in hand into a precision grip, and its withdraw-to-eat is assisted by a vertically oriented hand. The conjoint use of vision, precision grasping, and hand posture, together with a central representation of object control, likely originated in stem anthropoids and derived from the staged evolution of the visual manipulation of food and other objects.
Award ID(s): 1945771, 1945767, 2316863, 1944915
PAR ID: 10524554
Author(s) / Creator(s):
Publisher / Repository: Animal Behavior and Cognition
Date Published:
Journal Name: Animal Behavior and Cognition
Volume: 11
Issue: 1
ISSN: 2372-4323
Page Range / eLocation ID: 1 to 23
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This paper explores a novel approach to dexterous manipulation, aimed at levels of speed, precision, robustness, and simplicity suitable for practical deployment. The enabling technology is a Direct-drive Hand (DDHand) comprising two fingers, with two DOFs each, that exhibit high speed and a light touch. The test application is the dexterous manipulation of three small and irregular parts, moving them to a grasp suitable for a subsequent assembly operation, regardless of initial presentation. We employed four primitive behaviors that use ground contact as a "third finger," prior to or during the grasp process: pushing, pivoting, toppling, and squeeze-grasping. In our experiments, each part was presented 30 to 90 times, randomly positioned in each stable pose. Success rates varied from 83% to 100%. The time to manipulate and grasp averaged 6.32 seconds, varying from 2.07 to 16 seconds. In some cases, performance was robust, precise, and fast enough for practical applications; in other cases, pose uncertainty required time-consuming vision and arm motions. The paper concludes with a discussion of further improvements required to make the primitives robust, eliminate uncertainty, and reduce the dependence on vision and arm motion.
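A minimal sketch of how the pose-conditioned primitive selection described above might be organized. Only the four primitive names come from the abstract; the pose labels, the mapping, and the function names are illustrative assumptions, not the DDHand system's actual interface.

```python
# Hypothetical pose-to-primitive mapping; the primitives (push, pivot,
# topple, squeeze-grasp) are from the abstract, the pose names are assumed.
PRIMITIVE_FOR_POSE = {
    "lying_flat": "topple",   # stand the part up against the ground
    "on_edge": "pivot",       # rotate the part about a ground contact line
    "offset": "push",         # slide the part into the graspable region
}

def plan_grasp_sequence(stable_pose: str) -> list[str]:
    """Return the primitive sequence from a detected stable pose to an
    assembly-ready grasp; every sequence ends with a squeeze-grasp."""
    prep = PRIMITIVE_FOR_POSE.get(stable_pose)  # None if already graspable
    return ([prep] if prep else []) + ["squeeze_grasp"]

print(plan_grasp_sequence("lying_flat"))  # ['topple', 'squeeze_grasp']
```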
  2. We consider the problem of in-hand dexterous manipulation, focusing on unknown or uncertain hand–object parameters such as hand configuration, object pose within the hand, and contact positions. In particular, we formulate a generic framework for hand–object configuration estimation, using underactuated hands as an example. Owing to their passive reconfigurability and the lack of encoders in the hand's joints, underactuated manipulation is challenging to estimate, plan, and actively control. By modeling the grasp constraints, we present a particle-filter-based framework to estimate the hand configuration. Specifically, given an arbitrary grasp, we start by sampling a set of hand-configuration hypotheses and then randomly manipulate the object within the hand. Observing the object's movements as evidence through an external camera, which need not be calibrated to the hand frame, our estimator calculates the likelihood of each hypothesis to iteratively estimate the hand configuration. Once converged, the estimator is used to track the hand configuration in real time for subsequent manipulations. We then develop an algorithm to precisely plan and control the underactuated manipulation so as to move the grasped object to desired poses. In contrast to most other dexterous manipulation approaches, our framework requires no tactile sensing or joint encoders and can operate directly on novel objects without an a priori object model. We implemented our framework on both the Yale Model O hand and the Yale T42 hand. The results show that the estimation is accurate for different objects and that the framework adapts easily across underactuated hand models. Finally, we evaluated our planning and control algorithm on handwriting tasks, demonstrating the effectiveness of the proposed framework.
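A minimal sketch of one update of the particle-filter loop described above, assuming the caller supplies a per-hypothesis motion-prediction model; `predict_object_motion`, the Gaussian likelihood, and the resampling threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def particle_filter_step(particles, weights, action, observed_motion,
                         predict_object_motion, noise_std=0.05):
    """One update: score each hand-configuration hypothesis by how well it
    explains the object motion seen by the (uncalibrated) external camera."""
    weights = np.asarray(weights, dtype=float)
    for i, config in enumerate(particles):
        predicted = predict_object_motion(config, action)  # assumed callable
        err = np.linalg.norm(np.asarray(observed_motion) - predicted)
        # Gaussian likelihood of the observed motion under this hypothesis.
        weights[i] *= np.exp(-0.5 * (err / noise_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses (assumed criterion).
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = [particles[j] for j in idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

Repeating this step over a sequence of random in-hand manipulations concentrates the particles on hand configurations consistent with all observed motions, which is the convergence behavior the abstract describes.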
  3. This paper explores the problem of autonomous in-hand regrasping: moving from an initial grasp on an object to a desired grasp using the dexterity of a robot's fingers. We propose a planner for this problem that alternates between finger gaiting and in-grasp manipulation. Finger gaiting enables the robot to move a single finger to a new contact location on the object while the remaining fingers stably hold the object. In-grasp manipulation moves the object to a new pose relative to the robot's palm while maintaining the contact locations between the hand and object. Given the object's geometry (as a mesh), the hand's kinematic structure, and the initial and desired grasps, we plan a sequence of finger gaits and object re-posing actions that reaches the desired grasp without dropping the object. We propose an optimization-based approach and report in-hand regrasping plans for 5 objects over 5 in-hand regrasp goals each. The plans generated by our planner are collision-free and guarantee kinematic feasibility.
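A schematic sketch of the gait/re-pose alternation the planner performs; `gait_step` and `in_grasp_move` stand in for the optimization-based subroutines described above and are placeholders, not the paper's API.

```python
def plan_in_hand_regrasp(grasp, goal, gait_step, in_grasp_move, max_iters=50):
    """Alternate finger gaits and in-grasp object re-posing until the
    current grasp matches the goal grasp; returns the action sequence."""
    plan = []
    for _ in range(max_iters):
        if grasp.matches(goal):  # assumed predicate on grasp objects
            return plan
        # Relocate one finger to its goal contact; the others hold the object.
        finger, grasp = gait_step(grasp, goal)
        plan.append(("gait", finger))
        # Re-pose the object so the next gait stays kinematically feasible.
        pose, grasp = in_grasp_move(grasp, goal)
        plan.append(("repose", pose))
    return None  # no collision-free, kinematically feasible plan found
```

The alternation matters because each finger gait shrinks the feasible workspace of the remaining fingers; re-posing the object between gaits restores the slack the next gait needs.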
  4. Control of reach-to-grasp movements for deft and robust interactions with objects requires rapid sensorimotor updating that enables online adjustments to changing external goals (e.g., perturbations or instability of objects we interact with). We rarely appreciate the remarkable coordination in reach-to-grasp until control becomes impaired by neurological injury such as stroke, by neurodegenerative disease, or even by aging. Modeling online control of human reach-to-grasp movements is a challenging problem but fundamental to several domains, including behavioral and computational neuroscience, neurorehabilitation, neural prostheses, and robotics. Currently, there are no publicly available datasets that include online adjustment of reach-to-grasp movements to object perturbations. This work aims to advance modeling efforts of reach-to-grasp movements by making publicly available a large kinematic and EMG dataset of online adjustments of reach-to-grasp movements to instantaneous perturbations of object size and distance, performed in an immersive haptic-free virtual environment (hf-VE). The dataset comprises a large number of perturbation types (10 each for object size and distance) applied at three different latencies after the start of the movement.
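A hypothetical layout for a single trial record in a dataset of this shape; every field name and array shape here is an illustrative assumption, not the published schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReachToGraspTrial:
    """One perturbed reach-to-grasp trial (assumed structure)."""
    perturbation_kind: str   # "size" or "distance"
    perturbation_type: int   # 1-10 within each kind, per the abstract
    latency_ms: int          # perturbation onset after movement start
    kinematics: np.ndarray   # (T, n_markers, 3) hand/arm marker trajectories
    emg: np.ndarray          # (T, n_muscles) surface EMG channels
```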
  5. Eye gaze is an important source of information for animals, implicated in communication, cooperation, hunting and antipredator behaviour. Gaze perception and its cognitive underpinnings are much studied in primates, but the specific features used to estimate gaze can be difficult to isolate behaviourally. We photographed 13 laboratory-housed tufted capuchin monkeys (Sapajus [Cebus] apella) to quantify chromatic and achromatic contrasts between their iris, pupil, sclera and skin. We used colour vision models to quantify the degree to which capuchin eye gaze is discriminable to capuchins, their predators and their prey. We found that capuchins, regardless of their colour vision phenotype, as well as their predators, were capable of effectively discriminating capuchin gaze across ecologically relevant distances. Their prey, in contrast, could not discriminate capuchin gaze even under relatively ideal conditions. These results suggest that specific features of primate eyes can influence gaze perception, both within and across species.
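The "colour vision models" in this literature are commonly receptor-noise-limited models; below is a minimal sketch of the trichromatic Vorobyev–Osorio (1998) chromatic distance in just-noticeable differences (JNDs), with placeholder quantum catches and noise values rather than the study's measurements.

```python
import numpy as np

def chromatic_contrast_trichromat(q_a, q_b, e):
    """Chromatic distance (in JNDs) between stimuli a and b for a
    trichromatic viewer under the receptor-noise-limited model.
    q_a, q_b: receptor quantum catches (3,); e: receptor noise (3,)."""
    # Weber-Fechner receptor signals: log ratio of quantum catches.
    df = np.log(np.asarray(q_a, float) / np.asarray(q_b, float))
    e1, e2, e3 = e
    num = (e1**2 * (df[2] - df[1])**2
           + e2**2 * (df[2] - df[0])**2
           + e3**2 * (df[0] - df[1])**2)
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return np.sqrt(num / den)

# Example with made-up sclera vs. iris catches and uniform noise;
# a distance above ~1 JND counts as discriminable under this model.
print(chromatic_contrast_trichromat([0.8, 0.6, 0.4], [0.5, 0.5, 0.5],
                                    [0.05, 0.05, 0.05]))
```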