Title: Spatial Manipulation in Virtual Peripersonal Space: A Study of Motor Strategies
Abstract: This article studies fine motor strategies for precise spatial manipulation in close-to-body interactions. Our innate ability for precise work results from the confluence of visuo-tactile perception, proprioception, and bimanual motor control. In contrast, most mixed-reality (MR) systems are designed for interactions at arm's length. To develop guidelines for precise manipulation in MR systems, a systematic study of motor strategies is needed, covering physical indexing, bimanual coordination, and the relationship between visual and tactile feedback. To address this need, we present a series of experiments using three variations of a tablet-based MR interface, a close-range motion capture system, and motion-tracked shape proxies. We investigate an elaborate version of the classic peg-and-hole task; our results strongly suggest the critical need for high-precision tracking to enable precise manipulation.
Award ID(s):
2008800
PAR ID:
10339253
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
Journal of Computing and Information Science in Engineering
Volume:
23
Issue:
2
ISSN:
1530-9827
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The process of modeling a series of hand-object parameters is crucial for precise and controllable robotic in-hand manipulation because it enables the mapping from the hand's actuation input to the object's motion to be obtained. Without assuming that most of these model parameters are known a priori or can be easily estimated by sensors, we focus on equipping robots with the ability to actively self-identify necessary model parameters using minimal sensing. Here, we derive algorithms, on the basis of the concept of virtual linkage-based representations (VLRs), to self-identify the underlying mechanics of hand-object systems via exploratory manipulation actions and probabilistic reasoning and, in turn, show that the self-identified VLR can enable the control of precise in-hand manipulation. To validate our framework, we instantiated the proposed system on a Yale Model O hand without joint encoders or tactile sensors. The passive adaptability of the underactuated hand greatly facilitates the self-identification process, because it naturally secures stable hand-object interactions during random exploration. Relying solely on an in-hand camera, our system can effectively self-identify the VLRs, even when some fingers are replaced with novel designs. In addition, we show in-hand manipulation applications of handwriting, marble maze playing, and cup stacking to demonstrate the effectiveness of the VLR in precise in-hand manipulation control.
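The core idea of self-identifying the actuation-to-motion mapping from exploratory actions can be illustrated with a much-simplified stand-in: instead of the paper's VLR and probabilistic reasoning, the sketch below identifies a linear actuation-to-object-motion map by least squares from random exploratory actions. All quantities (the "true" map, noise level, action magnitudes) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown "true" mapping from 2 actuation inputs to planar object motion;
# a stand-in for the hand-object mechanics the robot must self-identify.
J_true = np.array([[0.9, 0.1],
                   [-0.2, 0.7]])

# Exploratory manipulation: apply small random actuation deltas and
# observe the resulting object displacements (e.g., from an in-hand camera).
actions = rng.uniform(-0.05, 0.05, size=(30, 2))
observed = actions @ J_true.T + rng.normal(0, 1e-4, size=(30, 2))

# Self-identify the actuation-to-motion map by least squares.
J_hat, *_ = np.linalg.lstsq(actions, observed, rcond=None)
J_hat = J_hat.T

# Use the identified model to command a desired object displacement.
target = np.array([0.02, -0.01])
action = np.linalg.solve(J_hat, target)
```

Once identified, the model is inverted to choose actuation inputs that produce a desired object motion, which is the sense in which self-identification enables precise manipulation control.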
  2. Automating operations of objects has made life easier and more convenient for billions of people, especially those with limited motor capabilities. On the other hand, even able-bodied users might not always be able to perform manual operations (e.g., when both hands are occupied), and manual operations might be undesirable for hygiene purposes (e.g., contactless devices). As a result, automation systems like motion-triggered doors, remote-control window shades, and contactless toilet lids have become increasingly popular in private and public environments. Yet, these systems are hampered by complex building wiring or short battery lifetimes, negating their positive benefits for accessibility, energy saving, healthcare, and other domains. In this paper we explore how these types of objects can be powered in perpetuity by the energy generated from a unique energy source: user interactions, specifically, the manual manipulations of objects by users who can perform them, when they can perform them. Our assumption is that users' capabilities for object operations are heterogeneous, that there are desires for both manual and automatic operations in most environments, and that automatic operations are often not needed frequently. For example, an automatic door in a public space is often manually opened many times before a need for automatic operation arises; the energy harvested by those manual operations would be sufficient to power that one automatic operation. We instantiate this idea by upcycling common everyday objects with devices that have various mechanical designs powered by a general-purpose backbone embedded system. We call these devices MiniKers. We built a custom driver circuit that enables motor mechanisms to toggle between generating power (i.e., manual operation) and actuating objects (i.e., automatic operation). We designed a wide variety of mechanical mechanisms to retrofit existing objects and evaluated our system with a 48-hour deployment study, which demonstrates the efficacy of MiniKers and sheds light on this people-as-power approach as a feasible solution to address the energy needed for smart-environment automation.
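The people-as-power argument above rests on a simple energy budget: many small harvests from manual operations fund one costly automatic operation. A toy sketch of that accounting follows; the energy figures are illustrative assumptions, not measurements from the MiniKers hardware.

```python
# Toy energy budget for a people-as-power device: each manual operation
# harvests a small amount of energy; an automatic operation spends more.
# Both figures below are assumed for illustration only.
HARVEST_PER_MANUAL_MJ = 50   # energy banked per manual operation (assumed)
COST_PER_AUTO_MJ = 400       # energy one automatic operation needs (assumed)

class EnergyBank:
    """Tracks energy harvested from manual operations of one object."""

    def __init__(self):
        self.stored_mj = 0

    def manual_operation(self):
        # Motor runs as a generator during a manual operation.
        self.stored_mj += HARVEST_PER_MANUAL_MJ

    def try_automatic_operation(self):
        # Motor runs as an actuator, if enough energy is banked.
        if self.stored_mj >= COST_PER_AUTO_MJ:
            self.stored_mj -= COST_PER_AUTO_MJ
            return True
        return False

bank = EnergyBank()
for _ in range(8):          # eight manual door openings...
    bank.manual_operation()
bank.try_automatic_operation()  # ...fund one automatic opening
```

Under these assumed numbers, eight manual operations bank exactly enough energy for one automatic one, mirroring the paper's observation that automatic operations are needed far less often than manual ones occur.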
  3. Adults aged 65 years and older are the fastest-growing age group worldwide. Future autonomous vehicles may help to support the mobility of older individuals; however, these cars will not be widely available for several decades, and current semi-autonomous vehicles often require manual takeover in unusual driving conditions. In these situations, the vehicle issues a takeover request in any uni-, bi-, or trimodal combination of visual, auditory, or tactile alerts to signify the need for manual intervention. However, to date, it is not clear whether age-related differences exist in the perceived ease of detecting these alerts. Also, the extent to which engagement in non-driving-related tasks affects this perception in younger and older drivers is not known. Therefore, the goal of this study was to examine the effects of age on the ease of perceiving takeover requests in different sensory channels and on attention allocation during conditional driving automation. Twenty-four younger and 24 older adults drove a simulated SAE Level 3 vehicle under three conditions: baseline, while performing a non-driving-related task, and while engaged in a driving-related task, and were asked to rate the ease of detecting uni-, bi-, or trimodal combinations of visual, auditory, or tactile signals. Both age groups found the trimodal alert to be the easiest to detect. Also, older adults focused more on the road than on the secondary task compared to younger drivers. Findings may inform the development of the next generation of autonomous vehicle systems to be safe for a wide range of age groups.
  4. Designing effective rehabilitation strategies for upper extremities, particularly hands and fingers, motivates the need for a computational model of human motor learning. The large number of degrees of freedom (DoFs) available in these systems makes it difficult to balance the trade-off between learning the full dexterity and accomplishing manipulation goals. The motor learning literature argues that humans use motor synergies to reduce the dimension of the control space. Using the low-dimensional space spanned by these synergies, we develop a computational model based on the internal model theory of motor control. We analyze the proposed model in terms of its convergence properties and fit it to data collected from human experiments. We compare the performance of the fitted model to the experimental data and show that it captures human motor learning behavior well.
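The dimensionality reduction via motor synergies described above is commonly computed with principal component analysis over recorded hand postures; each synergy is a direction of correlated joint motion, and a few synergy activations replace the full set of joint angles. A minimal sketch, using random stand-in data rather than the study's recordings, and with the DoF and synergy counts chosen for illustration:

```python
import numpy as np

# Hypothetical data: 200 recorded hand postures, each with 20 joint angles.
rng = np.random.default_rng(0)
postures = rng.standard_normal((200, 20))

# Extract motor synergies as the top principal components of the posture
# data, i.e., directions of correlated joint motion.
mean_posture = postures.mean(axis=0)
centered = postures - mean_posture
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

n_synergies = 3                       # control 3 synergies instead of 20 DoFs
synergies = components[:n_synergies]  # (3, 20) basis vectors in joint space
activations = centered @ synergies.T  # (200, 3) low-dimensional coordinates

# Reconstruct full 20-DoF postures from the 3 synergy activations.
reconstructed = activations @ synergies + mean_posture
```

A learning model can then operate on the three activation variables, which is the sense in which synergies balance dexterity against tractability of the control space.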
  5. Learning a robot motor skill from scratch is impractically slow; so much so that in practice, learning must typically be bootstrapped using human demonstration. However, relying on human demonstration necessarily degrades the autonomy of robots that must learn a wide variety of skills over their operational lifetimes. We propose using kinematic motion planning as a completely autonomous, sample-efficient way to bootstrap motor skill learning for object manipulation. We demonstrate the use of motion planners to bootstrap motor skills in two complex object manipulation scenarios with different policy representations: opening a drawer with a dynamic movement primitive representation, and closing a microwave door with a deep neural network policy. We also show how our method can bootstrap a motor skill for the challenging dynamic task of learning to hit a ball off a tee, where a kinematic plan based on treating the scene as static is insufficient to solve the task, but sufficient to bootstrap a more dynamic policy. In all three cases, our method is competitive with human-demonstrated initialization, and significantly outperforms starting with a random policy. This approach enables robots to efficiently and autonomously learn motor policies for dynamic tasks without human demonstration.
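The bootstrapping idea above, using a planner's trajectory in place of a human demonstration, can be sketched in miniature: treat the kinematic plan's waypoints as a demonstration and clone them into an initial policy by regression, which learning would then refine. The trajectory, state dimensions, and ridge regularizer below are all illustrative assumptions, not the paper's DMP or neural-network setup.

```python
import numpy as np

# Pretend a kinematic motion planner returned a joint-space trajectory
# (hypothetical waypoints for, e.g., a drawer-opening motion).
plan = np.linspace([0.0, 0.0], [0.8, -0.4], num=50)  # (50, 2) waypoints

# Bootstrap a policy by behavior cloning on the plan: regress the next
# waypoint from the current one with ridge-regularized least squares.
X = np.hstack([plan[:-1], np.ones((49, 1))])  # current state + bias term
Y = plan[1:]                                  # "action": the next state
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(3), X.T @ Y)

# Roll the cloned policy out from the plan's start state; it should
# retrace the planned motion, giving learning a non-random starting point.
state = plan[0]
for _ in range(49):
    state = np.hstack([state, 1.0]) @ W
```

In the paper's dynamic tee-ball case, such a rollout would not solve the task by itself, but it still places the initial policy near useful behavior, which is exactly the bootstrapping role the plan serves.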