Modeling the parameters of a hand-object system is crucial for precise and controllable robotic in-hand manipulation because it yields the mapping from the hand’s actuation input to the object’s motion. Rather than assuming that most of these model parameters are known a priori or can be easily estimated by sensors, we focus on equipping robots with the ability to actively self-identify the necessary model parameters using minimal sensing. Here, we derive algorithms, based on the concept of virtual linkage-based representations (VLRs), to self-identify the underlying mechanics of hand-object systems via exploratory manipulation actions and probabilistic reasoning and, in turn, show that the self-identified VLR can enable precise in-hand manipulation control. To validate our framework, we instantiated the proposed system on a Yale Model O hand without joint encoders or tactile sensors. The passive adaptability of the underactuated hand greatly facilitates the self-identification process, because it naturally secures stable hand-object interactions during random exploration. Relying solely on an in-hand camera, our system can effectively self-identify the VLRs, even when some fingers are replaced with novel designs. In addition, we show in-hand manipulation applications of handwriting, marble maze playing, and cup stacking to demonstrate the effectiveness of the VLR in precise in-hand manipulation control.
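The abstract does not spell out the identification algorithm, but the core idea of learning an input-to-motion map from exploratory actions can be illustrated with a toy sketch. Everything below is an assumption for illustration: a linear map `J_true` stands in for the VLR, random actuation samples stand in for exploratory manipulation, and a least-squares fit stands in for the probabilistic reasoning step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth mapping from actuator inputs (2 motors) to
# planar object motion (dx, dy); in the paper this role is played by the
# virtual linkage-based representation (VLR).
J_true = np.array([[0.8, -0.3],
                   [0.2,  0.9]])

# Exploratory phase: apply random actuation and observe the resulting
# object motion (here, simulated camera measurements with sensor noise).
U = rng.uniform(-1.0, 1.0, size=(50, 2))            # actuation samples
X = U @ J_true.T + 0.01 * rng.normal(size=(50, 2))  # observed motions

# Self-identification: least-squares fit of the input-to-motion map.
B, *_ = np.linalg.lstsq(U, X, rcond=None)
J_hat = B.T

# Control: invert the identified map to realize a desired object motion.
dx_desired = np.array([0.05, 0.02])
u_cmd = np.linalg.solve(J_hat, dx_desired)
```

Once `J_hat` is identified, commanding `u_cmd` reproduces the desired object displacement under the fitted model, which is the sense in which a self-identified representation enables manipulation control.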
Spatial Manipulation in Virtual Peripersonal Space: A Study of Motor Strategies
Abstract This article studies fine motor strategies for precise spatial manipulation in close-to-body interactions. Our innate ability for precise work is the result of the confluence of visuo-tactile perception, proprioception, and bi-manual motor control. Contrary to this, most mixed-reality (MR) systems are designed for interactions at arm's length. To develop guidelines for precise manipulations in MR systems, there is a need for a systematic study of motor strategies including physical indexing, bi-manual coordination, and the relationship between visual and tactile feedback. To address this need, we present a series of experiments using three variations of a tablet-based MR interface with a close-range motion capture system and motion-tracked shape proxies. We investigate an elaborate version of the classic peg-and-hole task; our results strongly suggest the critical need for high-precision tracking to enable precise manipulation.
- Award ID(s): 2008800
- PAR ID: 10339253
- Date Published:
- Journal Name: Journal of Computing and Information Science in Engineering
- Volume: 23
- Issue: 2
- ISSN: 1530-9827
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Automating the operation of objects has made life easier and more convenient for billions of people, especially those with limited motor capabilities. On the other hand, even able-bodied users might not always be able to perform manual operations (e.g., when both hands are occupied), and manual operations might be undesirable for hygiene reasons (e.g., contactless devices). As a result, automation systems like motion-triggered doors, remote-control window shades, and contactless toilet lids have become increasingly popular in private and public environments. Yet, these systems are hampered by complex building wiring or short battery lifetimes, negating their positive benefits for accessibility, energy saving, healthcare, and other domains. In this paper we explore how these types of objects can be powered in perpetuity by the energy generated from a unique energy source - user interactions, specifically, the manual manipulations of objects by users who can perform them, when they can perform them. Our assumption is that users' capabilities for object operations are heterogeneous, that most environments call for both manual and automatic operation, and that automatic operations are often not needed as frequently - for example, an automatic door in a public space is often manually opened many times before a need for automatic operation arises. The energy harvested from those manual operations can be sufficient to power that one automatic operation. We instantiate this idea by upcycling common everyday objects with devices that combine various mechanical designs with a general-purpose backbone embedded system. We call these devices MiniKers. We built a custom driver circuit that enables motor mechanisms to toggle between generating power (i.e., manual operation) and actuating objects (i.e., automatic operation).
We designed a wide variety of mechanical mechanisms to retrofit existing objects and evaluated our system with a 48-hour deployment study, which demonstrates the efficacy of MiniKers and sheds light on this people-as-power approach as a feasible way to meet the energy needs of smart-environment automation.
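The many-manual-operations-per-automatic-operation argument above is at heart a back-of-the-envelope energy budget. The sketch below illustrates that arithmetic; all numeric values (harvest per operation, storage efficiency, actuation cost) are invented for illustration and are not figures from the paper.

```python
import math

# Back-of-the-envelope budget for the people-as-power idea.
# All numbers are illustrative assumptions, not measurements.
harvest_per_manual_op_j = 0.5   # energy recovered per manual operation (J)
storage_efficiency = 0.7        # fraction surviving conversion and storage
cost_per_auto_op_j = 12.0       # energy to drive one automatic operation (J)

stored_per_manual_op = harvest_per_manual_op_j * storage_efficiency

# How many manual operations must occur before one automatic
# operation can be powered from the harvested reserve.
manual_ops_needed = math.ceil(cost_per_auto_op_j / stored_per_manual_op)
print(manual_ops_needed)
```

Under these assumed numbers, a few dozen manual operations bank enough energy for one automatic one, which matches the intuition of a public door being opened by hand many times between automatic openings.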
-
Adults aged 65 years and older are the fastest growing age group worldwide. Future autonomous vehicles may help to support the mobility of older individuals; however, these cars will not be widely available for several decades, and current semi-autonomous vehicles often require manual takeover in unusual driving conditions. In these situations, the vehicle issues a takeover request in any uni-, bi-, or trimodal combination of visual, auditory, or tactile alerts to signify the need for manual intervention. However, to date, it is not clear whether age-related differences exist in the perceived ease of detecting these alerts. Also, the extent to which engagement in non-driving-related tasks affects this perception in younger and older drivers is not known. Therefore, the goal of this study was to examine the effects of age on the ease of perceiving takeover requests in different sensory channels and on attention allocation during conditional driving automation. Twenty-four younger and 24 older adults drove a simulated SAE Level 3 vehicle under three conditions - baseline, while performing a non-driving-related task, and while engaged in a driving-related task - and were asked to rate the ease of detecting uni-, bi-, or trimodal combinations of visual, auditory, or tactile signals. Both age groups found the trimodal alert the easiest to detect. Also, older adults focused more on the road than on the secondary task compared with younger drivers. Findings may inform the development of the next generation of autonomous vehicle systems to be safe for a wide range of age groups.
-
Motor impairments caused by stroke significantly affect daily activities and reduce quality of life, highlighting the need for effective rehabilitation strategies. This study presents a novel approach to classifying motor tasks using EEG data from acute stroke patients, focusing on left-hand motor imagery, right-hand motor imagery, and rest states. By using advanced source localization techniques, such as Minimum Norm Estimation (MNE), dipole fitting, and beamforming, integrated with a customized Residual Convolutional Neural Network (ResNetCNN) architecture, we achieved superior spatial pattern recognition in EEG data. Our approach yielded classification accuracies of 91.03% with dipole fitting, 89.07% with MNE, and 87.17% with beamforming, markedly surpassing the 55.57% to 72.21% range of traditional sensor-domain methods. These results highlight the efficacy of transitioning from the sensor to the source domain in capturing precise brain activity. The enhanced accuracy and reliability of our method hold significant potential for advancing brain-computer interfaces (BCIs) in neurorehabilitation. This study emphasizes the importance of using advanced EEG classification techniques to provide clinicians with precise tools for developing individualized therapy plans, potentially leading to substantial improvements in motor function recovery and overall patient outcomes. Future work will focus on integrating these techniques into practical BCI systems and assessing their long-term impact on stroke rehabilitation.
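The sensor-to-source transition mentioned above can be made concrete with the standard Minimum Norm Estimate, which maps sensor readings x back to source space through the lead-field matrix L via s = Lᵀ(LLᵀ + λI)⁻¹x. The sketch below uses a random synthetic lead field and a single active source; the dimensions, regularization value, and data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sensors, n_sources = 32, 200
L = rng.normal(size=(n_sensors, n_sources))   # synthetic lead-field matrix

# Simulated sensor snapshot produced by a single active source (index 42).
s_true = np.zeros(n_sources)
s_true[42] = 1.0
x = L @ s_true + 0.01 * rng.normal(size=n_sensors)

# Minimum Norm Estimate: s_hat = L^T (L L^T + lam*I)^{-1} x
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), x)

# The largest-magnitude estimated source should coincide with the
# truly active one, recovering spatial structure lost at the sensors.
recovered = int(np.argmax(np.abs(s_hat)))
```

In the study, source estimates like `s_hat` (from MNE, dipole fitting, or beamforming) become the spatial input to the ResNetCNN classifier instead of raw sensor channels.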
-
Designing effective rehabilitation strategies for the upper extremities, particularly hands and fingers, underscores the need for a computational model of human motor learning. The large number of degrees of freedom (DoFs) available in these systems makes it difficult to balance the trade-off between learning full dexterity and accomplishing manipulation goals. The motor learning literature argues that humans use motor synergies to reduce the dimension of the control space. Using the low-dimensional space spanned by these synergies, we develop a computational model based on the internal model theory of motor control. We analyze the proposed model in terms of its convergence properties and fit it to data collected from human experiments. We compare the performance of the fitted model to the experimental data and show that it captures human motor learning behavior well.
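Synergy-based dimensionality reduction of the kind described above is commonly extracted with PCA: a few components explain most of the variance in high-DoF joint data. The sketch below is illustrative only - the DoF count, synergy count, and simulated data are assumptions, not the study's dataset or model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated joint-angle data: 20 DoFs driven by 3 underlying synergies,
# mimicking the low-dimensional structure the motor-learning
# literature reports for hand and finger movements.
n_samples, n_dofs, n_syn = 500, 20, 3
W_true = rng.normal(size=(n_dofs, n_syn))   # synergy basis vectors
C = rng.normal(size=(n_samples, n_syn))     # per-trial activation levels
joint_angles = C @ W_true.T + 0.05 * rng.normal(size=(n_samples, n_dofs))

# Extract synergies via PCA (SVD of the centered data matrix).
Xc = joint_angles - joint_angles.mean(axis=0)
_, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = (svals ** 2) / (svals ** 2).sum()

# The first three principal components (candidate synergies) should
# capture nearly all of the movement variance.
top3 = float(var_explained[:3].sum())
```

A learning model can then operate on the 3-dimensional synergy activations rather than all 20 joint angles, which is the dimension-reduction step the abstract describes.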