Search Results
Search for: All records
Total Resources: 4
Filter by Author / Creator
- Qi, Haozhi (4)
- Ma, Yi (3)
- Wright, John (3)
- You, Chong (3)
- Yu, Yaodong (3)
- Chan, Kwan Ho (2)
- Calandra, Roberto (1)
- Chan, Kwan Ho Ryan (1)
- Fan, Taosha (1)
- Kaess, Michael (1)
- Kalakrishnan, Mrinal (1)
- Lambeta, Mike (1)
- Malik, Jitendra (1)
- Mukadam, Mustafa (1)
- Ortiz, Joseph (1)
- Pineda, Luis (1)
- Suresh, Sudharshan (1)
- Wu, Tingfan (1)
Filter by Editor
- Yashinski, Melisa (1)
- Yashinski, Melisa (Ed.) To achieve human-level dexterity, robots must infer spatial awareness from multimodal sensing to reason over contact interactions. During in-hand manipulation of novel objects, such spatial awareness involves estimating the object's pose and shape. The status quo for in-hand perception primarily uses vision and is restricted to tracking a priori known objects. Moreover, visual occlusion of objects in hand is imminent during manipulation, preventing current systems from pushing beyond tasks without occlusion. We combined vision and touch sensing on a multifingered hand to estimate an object's pose and shape during in-hand manipulation. Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem. We studied multimodal in-hand perception in simulation and the real world, interacting with different objects via a proprioception-driven policy. Our experiments showed final reconstruction F-scores of 81% and average pose drifts of 4.7 millimeters, which were further reduced to 2.3 millimeters with known object models. In addition, we observed that, under heavy visual occlusion, we could achieve improvements in tracking of up to 94% compared with vision-only methods. Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation. We release our evaluation dataset of 70 experiments, FeelSight, as a step toward benchmarking in this domain. Our neural representation driven by multimodal sensing can serve as a perception backbone toward advancing robot dexterity. (A rough code sketch of this neural-field tracking idea follows the results list.) Free, publicly accessible full text available November 13, 2025.
- Chan, Kwan Ho Ryan; Yu, Yaodong; You, Chong; Qi, Haozhi; Wright, John; Ma, Yi (Journal of Machine Learning Research)
- Chan, Kwan Ho; Yu, Yaodong; You, Chong; Qi, Haozhi; Wright, John; Ma, Yi (Journal of Machine Learning Research)
- Chan, Kwan Ho; Yu, Yaodong; You, Chong; Qi, Haozhi; Wright, John; Ma, Yi (NeurIPS Beyond Backpropagation Workshop)
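The NeuralFeels entry above couples two estimates: object shape, learned online as a neural field, and object pose, tracked by optimization. The sketch below is a minimal illustration of that coupling, not the authors' implementation: it fits a small signed-distance MLP and a translation-only pose to fused vision and touch surface points. Every name in it (SDFNet, vision_pts, touch_pts) is a hypothetical stand-in, and the paper's actual system optimizes a full SE(3) pose graph with additional terms.

```python
# Minimal sketch in the spirit of the NeuralFeels abstract above -- NOT the
# authors' code. A tiny MLP stands in for the neural signed-distance field
# (SDF), and a translation-only pose stands in for the paper's SE(3) pose
# graph. All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Coordinate MLP mapping 3D points to a signed distance to the surface."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        return self.net(pts).squeeze(-1)

# Hypothetical per-step observations: surface points from a depth camera and
# from tactile sensors, fused into a single world-frame point cloud.
vision_pts = torch.randn(256, 3)
touch_pts = torch.randn(32, 3)

sdf = SDFNet()
translation = torch.zeros(3, requires_grad=True)  # simplified object pose
opt = torch.optim.Adam(list(sdf.parameters()) + [translation], lr=1e-3)

for step in range(200):
    opt.zero_grad()
    # Map world-frame observations into the object frame using the pose.
    pts = torch.cat([vision_pts, touch_pts]) - translation
    # Observed surface points should sit on the SDF's zero level set; the
    # same residual drives both the shape network and the pose variable.
    loss = sdf(pts).abs().mean()
    # NOTE: alone, this objective admits a trivial all-zero field; real
    # systems add free-space and eikonal terms to rule that out.
    loss.backward()
    opt.step()
```

The point of the sketch is that a single loss backpropagates into both the shape network and the pose variable, which is the joint shape-and-pose estimation the abstract describes.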