Search for: All records

Creators/Authors contains: "Agrawal, Pulkit"


  1. In-hand object reorientation is necessary for performing many dexterous manipulation tasks, such as tool use in less structured environments, which remain beyond the reach of current robots. Prior works built reorientation systems that assume one or more of the following: reorienting only specific objects with simple shapes, a limited range of reorientation, slow or quasi-static manipulation, simulation-only results, the need for specialized and costly sensor suites, and other constraints that make the systems infeasible for real-world deployment. We present a general object reorientation controller that does not make these assumptions. It uses readings from a single commodity depth camera to dynamically reorient complex and new object shapes by any rotation in real time, with a median reorientation time close to 7 seconds. The controller was trained using reinforcement learning in simulation and evaluated in the real world on new object shapes not used for training, including the most challenging scenario of reorienting objects held in the air by a downward-facing hand that must counteract gravity during reorientation. Our hardware platform used only open-source components that cost less than 5,000 dollars. Although we demonstrate the ability to overcome assumptions made in prior work, there is ample scope for improving absolute performance. For instance, the challenging duck-shaped object not used for training was dropped in 56% of the trials. When it was not dropped, our controller reoriented the object to within 0.4 radians (23°) 75% of the time.
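The entry above describes a controller, trained with reinforcement learning in simulation, that reorients objects using a single commodity depth camera. As a rough illustration of the kind of vision-based policy such a system might use, the sketch below maps one depth frame plus hand proprioception to normalized joint targets. The network architecture, observation shapes, and action dimension are placeholders chosen for illustration, not values from the paper, and PyTorch is assumed to be available.

```python
# Illustrative sketch only: the actual network design, observation space, and
# training setup in the paper may differ. Assumes PyTorch is installed.
import torch
import torch.nn as nn

class ReorientationPolicy(nn.Module):
    """Maps a single depth image plus hand proprioception to joint targets.

    Layer sizes, observation shapes, and the action dimension below are
    placeholders for illustration, not values taken from the paper.
    """
    def __init__(self, depth_shape=(1, 64, 64), proprio_dim=24, action_dim=16):
        super().__init__()
        # Small convolutional encoder for the depth image.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(depth_shape[0], 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            enc_dim = self.depth_encoder(torch.zeros(1, *depth_shape)).shape[1]
        # MLP head fusing depth features with joint positions/velocities.
        self.head = nn.Sequential(
            nn.Linear(enc_dim + proprio_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # normalized joint targets
        )

    def forward(self, depth, proprio):
        features = torch.cat([self.depth_encoder(depth), proprio], dim=-1)
        return self.head(features)

# Single forward pass on random data, standing in for one control step.
policy = ReorientationPolicy()
depth = torch.rand(1, 1, 64, 64)   # one depth-camera frame (placeholder)
proprio = torch.rand(1, 24)        # joint angles/velocities (placeholder)
action = policy(depth, proprio)    # joint position targets in [-1, 1]
print(action.shape)
```

A full pipeline would wrap a policy like this in a simulated reorientation environment and optimize it with an on-policy RL algorithm before transferring to hardware; that training loop is well beyond this snippet.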
  2. Current model-based reinforcement learning methods struggle when operating from complex visual scenes due to their inability to prioritize task-relevant features. To mitigate this problem, we propose learning Task Informed Abstractions (TIA) that explicitly separates reward-correlated visual features from distractors. For learning TIA, we introduce the formalism of Task Informed MDP (TiMDP) that is realized by training two models that learn visual features via cooperative reconstruction, but one model is adversarially dissociated from the reward signal. Empirical evaluation shows that TIA leads to significant performance gains over state-of-the-art methods on many visual control tasks where natural and unconstrained visual distractions pose a formidable challenge. Project page: https://xiangfu.co/tia
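The TIA abstract above hinges on training two latent models that reconstruct the scene cooperatively while one of them is adversarially dissociated from the reward signal. The sketch below illustrates only that dissociation idea, using simple MLP encoders and a gradient-reversal probe on the distractor latent; the module names, sizes, and the gradient-reversal trick itself are illustrative assumptions and do not reproduce the paper's actual recurrent world-model implementation (PyTorch assumed).

```python
# Minimal sketch of the reward-dissociation idea, written against plain PyTorch.
# All names and sizes are illustrative assumptions, not the paper's models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

obs_dim, latent_dim = 32, 8
task_enc = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
dist_enc = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(2 * latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))
task_reward_head = nn.Linear(latent_dim, 1)
dist_reward_head = nn.Linear(latent_dim, 1)  # adversarial probe

def tia_losses(obs, reward):
    z_task, z_dist = task_enc(obs), dist_enc(obs)
    # Cooperative reconstruction: both latents jointly explain the observation.
    recon = decoder(torch.cat([z_task, z_dist], dim=-1))
    recon_loss = F.mse_loss(recon, obs)
    # The task latent should predict reward.
    task_reward_loss = F.mse_loss(task_reward_head(z_task).squeeze(-1), reward)
    # The distractor latent is dissociated from reward: the probe head tries to
    # predict reward, while the reversed gradient pushes the distractor encoder
    # to remove reward information.
    adv_pred = dist_reward_head(GradReverse.apply(z_dist)).squeeze(-1)
    adv_loss = F.mse_loss(adv_pred, reward)
    return recon_loss + task_reward_loss + adv_loss

obs, reward = torch.randn(16, obs_dim), torch.randn(16)
loss = tia_losses(obs, reward)
loss.backward()  # one illustrative optimization step would follow
print(float(loss))
```

In this toy setup, the reconstruction term keeps both latents informative about the observation, while the reversed gradient drives the distractor encoder to discard whatever the probe head could use to predict reward.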