

Title: Where2Act: From Pixels to Actions for Articulated 3D Objects
One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment. In this paper, we take a step towards that long-term goal – we extract highly localized actionable information related to elementary actions such as pushing or pulling for articulated objects with movable parts. For example, given a drawer, our network predicts that applying a pulling force on the handle opens the drawer. We propose, discuss, and evaluate novel network architectures that, given image and depth data, predict the set of actions possible at each pixel, and the regions over articulated parts that are likely to move under the force. We propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation (SAPIEN) and generalizes across categories. Check the website for code and data release.
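As a rough illustration of the per-pixel prediction task described in the abstract, the following is a minimal PyTorch sketch: score every pixel of an RGB-D observation for how likely a given action primitive (e.g. push or pull) is to move an articulated part there. The actual Where2Act model operates on 3D point clouds and also predicts interaction orientations; the small convolutional head below and all of its names are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of per-pixel "actionability" scoring over an RGB-D frame.
import torch
import torch.nn as nn

class ActionabilityNet(nn.Module):
    def __init__(self, num_primitives: int = 2):   # e.g. push, pull
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 4 channels: RGB + depth
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One score per pixel per primitive: "would this action move a part here?"
        self.head = nn.Conv2d(64, num_primitives, 1)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.encoder(rgbd)))   # (B, P, H, W)

if __name__ == "__main__":
    net = ActionabilityNet()
    rgbd = torch.rand(1, 4, 128, 128)        # stand-in RGB-D frame
    scores = net(rgbd)                       # per-pixel success likelihoods
    best = scores[0, 1].argmax().item()      # most promising pixel for "pull"
    print(scores.shape, divmod(best, 128))
```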
Award ID(s):
1763268
NSF-PAR ID:
10381759
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Page Range / eLocation ID:
6793 to 6803
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile manipulation tasks such as opening a door, pulling open a drawer, or lifting a toilet lid require constrained motion of the end-effector under environmental and task constraints. This, coupled with partial information in novel environments, makes it challenging to employ classical motion planning approaches at test time. Our key insight is to cast this as a learning problem and leverage past experience of solving similar planning problems to directly predict motion plans for mobile manipulation tasks in novel situations at test time. To enable this, we develop a simulator, ArtObjSim, that simulates articulated objects placed in real scenes. We then introduce SeqIK+θ0, a fast and flexible representation for motion plans. Finally, we learn models that use SeqIK+θ0 to quickly predict motion plans for articulating novel objects at test time. Experimental evaluation shows improved speed and accuracy in generating motion plans compared to pure search-based methods and pure learning methods.
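A loose sketch of the plan-representation idea in item 1 above: a predicted starting joint configuration (θ0) plus a sequence of end-effector waypoints, converted to joint-space motion by running IK sequentially, with each solve seeded by the previous solution. The two-link planar arm, the damped-least-squares IK, and every name below are invented for illustration and are not the paper's SeqIK+θ0 model.

```python
# Toy "theta_0 + sequential IK" plan representation on a 2-link planar arm.
from dataclasses import dataclass
import numpy as np

L1, L2 = 0.5, 0.4  # link lengths of a toy planar arm

def fk(theta):
    x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
    y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta):
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_solve(theta, target, iters=100, damping=1e-2):
    # Damped least-squares IK, seeded with the previous configuration.
    for _ in range(iters):
        err = target - fk(theta)
        J = jacobian(theta)
        dtheta = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        theta = theta + dtheta
    return theta

@dataclass
class SeqIKPlan:
    theta0: np.ndarray    # predicted starting joint configuration
    waypoints: list       # predicted end-effector waypoints along the articulation

    def to_joint_trajectory(self):
        traj, theta = [self.theta0], self.theta0
        for wp in self.waypoints:          # seed each solve with the last solution
            theta = ik_solve(theta, wp)
            traj.append(theta)
        return traj

plan = SeqIKPlan(theta0=np.array([0.3, 0.5]),
                 waypoints=[np.array([0.6, 0.3]), np.array([0.5, 0.5])])
print([np.round(t, 3) for t in plan.to_joint_trajectory()])
```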
  2. Many physical tasks such as pulling out a drawer or wiping a table can be modeled with geometric constraints. These geometric constraints are characterized by restrictions on kinematic trajectories and reaction wrenches (forces and moments) of objects under the influence of the constraint. This paper presents a method to infer geometric constraints involving unmodeled objects in human demonstrations using both kinematic and wrench measurements. Our approach takes a recording of a human demonstration and determines what constraints are present, when they occur, and their parameters (e.g. positions). By using both kinematic and wrench information, our methods are able to reliably identify a variety of constraint types, even if the constraints only exist for short durations within the demonstration. We present a systematic approach to fitting arbitrary scleronomic constraint models to kinematic and wrench measurements. Reaction forces are estimated from measurements by removing friction. Position, orientation, force, and moment error metrics are developed to provide systematic comparison between constraint models. By conducting a user study, we show that our methods can reliably identify constraints in realistic situations and confirm the value of including forces and moments in the model regression and selection process.
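A kinematics-only sketch of the model-fitting-and-selection idea in item 2 above: fit a revolute (circle) and a prismatic (line) constraint model to a recorded path and keep whichever has the lower residual. The paper additionally uses reaction wrenches and a broader family of scleronomic models; the synthetic data and error metrics below are illustrative assumptions.

```python
# Fit two simple constraint models to a 2D path and compare residuals.
import numpy as np

def fit_revolute(points):
    # Algebraic circle fit: x^2 + y^2 + a*x + b*y + c = 0
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    rhs = -(points[:, 0] ** 2 + points[:, 1] ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([-a / 2, -b / 2])
    radius = np.sqrt(center @ center - c)
    return np.abs(np.linalg.norm(points - center, axis=1) - radius).mean()

def fit_prismatic(points):
    # Total-least-squares line fit via PCA; residual = distance to the line.
    offsets = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(offsets)
    direction = vt[0]
    return np.linalg.norm(offsets - np.outer(offsets @ direction, direction), axis=1).mean()

# Synthetic demonstration path: a hinged door swinging through an arc, plus noise.
rng = np.random.default_rng(0)
angles = np.linspace(0.0, 0.8, 50)
path = np.column_stack([0.4 * np.cos(angles), 0.4 * np.sin(angles)]) + rng.normal(0, 1e-3, (50, 2))

errors = {"revolute": fit_revolute(path), "prismatic": fit_prismatic(path)}
print(errors, "->", min(errors, key=errors.get))
```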
  3. As autonomous robots interact and navigate around real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works in object articulation identification require manipulation of the object, either by the robot or a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models or a sequence of observations in which the articulated parts move according to their kinematic constraints. In this work, we propose FormNet, a neural network that identifies the articulation mechanisms between pairs of object parts from a single frame of an RGB-D image and segmentation masks. The network is trained on 100k synthetic images of 149 articulated objects from 6 categories. Synthetic images are rendered via a photorealistic simulator with domain randomization. Our proposed model predicts motion residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation type classification accuracy of 82.5% on novel object instances in trained categories. Experiments also show how this method enables generalization to novel categories and can be applied to real-world images without fine-tuning.
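A hedged sketch related to item 3 above: one simple way to turn a predicted per-point motion flow of an object part into an articulation type is to fit a rigid motion (Kabsch) between the part's points and their flowed positions and inspect the recovered rotation. FormNet's actual pipeline (residual flow prediction plus parameter estimation) is richer; the threshold and data below are made up for illustration.

```python
# Classify an articulation as prismatic or revolute from a per-point flow field.
import numpy as np

def classify_articulation(points, flow, rot_thresh_deg=2.0):
    src, dst = points, points + flow
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)      # Kabsch rigid-motion fit
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # fix an improper (reflected) fit
        Vt[-1] *= -1
        R = Vt.T @ U.T
    angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1)))
    return "prismatic" if angle < rot_thresh_deg else "revolute"

pts = np.random.rand(200, 3)
print(classify_articulation(pts, np.tile([0.05, 0.0, 0.0], (200, 1))))   # pure translation
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(classify_articulation(pts, pts @ Rz.T - pts))                      # rotation about z
```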
  4. This paper presents mobility modes and control methods for the SnoWorm, a passively-articulated multi-segment autonomous wheeled vehicle concept for use in Earth’s polar regions. SnoWorm is based on FrostyBoy, a four-wheeled GPS-guided rover built for autonomous surveys across ice sheets. Data collected from FrostyBoy were used to ground-truth a ROS/Gazebo model of vehicle-terrain interaction for simulations on snow surfaces. The first mobility mode, inchworm movement, uses active prismatic joints that link the SnoWorm’s segments and allow them to push and pull one another. This pushing and pulling of individual segments can be coordinated to allow forward motion through terrain that would immobilize a single-segment vehicle. The second mobility mode utilizes fixed links between SnoWorm’s segments and uses the tension or compression measured in these links as a variable to control wheel speeds and achieve a targeted force distribution within the multi-segment vehicle. This ability to control force distribution can be used to distribute a towed load evenly across the entire SnoWorm. Alternatively, the proportion of the load carried by individual segments can be increased or decreased as needed based on each segment’s available drawbar pull or wheel slip.
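An illustrative sketch of the second mobility mode in item 4 above: use measured tension or compression in the inter-segment links as feedback to trim each segment's wheel speed toward a targeted force distribution. The gain, sign conventions, and toy numbers below are assumptions for this example, not values from the paper.

```python
# Toy proportional force-distribution controller for an n-segment wheeled train.
import numpy as np

def trim_wheel_speeds(base_speed, link_forces, target_forces, k_p=0.002):
    """Return per-segment wheel speed commands.

    link_forces[i] is the measured force in the link between segment i and
    segment i+1 (positive = tension). When a link carries more tension than
    targeted, the trailing segment speeds up to take more of the load and the
    leading segment slows down, and vice versa.
    """
    n = len(link_forces) + 1
    speeds = np.full(n, base_speed, dtype=float)
    error = np.asarray(link_forces) - np.asarray(target_forces)
    for i, e in enumerate(error):
        speeds[i]     -= k_p * e   # segment ahead of the link
        speeds[i + 1] += k_p * e   # segment behind the link
    return speeds

# Three segments, two links: the rear link carries more tension than desired.
print(trim_wheel_speeds(base_speed=1.0,
                        link_forces=[120.0, 260.0],
                        target_forces=[150.0, 150.0]))
```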
  5. We propose a new policy class, Composable Interaction Primitives (CIPs), specialized for learning sustained-contact manipulation skills like opening a drawer, pulling a lever, turning a wheel, or shifting gears. CIPs have two primary design goals: to minimize what must be learned by exploiting structure present in the world and the robot, and to support sequential composition by construction, so that learned skills can be used by a task-level planner. Using an ablation experiment in four simulated manipulation tasks, we show that the structure included in CIPs substantially improves the efficiency of motor skill learning. We then show that CIPs can be used for plan execution in a zero-shot fashion by sequencing learned skills. We validate our approach on real robot hardware by learning and sequencing two manipulation skills.
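A minimal, made-up sketch of the composability idea in item 5 above: each skill exposes an initiation check and expected effects so a task-level planner can chain skills whose preconditions match the previous skill's effects. The actual CIP structure (grasp controllers, learned motor policies, termination conditions) is richer; the symbolic state and all names below are purely illustrative.

```python
# Chain skills whose preconditions match the effects of the previous skill.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, bool]

@dataclass
class InteractionPrimitive:
    name: str
    preconditions: Dict[str, bool]    # initiation set (symbolic stand-in)
    effects: Dict[str, bool]          # expected post-conditions
    policy: Callable[[State], None] = field(default=lambda s: None)

    def can_start(self, state: State) -> bool:
        return all(state.get(k, False) == v for k, v in self.preconditions.items())

    def execute(self, state: State) -> State:
        self.policy(state)            # run the learned motor skill (stubbed here)
        return {**state, **self.effects}

def run_plan(skills: List[InteractionPrimitive], state: State) -> State:
    for skill in skills:
        assert skill.can_start(state), f"{skill.name}: preconditions not met"
        state = skill.execute(state)
    return state

grasp_handle = InteractionPrimitive("grasp_handle", {"hand_free": True},
                                    {"hand_free": False, "grasped": True})
pull_drawer = InteractionPrimitive("pull_drawer", {"grasped": True},
                                   {"drawer_open": True})
print(run_plan([grasp_handle, pull_drawer], {"hand_free": True}))
```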