

Title: Parts beget parts: Bootstrapping hierarchical object representations through visual statistical learning
Award ID(s):
1655300
PAR ID:
10281092
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Cognition
Volume:
209
Issue:
C
ISSN:
0010-0277
Page Range / eLocation ID:
104515
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In this paper, we describe a comprehensive approach to pricing analytics for reusable resources in the context of rotable spare parts, which are parts that can be repeatedly repaired and resold. Working in collaboration with a major aircraft manufacturer, we aim to instill a new pricing culture and develop a scalable new pricing methodology. Pricing rotable spare parts presents unique challenges ranging from complex inventory dynamics and minimal demand information to limited data availability. We develop a novel pricing analytics approach that tackles all of these challenges and that can be applied across all rotable spare parts. We then describe a large-scale implementation of our approach with our industrial partner, which led to an improvement in profits of over 3.9% over a 10-month period. 
  2.
    As autonomous robots interact with and navigate around real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works in object articulation identification require manipulation of the object, either by the robot or a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models or a sequence of observations in which the articulated parts move according to their kinematic constraints. In this work, we propose FormNet, a neural network that identifies the articulation mechanisms between pairs of object parts from a single frame of an RGB-D image and segmentation masks. The network is trained on 100k synthetic images of 149 articulated objects from 6 categories. Synthetic images are rendered via a photorealistic simulator with domain randomization. Our proposed model predicts motion residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation type classification accuracy of 82.5% on novel object instances in trained categories. Experiments also show how this method enables generalization to novel categories and can be applied to real-world images without fine-tuning. 