Title: Recognizing Orientation Slip in Human Demonstrations
Manipulations of a constrained object often use a non-rigid grasp that allows the object to rotate relative to the end effector. This orientation slip strategy is often present in natural human demonstrations, yet it is generally overlooked by methods that identify constraints from such demonstrations. In this paper, we present a method to model and recognize prehensile orientation slip in human demonstrations of constrained interactions. Using only observations of an end effector, we detect the type of constraint, its parameters, and the orientation slip properties. Our approach uses a novel hierarchical model selection procedure informed by multiple sources of physics-based evidence. A study with eight participants shows that orientation slip occurs in natural demonstrations and confirms that it can be detected by our method.
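A minimal, hypothetical sketch of the model-selection idea in the abstract above: fit a revolute (hinge) position constraint to end-effector positions, then compare a rigid-grasp orientation hypothesis against an orientation-slip hypothesis using residuals and a BIC-style penalty. The function names, scoring rule, and synthetic hinged-door data are illustrative assumptions, not the paper's implementation.

import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) circle fit: returns the center and radius of 2-D points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    sol = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    radius = np.sqrt(sol[2] + cx**2 + cy**2)
    return np.array([cx, cy]), radius

def orientation_hypotheses(ee_yaw, hinge_angle):
    """Residuals for two grasp hypotheses about the end-effector yaw."""
    # Rigid grasp: yaw tracks the hinge angle up to a constant offset.
    rigid = ee_yaw - hinge_angle - np.mean(ee_yaw - hinge_angle)
    # Orientation slip: yaw is decoupled from the hinge (roughly constant).
    slip = ee_yaw - np.mean(ee_yaw)
    return {"rigid": rigid, "slip": slip}

def select_model(residuals, n_params):
    """Pick the hypothesis with the lowest BIC-style score."""
    scores = {}
    for name, res in residuals.items():
        n = len(res)
        scores[name] = n * np.log(np.mean(res**2) + 1e-12) + n_params[name] * np.log(n)
    return min(scores, key=scores.get), scores

if __name__ == "__main__":
    # Synthetic hinged-door demonstration: the hand grips the handle loosely,
    # so its yaw stays nearly constant while the door sweeps 90 degrees.
    theta = np.linspace(0.0, np.pi / 2, 100)
    hinge, radius = np.array([0.5, 0.2]), 0.8
    xy = hinge + radius * np.column_stack([np.cos(theta), np.sin(theta)])
    ee_yaw = 0.1 + 0.02 * np.random.randn(len(theta))  # slipping grasp

    center, r = fit_circle(xy)  # revolute position constraint
    hinge_angle = np.arctan2(xy[:, 1] - center[1], xy[:, 0] - center[0])
    best, scores = select_model(orientation_hypotheses(ee_yaw, hinge_angle),
                                n_params={"rigid": 1, "slip": 1})
    print("estimated hinge:", center, "radius:", r)
    print("selected grasp model:", best, scores)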
Award ID(s):
1830242
NSF-PAR ID:
10340190
Date Published:
Journal Name:
IEEE International Conference on Robotics and Automation (ICRA)
Page Range / eLocation ID:
2790 to 2797
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Many physical tasks such as pulling out a drawer or wiping a table can be modeled with geometric constraints. These geometric constraints are characterized by restrictions on kinematic trajectories and reaction wrenches (forces and moments) of objects under the influence of the constraint. This paper presents a method to infer geometric constraints involving unmodeled objects in human demonstrations using both kinematic and wrench measurements. Our approach takes a recording of a human demonstration and determines what constraints are present, when they occur, and their parameters (e.g. positions). By using both kinematic and wrench information, our methods are able to reliably identify a variety of constraint types, even if the constraints only exist for short durations within the demonstration. We present a systematic approach to fitting arbitrary scleronomic constraint models to kinematic and wrench measurements. Reaction forces are estimated from measurements by removing friction. Position, orientation, force, and moment error metrics are developed to provide systematic comparison between constraint models. By conducting a user study, we show that our methods can reliably identify constraints in realistic situations and confirm the value of including forces and moments in the model regression and selection process. 
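As a minimal illustration of the idea in the abstract above, the sketch below tests a prismatic ("drawer") constraint against both kinds of evidence: the trajectory should lie along the estimated free axis, and the reaction force (with friction already removed) should have no component along that axis. The axis estimator, error metrics, and synthetic data are assumptions made for illustration, not the authors' implementation.

import numpy as np

def prismatic_axis(positions):
    """Estimate the free axis of motion as the dominant direction of the trajectory."""
    centered = positions - positions.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit vector

def kinematic_error(positions, axis):
    """RMS distance of the trajectory from the best-fit line (should be ~0 for prismatic)."""
    centered = positions - positions.mean(axis=0)
    residual = centered - np.outer(centered @ axis, axis)
    return np.sqrt(np.mean(np.sum(residual**2, axis=1)))

def wrench_error(reaction_forces, axis):
    """RMS reaction force along the free axis (should also be ~0 for prismatic)."""
    return np.sqrt(np.mean((reaction_forces @ axis) ** 2))

if __name__ == "__main__":
    # Synthetic drawer pull along x, with sensor noise and a sideways
    # reaction force from the drawer rails.
    t = np.linspace(0.0, 1.0, 200)
    positions = np.column_stack([0.3 * t, np.zeros_like(t), np.zeros_like(t)])
    positions += 0.001 * np.random.randn(*positions.shape)
    forces = np.column_stack([np.zeros_like(t), np.full_like(t, 2.0), np.zeros_like(t)])
    forces += 0.05 * np.random.randn(*forces.shape)

    axis = prismatic_axis(positions)
    print("free axis:", np.round(axis, 3))
    print("kinematic error [m]:", kinematic_error(positions, axis))
    print("wrench error [N]:", wrench_error(forces, axis))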
  2. Madden, John D.; Anderson, Iain A.; Shea, Herbert R. (Eds.)
    Ras Labs makes Synthetic Muscle™, a class of electroactive polymer (EAP) based materials and actuators that sense pressure (from gentle touch to high impact), controllably contract and expand at low voltage (1.5 V to 50 V, including battery operation), and attenuate force. We are in the robotics era, but robots have their challenges. Robotic sensing is currently mainly visual, which is useful only up to the point of contact. To understand how an object is being gripped, tactile feedback is needed. When handling fragile objects, a grip that is too tight causes breakage, while a grip that is too loose lets the object slip out of the grasp, also leading to breakage. Rigid robotic grippers using a visual feedback loop can struggle to determine the exact point and quality of contact, and can also suffer a stuttering effect in that loop. By using soft Synthetic Muscle™ based EAP pads as the sensors, immediate feedback was generated at the first point of contact. Because these pads provide a soft, compliant interface, the first point of contact did not apply excessive force, allowing the force applied to the object to be controlled. The EAP sensor could also detect a change in pressure location on its surface, making it possible to detect and prevent slippage by adjusting the grip strength: directional glide signaled possible slippage, so the grip could be tightened slightly, without stutter, owing to both the feedback and the soft, gentle, fingertip-like EAP pads themselves. The soft nature of the EAP fingertip pad also naturally held the gripped object, improving gripping quality over rigid grippers without increasing the applied force. Analogous to finger-like tactile touch, the EAPs, with appropriate coatings and electronics, were positioned as pressure sensors in the fingertip or end-effector regions of robotic grippers. Using Synthetic Muscle™ based EAPs as soft sensors provided sensors that feel like the pads of human fingertips. Basic pressure position and magnitude tests have been successful, with pressure sensitivity down to 0.05 N. Most automation and robots are very strong and very fast and usually need to be partitioned away from humans for safety reasons. For many repetitive tasks that humans perform with delicate or fragile objects, robotics would be beneficial, whether in agriculture, medical surgery, therapeutic or personal care, or in extreme environments that humans cannot enter, including those with incurable contagions. Synthetic Muscle™ was also retrofitted as actuator systems into off-the-shelf robotic grippers and is being considered for novel biomimetic gripper designs operating at low voltages (less than 50 V). This offers biomimetic movement by contracting like human muscle, while also exceeding natural biological capabilities by expanding under reversed electric polarity. Human grasp is gentle yet firm, with tactile feedback. In conjunction with their shape-morphing abilities, these EAPs are also being explored for intrinsic pressure sensing, exploiting the correlation between the mechanical force applied to the EAP and its electronic signature. The robotics field is experiencing phenomenal growth in this fourth phase of the industrial revolution, the robotics era. The combination of Ras Labs' EAP shape-morphing and sensing features promises robotic grippers with human-hand-like control and tactile sensing. This work is expected to advance both robotics and prosthetics, particularly collaborative robotics, allowing humans and robots to work together intuitively, safely, and effectively.
  3. We present a closed-loop multi-arm motion planner that is scalable and flexible with team size. Traditional multi-arm robotic systems have relied on centralized motion planners, whose run times often scale exponentially with team size and which therefore fail to handle dynamic environments under open-loop control. In this paper, we tackle this problem with multi-agent reinforcement learning, where a shared policy network is trained to control each individual robot arm to reach its target end-effector pose given observations of its workspace state and target end-effector pose. The policy is trained using Soft Actor-Critic with expert demonstrations from a sampling-based motion planning algorithm (i.e., BiRRT). By leveraging classical planning algorithms, we can improve the learning efficiency of the reinforcement learning algorithm while retaining the fast inference time of neural networks. The resulting policy scales sub-linearly and can be deployed on multi-arm systems with variable team sizes. Thanks to the closed-loop and decentralized formulation, our approach generalizes to multi-arm systems with 5-10 arms and to dynamic moving targets (>90% success rate for a 10-arm system), despite being trained only on 1-4 arm planning tasks with static targets.
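A small sketch of the decentralized formulation described in the abstract above: one shared policy network is queried once per arm, so teams of different sizes can be controlled in closed loop at a cost that grows linearly with the number of arms. The observation layout, network sizes, and toy usage are assumptions; SAC training with BiRRT demonstrations is omitted.

import torch
import torch.nn as nn

class SharedArmPolicy(nn.Module):
    """Maps one arm's (workspace state, target end-effector pose) observation to a joint-velocity action."""
    def __init__(self, obs_dim=32, act_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded joint velocities
        )

    def forward(self, obs):
        return self.net(obs)

def control_step(policy, per_arm_obs):
    """One closed-loop step: the same weights are evaluated for every arm's observation."""
    with torch.no_grad():
        return policy(per_arm_obs)  # shape: (num_arms, act_dim)

if __name__ == "__main__":
    policy = SharedArmPolicy()
    for num_arms in (4, 10):              # team size can change at deployment time
        obs = torch.randn(num_arms, 32)   # stacked per-arm observations
        actions = control_step(policy, obs)
        print(num_arms, "arms ->", tuple(actions.shape))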
  4. Designing reward functions is a difficult task in AI and robotics. The complex task of directly specifying all the desirable behaviors a robot needs to optimize often proves challenging for humans. A popular solution is to learn reward functions from expert demonstrations. This approach, however, is fraught with many challenges. Some methods require heavily structured models, for example, reward functions that are linear in some predefined set of features, while others adopt less structured reward functions that may require tremendous amounts of data. Moreover, it is difficult for humans to provide demonstrations on robots with high degrees of freedom, or even to quantify reward values for given trajectories. To address these challenges, we present a preference-based learning approach, where human feedback is in the form of comparisons between trajectories. We do not assume a highly constrained structure on the reward function. Instead, we employ a Gaussian process to model the reward function and propose a mathematical formulation to actively fit the model using only human preferences. Our approach enables us to tackle both the inflexibility and the data-inefficiency problems within a preference-based learning framework. We further analyze our algorithm in comparison to several baselines on reward optimization, where the goal is to find the optimal robot trajectory in a data-efficient way instead of learning the reward function for every possible trajectory. Our results in three different simulation experiments and a user study show that our approach can efficiently learn expressive reward functions for robotic tasks and outperform the baselines in both reward learning and reward optimization.
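A hypothetical sketch of the core modeling idea from the abstract above: place a Gaussian process prior over reward values at a set of candidate trajectories and fit them to pairwise human preferences through a Bradley-Terry likelihood, here by gradient ascent in a dual parameterization toward the MAP values. The kernel, hyperparameters, and toy preferences are illustrative; the paper's actual inference and active query selection are not reproduced.

import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel over trajectory features."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_preference_gp(X, prefs, iters=2000, lr=0.05):
    """Reward values f = K @ a at features X, given preferences (i, j): trajectory i preferred over j."""
    K = rbf_kernel(X)
    a = np.zeros(len(X))                 # dual weights, so the GP prior term stays well conditioned
    for _ in range(iters):
        f = K @ a
        grad_f = np.zeros_like(f)        # gradient of the Bradley-Terry log-likelihood w.r.t. f
        for i, j in prefs:
            p = sigmoid(f[i] - f[j])
            grad_f[i] += 1.0 - p
            grad_f[j] -= 1.0 - p
        a += lr * (K @ (grad_f - a))     # ascend the log posterior in the dual parameterization
    return K @ a

if __name__ == "__main__":
    # Toy example: 1-D trajectory features; the human prefers larger feature values.
    X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
    prefs = [(5, 0), (4, 1), (3, 2), (5, 2)]   # (preferred, not preferred)
    rewards = fit_preference_gp(X, prefs)
    print("learned reward ordering (best first):", np.argsort(-rewards))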
  5. In order for robots to operate effectively in homes and workplaces, they must be able to manipulate the articulated objects common within environments built for and by humans. Kinematic models provide a concise representation of these objects that enable deliberate, generalizable manipulation policies. However, existing approaches to learning these models rely upon visual observations of an object’s motion, and are subject to the effects of occlusions and feature sparsity. Natural language descriptions provide a flexible and efficient means by which humans can provide complementary information in a weakly supervised manner suitable for a variety of different interactions (e.g., demonstrations and remote manipulation). In this paper, we present a multimodal learning framework that incorporates both vision and language information acquired in situ to estimate the structure and parameters that define kinematic models of articulated objects. The visual signal takes the form of an RGB-D image stream that opportunistically captures object motion in an unprepared scene. Accompanying natural language descriptions of the motion constitute the linguistic signal. We model linguistic information using a probabilistic graphical model that grounds natural language descriptions to their referent kinematic motion. By exploiting the complementary nature of the vision and language observations, our method infers correct kinematic models for various multiple-part objects on which the previous state-of-the-art, visual-only system fails. We evaluate our multimodal learning framework on a dataset comprised of a variety of household objects, and demonstrate a 23% improvement in model accuracy over the vision-only baseline.
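To make the vision-and-language fusion idea concrete, the toy sketch below combines visual evidence (fit residuals of revolute versus prismatic models to the observed motion) with linguistic evidence (crude keyword grounding of the description) as independent likelihoods over the kinematic model type. The keyword table and residual model are placeholders, not the paper's probabilistic graphical model.

import numpy as np

MODEL_TYPES = ("revolute", "prismatic")

def visual_likelihood(residuals, sigma=0.01):
    """Gaussian likelihood of each model type given its motion-fit residual (meters)."""
    lik = np.array([np.exp(-0.5 * (residuals[m] / sigma) ** 2) for m in MODEL_TYPES])
    return lik / lik.sum()

def language_likelihood(description):
    """Crude keyword grounding of a natural language description to a model type."""
    words = description.lower().split()
    scores = np.array([
        1.0 + sum(w in words for w in ("rotates", "swings", "hinge", "turns")),
        1.0 + sum(w in words for w in ("slides", "pulls", "drawer", "extends")),
    ])
    return scores / scores.sum()

def fuse(residuals, description):
    """Combine both evidence sources as independent likelihoods and normalize."""
    posterior = visual_likelihood(residuals) * language_likelihood(description)
    return dict(zip(MODEL_TYPES, posterior / posterior.sum()))

if __name__ == "__main__":
    # Occlusion makes the visual fits ambiguous; the description disambiguates.
    residuals = {"revolute": 0.012, "prismatic": 0.013}
    print(fuse(residuals, "the cabinet door swings open on its hinge"))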