

Title: Fabrication, Modeling, and Control of Plush Robots
Abstract — We present a class of tendon-actuated soft robots, which promise to be low-cost and accessible to non-experts. The fabrication techniques we introduce are largely based on traditional techniques for fabricating plush toys, and so we term the robots created using our approach “plush robots.” A plush robot moves by driving internal winches that pull in (or let out) tendons routed through its skin. We provide a forward simulation model for predicting a plush robot’s deformation behavior given some contractions of its internal winches. We also leverage this forward model for use in an interactive control scheme, in which the user provides a target pose for the robot, and optimal contractions of the robot’s winches are automatically computed in real-time. We fabricate two examples to demonstrate the use of our system, and also discuss the design challenges inherent to plush robots.
Award ID(s):
1637853
NSF-PAR ID:
10039433
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the International Conference on Intelligent Robots and Systems
ISSN:
2153-0866
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
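The interactive control scheme described in the abstract reduces to an inverse problem over the forward model: find bounded winch contractions whose predicted pose best matches the user's target. The following is a minimal sketch of that idea, not the authors' implementation — the `forward_model` here is an invented smooth stand-in for the paper's deformation simulation, and the solver is a generic damped Gauss–Newton iteration with a finite-difference Jacobian.

```python
import numpy as np

def forward_model(c):
    """Invented stand-in: predicted pose features from winch contractions c."""
    return np.array([
        np.tanh(c[0]) - 0.3 * c[2],
        np.tanh(c[1]) - 0.3 * c[2],
        0.5 * c[0] + 0.5 * c[1],
        c[2],
    ])

def solve_contractions(target, n_winches=3, damping=0.5, iters=200, eps=1e-5):
    """Contractions in [0, 1] whose predicted pose best matches target."""
    c = np.zeros(n_winches)
    for _ in range(iters):
        f = forward_model(c)
        # Finite-difference Jacobian of the forward model at c
        J = np.zeros((f.size, n_winches))
        for j in range(n_winches):
            dc = np.zeros(n_winches)
            dc[j] = eps
            J[:, j] = (forward_model(c + dc) - f) / eps
        # Damped Gauss-Newton step, clipped because winches can only
        # pull in a bounded amount of tendon
        step, *_ = np.linalg.lstsq(J, target - f, rcond=None)
        c = np.clip(c + damping * step, 0.0, 1.0)
    return c

target = forward_model(np.array([0.4, 0.7, 0.2]))  # a reachable target pose
c_opt = solve_contractions(target)
```

A real-time version would reuse the Jacobian across frames and warm-start from the previous solution, but the structure — forward model in the inner loop of a bounded least-squares solve — is the same.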
More Like this
  1. Contemporary approaches to perception, planning, estimation, and control have allowed robots to operate robustly as our remote surrogates in uncertain, unstructured environments. This progress now creates an opportunity for robots to operate not only in isolation, but also with and alongside humans in our complex environments. Realizing this opportunity requires an efficient and flexible medium through which humans can communicate with collaborative robots. Natural language provides one such medium, and through significant progress in statistical methods for natural-language understanding, robots are now able to interpret a diverse array of free-form navigation, manipulation, and mobile-manipulation commands. However, most contemporary approaches require a detailed, prior spatial-semantic map of the robot’s environment that models the space of possible referents of an utterance. Consequently, these methods fail when robots are deployed in new, previously unknown, or partially-observed environments, particularly when mental models of the environment differ between the human operator and the robot. This paper provides a comprehensive description of a novel learning framework that allows field and service robots to interpret and correctly execute natural-language instructions in a priori unknown, unstructured environments. Integral to our approach is its use of language as a “sensor”—inferring spatial, topological, and semantic information implicit in natural-language utterances and then exploiting this information to learn a distribution over a latent environment model. We incorporate this distribution in a probabilistic, language grounding model and infer a distribution over a symbolic representation of the robot’s action space, consistent with the utterance. We use imitation learning to identify a belief-space policy that reasons over the environment and behavior distributions. 
We evaluate our framework through a variety of different navigation and mobile-manipulation experiments involving an unmanned ground vehicle, a robotic wheelchair, and a mobile manipulator, demonstrating that the algorithm can follow natural-language instructions without prior knowledge of the environment. 
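The "language as a sensor" idea above — treating an utterance as evidence about a latent environment model — can be illustrated with a toy Bayes update. The hypotheses and likelihood values below are invented for illustration; the paper learns these distributions from data rather than hand-coding them.

```python
# Toy sketch: update a belief over candidate environment models given
# an utterance. Hypotheses and likelihoods are invented for illustration.

# Latent environment hypotheses: does the hallway contain a door?
prior = {"door_present": 0.5, "no_door": 0.5}

# P(utterance mentions "through the door" | hypothesis), assumed values
likelihood = {"door_present": 0.9, "no_door": 0.1}

def update(belief, lik):
    """One Bayes update of the belief given observation likelihoods."""
    unnorm = {h: belief[h] * lik[h] for h in belief}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = update(prior, likelihood)
```

Each utterance sharpens the distribution over latent environment models, which the grounding model then uses to interpret the rest of the command.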
    more » « less
  2. Ground robots must traverse unstructured, unprepared terrain and avoid obstacles to complete tasks in real-world applications such as disaster response. When a robot operates in off-road field environments such as forests, its actual behaviors often do not match its expected or planned behaviors, because the characteristics of both the terrain and the robot itself change. The ability to adapt and generate consistent behaviors is therefore essential for maneuverability on unstructured off-road terrain. To address this challenge, we propose a novel method of self-reflective terrain-aware adaptation that enables ground robots to generate consistent controls on unstructured off-road terrain: through self-reflection, the robot more accurately executes its expected behaviors while adapting to varying terrain. To evaluate the method, we conduct extensive experiments using real ground robots, with various functionality changes, over diverse unstructured off-road terrains. The results show that our method enables ground robots to generate consistent navigational behaviors and outperforms previous and baseline techniques.
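One simple way to picture "self-reflection" of this kind (a generic sketch, not the paper's method): fit the mismatch between commanded and executed behavior from logged data, then pre-compensate new commands by inverting the fitted response. The linear terrain response and all numbers below are invented.

```python
import numpy as np

# Invented setup: commanded 2-D velocities vs. what the terrain lets
# the robot actually execute (wheel slip scales forward motion,
# turning is damped), plus sensor noise.
rng = np.random.default_rng(0)
commanded = rng.uniform(-1.0, 1.0, size=(100, 2))
terrain = np.array([[0.7, 0.0],      # unknown true terrain response
                    [0.0, 0.5]])
executed = commanded @ terrain.T + 0.01 * rng.standard_normal((100, 2))

# "Self-reflection": estimate the terrain response from the logs
T_hat, *_ = np.linalg.lstsq(commanded, executed, rcond=None)

def adapt(desired):
    """Pre-compensated command so the executed behavior matches desired."""
    return np.linalg.solve(T_hat.T, desired)

desired = np.array([0.5, 0.3])
compensated = adapt(desired)
```

Issuing `compensated` instead of `desired` makes the executed behavior land near the desired one despite the terrain-induced distortion, which is the "consistent behavior generation" goal in miniature.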

     
  3. The goal of this article is to enable robots to perform robust task execution following human instructions in partially observable environments. A robot’s ability to interpret and execute commands is fundamentally tied to its semantic world knowledge. Commonly, robots use exteroceptive sensors, such as cameras or LiDAR, to detect entities in the workspace and infer their visual properties and spatial relationships. However, semantic world properties are often visually imperceptible. We posit the use of non-exteroceptive modalities including physical proprioception, factual descriptions, and domain knowledge as mechanisms for inferring semantic properties of objects. We introduce a probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes to infer the meaning of instructions and execute the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence. In addition, we provide a method that allows the robot to communicate knowledge dissonance back to the human as a means of correcting errors in the operator’s world model. Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding and generation. We present experiments on manipulators for tasks that require inference over partially observed semantic properties, and evaluate our framework’s ability to exploit expressed information and knowledge bases to facilitate convergence, and generate statements to correct declared facts that were observed to be inconsistent with the robot’s estimate of object properties. 
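The cumulative belief over a latent, visually imperceptible attribute can be pictured as accumulating log-odds from each modality, which also shows the claimed robustness: one contradictory observation shifts, but does not overturn, a strong belief. The evidence values below are invented for illustration and assume conditionally independent observations.

```python
import math

def fuse(prior_p, observations):
    """Fuse evidence about a binary latent attribute (e.g. 'bottle is full').

    observations: list of (P(obs | attribute true), P(obs | attribute false)).
    Returns the posterior probability that the attribute is true.
    """
    logodds = math.log(prior_p / (1.0 - prior_p))
    for p_true, p_false in observations:
        logodds += math.log(p_true / p_false)  # accumulate log-likelihood ratios
    return 1.0 / (1.0 + math.exp(-logodds))

evidence = [
    (0.8, 0.2),  # language: the operator said "the full bottle"
    (0.5, 0.5),  # vision: the bottle is opaque, so uninformative
    (0.9, 0.1),  # haptics: lifting torque suggests high mass
    (0.3, 0.7),  # a noisy, contradictory reading
]
belief = fuse(0.5, evidence)
```

Despite the last, contradictory observation, the fused belief stays high — erroneous or noisy evidence degrades gracefully rather than flipping the estimate.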
  4. Contact planning is crucial to the locomotion performance of robots: to properly self-propel forward, it is not only important to determine the sequence of internal shape changes (e.g., body bending and limb shoulder joint oscillation) but also the sequence by which contact is made and broken between the mechanism and its environment. Prior work observed that properly coupling contact patterns and shape changes allows for computationally tractable gait design and efficient gait performance. The state of the art, however, made assumptions, albeit motivated by biological observation, as to how contact and shape changes can be coupled. In this paper, we extend the geometric mechanics (GM) framework to design contact patterns. Specifically, we introduce the concept of “contact space” to the GM framework. By establishing the connection between velocities in shape and position spaces, we can estimate the benefits of each contact pattern change and therefore optimize the sequence of contact patterns. In doing so, we can also analyze how a contact pattern sequence will respond to perturbations. We apply our framework to sidewinding robots and enable (1) effective locomotion direction control and (2) robust locomotion performance as the spatial resolution decreases. We also apply our framework to a hexapod robot with two back-bending joints and show that we can simplify existing hexapod gaits by properly reducing the number of contact state switches (during a gait cycle) without significant loss of locomotion speed. We test our designed gaits with robophysical experiments, and we obtain good agreement between theory and experiments. 
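In geometric mechanics, a local connection maps shape velocity to body velocity, and that connection depends on the contact state. A crude caricature of optimizing in "contact space" — with an entirely invented connection, standing in for the paper's machinery — is to pick, at each phase of a cyclic shape change, the contact pattern yielding the most forward progress:

```python
import itertools
import numpy as np

# Two contact pads, each either lifted (0) or engaged (1)
patterns = list(itertools.product([0, 1], repeat=2))

def body_velocity(pattern, shape_vel):
    """Invented local connection A(contact): body velocity from shape velocity."""
    A = np.array([0.8 * pattern[0] + 0.2, -0.6 * pattern[1] - 0.1])
    return float(A @ shape_vel)

# Shape velocities sampled over one cyclic gait (unit circle in shape space)
phases = [np.array([np.cos(t), np.sin(t)])
          for t in np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)]

# Greedy contact pattern sequence: best forward progress at each phase
best_sequence = [max(patterns, key=lambda p: body_velocity(p, sv))
                 for sv in phases]
progress = sum(body_velocity(p, sv)
               for p, sv in zip(best_sequence, phases))
```

The paper's contribution is precisely the principled version of this: relating velocities in shape and position spaces so the benefit of each contact pattern change can be estimated, and the sequence optimized, rather than chosen greedily from an assumed coupling.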
  5.
    Soft, tip-extending, pneumatic “vine robots” that grow via eversion are well suited for navigating cluttered environments. Two key mechanisms that add to the robot’s functionality are a tip-mounted retraction device that allows the growth process to be reversed, and a tip-mounted camera that enables vision. However, previous designs used rigid, relatively heavy electromechanical retraction devices and external camera mounts, which reduce some advantages of these robots. These designs prevent the robot from squeezing through tight gaps, make it challenging to lift the robot tip against gravity, and require the robot to drag components against the environment. To address these limitations, we present a soft, pneumatically driven retraction device and an internal camera mount that are both lightweight and smaller than the diameter of the robot. The retraction device is composed of a soft, extending pneumatic actuator and a pair of soft clamping actuators that work together in an inch-worming motion. The camera mount sits inside the robot body and is kept at the tip of the robot by two low-friction interlocking components. We present characterizations of our retraction device and demonstrations that the robot can grow and retract through turns, tight gaps, and sticky environments while transmitting live video from the tip. Our designs advance the ability of everting vine robots to navigate difficult terrain while collecting data. 