
Title: Distributed Proprioception of 3D Configuration in Soft, Sensorized Robots via Deep Learning
Creating soft robots with sophisticated, autonomous capabilities requires these systems to possess reliable, on-line proprioception of 3D configuration through integrated soft sensors. We present a framework for predicting a soft robot’s 3D configuration via deep learning using feedback from a soft, proprioceptive sensor skin. Our framework introduces a kirigami-enabled strategy for rapidly sensorizing soft robots using off-the-shelf materials, a general kinematic description for soft robot geometry, and an investigation of neural network designs for predicting soft robot configuration. Even with hysteretic, non-monotonic feedback from the piezoresistive sensors, recurrent neural networks show potential for predicting our new kinematic parameters and, thus, the robot’s configuration. One trained neural network closely predicts steady-state configuration during operation, though complete dynamic behavior is not fully captured. We validate our methods on a trunk-like arm with 12 discrete actuators and 12 proprioceptive sensors. As an essential advance in soft robotic perception, we anticipate our framework will open new avenues towards closed-loop control in soft robotics.
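As a rough illustration of the recurrent-network component described above (not the authors' architecture; the Elman-style cell, layer sizes, and a 6-parameter kinematic output are assumptions introduced here), a forward pass mapping a sequence of 12 sensor readings to kinematic parameters might look like:

```python
import numpy as np

def elman_rnn_forward(X, Wxh, Whh, Why, bh, by):
    """Run a simple Elman RNN over a sensor sequence.

    X: (T, 12) sequence of piezoresistive sensor readings.
    Returns (T, n_out) predicted kinematic parameters per time step.
    """
    h = np.zeros(Whh.shape[0])
    outputs = []
    for x in X:
        # The hidden state carries history, which is what lets a recurrent
        # model cope with hysteretic, non-monotonic sensor feedback.
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        outputs.append(Why @ h + by)          # linear readout of kinematic parameters
    return np.stack(outputs)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 12, 32, 6             # 12 sensors -> 6 parameters (assumed sizes)
params = (rng.normal(0, 0.1, (n_hidden, n_in)),
          rng.normal(0, 0.1, (n_hidden, n_hidden)),
          rng.normal(0, 0.1, (n_out, n_hidden)),
          np.zeros(n_hidden), np.zeros(n_out))
X = rng.normal(size=(50, n_in))               # 50 time steps of sensor feedback
Y = elman_rnn_forward(X, *params)
print(Y.shape)  # (50, 6)
```

In practice the weights would be trained on motion-capture ground truth rather than drawn at random; the sketch only shows the data flow from sensor sequence to configuration estimate.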
Journal Name:
IEEE Robotics and Automation Letters
Sponsoring Org:
National Science Foundation
More Like this
  1. Snake robotics is an important research topic with a wide range of applications, including inspection in confined spaces, search-and-rescue, and disaster response. Snake robots are well-suited to these applications because of their versatility and adaptability to unstructured and constrained environments. In this paper, we introduce a soft pneumatic robotic snake that can imitate the capabilities of biological snakes; its soft body provides flexibility and adaptability to the environment. This paper combines soft mobile robot modeling, proprioceptive feedback control, and motion planning to pave the way for functional soft robotic snake autonomy. We propose a pressure-operated soft robotic snake with a high degree of modularity that makes use of customized embedded flexible curvature sensing. On this platform, we introduce the use of iterative learning control using feedback from the on-board curvature sensors to enable the snake to automatically correct its gait for superior locomotion. We also present a motion planning and trajectory tracking algorithm using an adaptive bounding box, which allows for efficient motion planning that still takes into account the kinematic state of the soft robotic snake. We test this algorithm experimentally and demonstrate its performance in obstacle avoidance scenarios.
  2. The objective of this research is to evaluate vision-based pose estimation methods for on-site construction robots. The prospect of human-robot collaborative work on construction sites introduces new workplace hazards that must be mitigated to ensure safety. Human workers working on tasks alongside construction robots must perceive the interaction to be safe to ensure team identification and trust. Detecting the robot pose in real-time is thus a key requirement in order to inform the workers and to enable autonomous operation. Vision-based (marker-less, marker-based) and sensor-based (IMU, UWB) approaches are the two main methods for estimating robot pose. The marker-based and sensor-based methods require additional preinstalled sensors or markers, whereas the marker-less method only requires an on-site camera system, which is common on modern construction sites. In this research, we develop a marker-less pose estimation system based on a convolutional neural network (CNN) human pose estimation algorithm: stacked hourglass networks. The system is trained with image data collected from a factory setup environment and labels of excavator pose. We use a KUKA robot arm with a bucket mounted on the end-effector to represent a robotic excavator in our experiment. We evaluate the marker-less method and compare the result with the robot’s ground truth pose. The preliminary results show that the marker-less method is capable of estimating the pose of the excavator based on a state-of-the-art human pose estimation algorithm.
  3. Unlike traditional robots, soft robots can intrinsically interact with their environment in a continuous, robust, and safe manner. These abilities, and the new opportunities they open, motivate the development of algorithms that provide reliable information on the nature of environmental interactions and, thereby, enable soft robots to reason about and properly react to external contact events. However, directly extracting such information with integrated sensors remains an arduous task that is further complicated by also needing to sense the soft robot’s configuration. As an alternative to direct sensing, this paper addresses the challenge of estimating contact forces directly from the robot’s posture. We propose a new technique that merges a nominal disturbance observer, a model-based component, with corrections learned from data. The result is an algorithm that is accurate yet sample efficient, and one that can reliably estimate external contact events with the environment. We prove the convergence of our proposed method analytically, and we demonstrate its performance with simulations and physical experiments.
  4. This paper presents methods for placing length sensors on a soft continuum robot joint as well as a novel configuration estimation method that drastically minimizes configuration estimation error. The methods utilized for placing sensors along the length of the joint include a single joint length sensor, sensors lined end-to-end, sensors that overlap according to a heuristic, and sensors that are placed by an optimization that we describe in this paper. The methods of configuration estimation include directly relating sensor length to a segment of the joint's angle, using an equal weighting of overlapping sensors that cover a joint segment, and using a weighted linear combination of all sensors on the continuum joint. The weights for the linear combination method are determined using robust linear regression. Using a kinematic simulation, we show that placing three or more overlapping sensors and estimating the configuration with a linear combination of sensors resulted in a median error of 0.026% of the max range of motion or less. This is a more than 500-fold improvement compared to using a single sensor to estimate the joint configuration. This error was computed across 80 simulated robots of different lengths and ranges of motion. We also found that the fully optimized sensor placement performed only marginally better than the placement of sensors according to the heuristic. This suggests that the use of a linear combination of sensors, with weights found using linear regression, is more important than the placement of the overlapping sensors. Further, using the heuristic significantly simplifies the application of these techniques when designing for hardware.
  5. Continuum robots have strong potential for application in space environments. However, their modeling is challenging in comparison with traditional rigid-link robots. The Kinematic-Model-Free (KMF) robot control method has been shown to be extremely effective in permitting a rigid-link robot to learn approximations of local kinematics and dynamics (“kinodynamics”) at various points in the robot’s task space. These approximations enable the robot to follow various trajectories and even adapt to changes in the robot’s kinematic structure. In this paper, we present the adaptation of the KMF method to a three-section, nine degrees-of-freedom continuum manipulator for both planar and spatial task spaces. Using only an external 3D camera, we show that the KMF method allows the continuum robot to converge to various desired set points in the robot’s task space, avoiding the complexities inherent in solving this problem using traditional inverse kinematics. The success of the method shows that a continuum robot can “learn” enough information from an external camera to reach and track desired points and trajectories, without needing knowledge of exact shape or position of the robot. We similarly apply the method in a simulated example of a continuum robot performing an inspection task on board the ISS.
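The iterative learning control idea in item 1 can be sketched in a few lines. This is a generic first-order ILC update on a toy scalar plant, not the paper's controller; the plant gain, learning gain, and curvature reference below are all invented for illustration:

```python
import numpy as np

def ilc_iterations(plant_gain, L, r, n_iters):
    """First-order ILC: u_{k+1} = u_k + L * e_k, repeated over gait trials.

    r is the desired curvature profile; e_k is the error that would be
    measured by on-board curvature sensors after trial k.
    """
    u = np.zeros_like(r)
    max_errors = []
    for _ in range(n_iters):
        y = plant_gain * u            # toy static plant standing in for the snake gait
        e = r - y                     # curvature tracking error over the trial
        max_errors.append(np.abs(e).max())
        u = u + L * e                 # learning update applied before the next trial
    return max_errors

r = np.sin(np.linspace(0, 2 * np.pi, 100))    # assumed desired curvature profile
errs = ilc_iterations(plant_gain=0.8, L=0.5, r=r, n_iters=10)
print(errs[0], errs[-1])
```

With these gains the per-trial error contracts by a factor |1 - L·g| = 0.6, so the tracking error shrinks across repetitions, which is the mechanism the snake uses to "automatically correct its gait."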
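Item 2's stacked hourglass network outputs one heatmap per keypoint; a common decoding step (assumed here, not necessarily that paper's exact pipeline) takes the argmax of each heatmap to recover pixel coordinates:

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Convert per-joint heatmaps (J, H, W) to (J, 2) pixel coordinates (x, y)."""
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1).argmax(axis=1)   # peak index in each joint's heatmap
    ys, xs = np.unravel_index(flat, (H, W))
    return np.stack([xs, ys], axis=1)

# Toy example: two joints with known peaks placed by hand.
hm = np.zeros((2, 64, 64))
hm[0, 10, 20] = 1.0    # joint 0 peak at (x=20, y=10)
hm[1, 40, 5] = 1.0     # joint 1 peak at (x=5, y=40)
print(decode_heatmaps(hm))  # [[20 10] [ 5 40]]
```

Real systems usually refine the argmax with sub-pixel interpolation, but the hard argmax is enough to show how a heatmap becomes a pose estimate for the excavator's joints.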
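Item 3 combines a model-based disturbance observer with corrections learned from data. A minimal sketch of that structure, with a deliberately biased nominal model and a least-squares polynomial fit standing in for the learned component (every function and coefficient here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def nominal_observer(q):
    """Model-based estimate of external force from posture q (deliberately biased)."""
    return 2.0 * q

def true_force(q):
    """'True' contact force, containing effects the nominal model misses."""
    return 2.0 * q + 0.5 * q**2 + 1.0

# Collect training data: the residual the learned correction must explain.
q_train = rng.uniform(-1, 1, 200)
residual = true_force(q_train) - nominal_observer(q_train)

# Learned correction: least-squares fit on polynomial features of the posture.
Phi = np.stack([np.ones_like(q_train), q_train, q_train**2], axis=1)
w, *_ = np.linalg.lstsq(Phi, residual, rcond=None)

def corrected_observer(q):
    """Nominal model-based estimate plus the data-driven correction."""
    phi = np.array([1.0, q, q**2])
    return nominal_observer(q) + phi @ w

print(corrected_observer(0.3), true_force(0.3))
```

The split mirrors the paper's design choice: the nominal observer supplies physical structure and sample efficiency, while the learned term only has to account for the (smaller) modeling error.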
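Item 4's core estimator is a weighted linear combination of overlapping length sensors with weights fit by regression. A sketch under an assumed sensor geometry, using ordinary least squares where that paper uses robust linear regression:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated continuum joint: each of 3 overlapping length sensors responds
# linearly to the joint angle, with noise (the coefficients are assumptions).
true_coeffs = np.array([1.2, 0.9, 1.5])
angles = rng.uniform(0, np.pi / 2, 500)
lengths = angles[:, None] * true_coeffs + rng.normal(0, 0.01, (500, 3))

# Fit weights so a linear combination of sensor lengths predicts the angle.
X = np.hstack([lengths, np.ones((500, 1))])      # include an intercept term
w, *_ = np.linalg.lstsq(X, angles, rcond=None)

def estimate_angle(sensor_lengths):
    """Weighted linear combination of all length sensors on the joint."""
    return np.append(sensor_lengths, 1.0) @ w

test_angle = 0.7
est = estimate_angle(test_angle * true_coeffs)   # noise-free test reading
print(est)
```

Because every sensor contributes, noise on any single sensor is averaged out, which is the intuition behind the large accuracy gain over single-sensor estimation reported in the abstract.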
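Item 5's kinematic-model-free approach can be illustrated by estimating a local Jacobian purely from observations and using it in a resolved-rate loop. The "camera" function, step size, and probing scheme below are invented stand-ins, not the KMF algorithm itself:

```python
import numpy as np

def observe_tip(u):
    """Unknown 'true' kinematics: camera-observed tip position vs. two inputs."""
    return np.array([np.sin(u[0]) + 0.5 * u[1], np.cos(u[0]) * u[1]])

def local_jacobian(u, eps=1e-4):
    """Estimate the local input-to-task-space map by probing each input."""
    y0 = observe_tip(u)
    J = np.zeros((2, 2))
    for i in range(2):
        du = np.zeros(2)
        du[i] = eps
        J[:, i] = (observe_tip(u + du) - y0) / eps
    return J

# Damped resolved-rate steps toward a task-space set point, using only
# observations of the tip -- no analytic model of the robot's shape.
u = np.array([0.1, 0.1])
target = np.array([0.8, 0.3])
for _ in range(50):
    err = target - observe_tip(u)
    u = u + 0.5 * (np.linalg.pinv(local_jacobian(u)) @ err)
print(observe_tip(u), target)
```

This captures the key point of the abstract: knowing how tip motion responds to inputs locally is enough to reach set points, even when the robot's exact shape is never modeled.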