Soft robots have recently drawn extensive attention thanks to their unique ability to adapt to complicated environments. Soft robots are designed in a variety of shapes aimed at many different applications. However, accurate modeling and control of soft robots is still an open problem due to the complex robot structure and uncertain interaction with the environment. In fact, there is no unified framework for the modeling and control of generic soft robots. In this paper, we present a novel data-driven machine learning method for modeling a cable-driven soft robot. This machine learning algorithm, named deterministic learning (DL), uses soft robot motion data to train a radial basis function neural network (RBFNN). The soft robot motion dynamics are then guaranteed to be accurately identified, represented, and stored as an RBFNN model with converged constant neural network weights. To validate our method, we have built a simulated soft robot almost identical to our real inchworm soft robot and tested the DL algorithm in simulation. Furthermore, a neural network weight combining technique is used to extract and combine useful dynamics information from multiple robot motion trajectories.
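As a rough illustration of the DL idea, the following minimal sketch pairs an RBF-network dynamics identifier with a Lyapunov-style weight update of the kind used in deterministic learning; the state dimension, RBF centers and widths, and the gains a, gamma, and sigma are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of deterministic learning (DL) with an RBF network,
# assuming a 2-D soft-robot state x(t) and its measured derivative is
# implicit in the identifier error. Centers, widths, and gains are
# illustrative choices, not the paper's actual hyperparameters.

centers = np.array(np.meshgrid(np.linspace(-1, 1, 7),
                               np.linspace(-1, 1, 7))).reshape(2, -1).T
width = 0.3                                  # Gaussian RBF width
W_hat = np.zeros((centers.shape[0], 2))      # NN weights, one column per state
x_hat = np.zeros(2)                          # identifier state
a, gamma, sigma, dt = 5.0, 10.0, 1e-3, 1e-3  # identifier/adaptation gains

def rbf(x):
    """Gaussian regressor vector S(x)."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))

def dl_step(x):
    """One Euler step of the dynamics identifier and the weight update law."""
    global x_hat, W_hat
    S = rbf(x)
    e = x_hat - x                                   # state estimation error
    x_hat += dt * (-a * e + W_hat.T @ S)            # identifier dynamics
    W_hat += dt * (-gamma * np.outer(S, e) - sigma * gamma * W_hat)
    return W_hat

# After running dl_step along a (recurrent) motion trajectory, the weights
# converge; their time average gives the constant-weight RBFNN model that
# represents and stores the learned motion dynamics.
```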
Drosophibot: a fruit fly inspired bio-robot
We introduce Drosophibot, a hexapod robot with legs designed based on the common fruit fly, Drosophila melanogaster, built as a test platform for neural control development. The robot models anatomical aspects not present in other, similar bio-robots, such as a retractable abdominal segment, insect-like dynamic scaling, and compliant feet segments, in the hope that more similar biomechanics will lead to more similar neural control and resulting behaviors. By increasing biomechanical modeling accuracy, we aim to gain further insight into the insect's nervous system to inform the current model and subsequent neural controllers for legged robots.
- Award ID(s): 1704436
- PAR ID: 10119320
- Date Published:
- Journal Name: 8th International Conference, Living Machines 2019
- Page Range / eLocation ID: 146-157
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Multi-robot cooperative control has been extensively studied using model-based distributed control methods. However, such control methods rely on sensing and perception modules in a sequential pipeline design, and the separation of perception and control may cause processing latencies and compounding errors that affect control performance. End-to-end learning overcomes this limitation by learning directly from onboard sensing data, with control commands output to the robots. Challenges exist in end-to-end learning for multi-robot cooperative control, and previous results are not scalable. In this article, we propose a novel decentralized cooperative control method for multi-robot formations using deep neural networks, in which inter-robot communication is modeled by a graph neural network (GNN). Our method takes LiDAR sensor data as input, and the control policy is learned from demonstrations provided by an expert controller for decentralized formation control. Although it is trained with a fixed number of robots, the learned control policy is scalable. Evaluation in a robot simulator demonstrates the triangular formation behavior of multi-robot teams of different sizes under the learned control policy.
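As a rough sketch of the communication structure, the snippet below implements one GNN message-passing round over an inter-robot graph followed by a per-robot control head; the feature dimension, random weights, and fully connected graph are illustrative assumptions, and the LiDAR encoder and expert demonstrations are abstracted away.

```python
import numpy as np

# Minimal sketch of one GNN message-passing layer for decentralized
# formation control, assuming each robot i has already encoded its LiDAR
# scan into a feature vector h[i]. The weight matrices and the 2-D velocity
# head are illustrative, not the trained parameters from the paper.

rng = np.random.default_rng(0)
n_robots, feat_dim = 5, 16
W_self = rng.standard_normal((feat_dim, feat_dim)) * 0.1
W_nbr = rng.standard_normal((feat_dim, feat_dim)) * 0.1
W_out = rng.standard_normal((2, feat_dim)) * 0.1      # features -> (vx, vy)

def gnn_step(h, adjacency):
    """One round of neighbor aggregation followed by a local control head.
    h: (n_robots, feat_dim) node features; adjacency: (n, n) 0/1 comm graph."""
    msgs = adjacency @ h                       # sum of neighbor features
    h_new = np.tanh(h @ W_self.T + msgs @ W_nbr.T)
    u = h_new @ W_out.T                        # per-robot velocity command
    return h_new, u

h = rng.standard_normal((n_robots, feat_dim))
A = np.ones((n_robots, n_robots)) - np.eye(n_robots)   # fully connected example
_, commands = gnn_step(h, A)                            # (n_robots, 2) commands
```

Because each robot only aggregates features received from its graph neighbors, the same layer weights can be applied to teams of different sizes, which is one intuition behind the scalability claim.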
In this paper, we present a decentralized control approach based on a Nonlinear Model Predictive Control (NMPC) method that employs barrier certificates for safe navigation of multiple nonholonomic wheeled mobile robots in unknown environments with static and/or dynamic obstacles. This method incorporates a Learned Barrier Function (LBF) into the NMPC design in order to guarantee safe robot navigation, i.e., to prevent collisions with other robots and with obstacles. We refer to our proposed control approach as NMPC-LBF. Since each robot does not have a priori knowledge about the obstacles and other robots, we use a Deep Neural Network (DeepNN) running in real time on each robot to learn the Barrier Function (BF) only from the robot's LiDAR and odometry measurements. The DeepNN is trained to learn the BF that separates safe and unsafe regions. We implemented our proposed method on simulated and actual Turtlebot3 Burger robot(s) in different scenarios. The implementation results show the effectiveness of the NMPC-LBF method at ensuring safe navigation of the robots.
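A minimal sketch of the safety mechanism is given below: a placeholder barrier function gates candidate controls through a discrete-time condition of the form h(x+) >= (1 - gamma) h(x). In the actual method the barrier is a DeepNN learned from LiDAR and odometry, and the condition sits inside the NMPC optimization rather than a simple filter; the obstacle, gains, and unicycle model here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of gating candidate controls with a barrier function for a
# unicycle robot. `barrier` stands in for the learned DeepNN barrier; here it
# is a hand-written placeholder (distance to one known obstacle), and
# gamma/dt are illustrative.

dt, gamma = 0.1, 0.3
obstacle, radius = np.array([2.0, 0.0]), 0.5

def barrier(x):
    """Placeholder for the learned BF: positive inside the safe set."""
    return np.linalg.norm(x[:2] - obstacle) - radius

def unicycle_step(x, u):
    """x = (px, py, theta), u = (v, omega): one Euler step of the model."""
    px, py, th = x
    v, w = u
    return np.array([px + dt * v * np.cos(th),
                     py + dt * v * np.sin(th),
                     th + dt * w])

def is_safe(x, u):
    """Discrete-time barrier condition h(x+) >= (1 - gamma) * h(x)."""
    return barrier(unicycle_step(x, u)) >= (1.0 - gamma) * barrier(x)

# Inside an NMPC loop one would keep only control sequences whose predicted
# states satisfy is_safe at every step, or penalize violations in the cost.
x0 = np.array([0.0, 0.0, 0.0])
candidates = [np.array([1.0, 0.0]), np.array([0.5, 0.8])]
safe_controls = [u for u in candidates if is_safe(x0, u)]
```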
Recurrent neural networks can be trained to serve as a memory for robots to perform intelligent behaviors when localization is not available. This paper develops an approach to convert a spatial map, represented as a scalar field, into a trained memory represented by a long short-term memory (LSTM) neural network. The trained memory can be retrieved through sensor measurements collected by robots to achieve intelligent behaviors, such as tracking level curves in the map. Memory retrieval does not require robot locations. The retrieved information is combined with sensor measurements through a Kalman filter enabled by the LSTM (LSTM-KF). Furthermore, a level curve tracking control law is designed. Simulation results show that the LSTM-KF and the control law are effective at generating level curve tracking behaviors for single-robot and multi-robot teams.
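The fusion step can be sketched as a scalar Kalman update that blends the LSTM-retrieved field value with the onboard measurement; the LSTM itself is abstracted away here, and the variances are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of the fusion idea behind an LSTM-enabled Kalman filter:
# the trained LSTM memory returns a predicted field value from the recent
# measurement history (no robot location needed), and a scalar Kalman update
# blends it with the latest noisy onboard measurement. Variances P and R are
# illustrative.

def lstm_kf_update(z_lstm, P, z_meas, R):
    """Blend LSTM-retrieved prediction z_lstm (variance P) with a sensor
    measurement z_meas (variance R); returns fused estimate and variance."""
    K = P / (P + R)                      # Kalman gain
    z_fused = z_lstm + K * (z_meas - z_lstm)
    P_fused = (1.0 - K) * P
    return z_fused, P_fused

# Example: the memory predicts a field value of ~0.80, the sensor reads 0.95.
z, P = lstm_kf_update(z_lstm=0.80, P=0.04, z_meas=0.95, R=0.01)
# The fused value z can then drive a level curve tracking control law, e.g.
# steering to reduce the error (z - desired_level).
```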
Robotic search often involves teleoperating vehicles into unknown environments. In such scenarios, prior knowledge of the target location or the environmental map may be a viable resource to tap into to control other autonomous robots in the vicinity toward improved search performance. In this paper, we test the hypothesis that, despite having the same skill, prior knowledge of the target or environment affects teleoperator actions, and that such knowledge can therefore be inferred through robot movement. To investigate whether prior knowledge can improve human-robot team performance, we then evaluate an adaptive mutual-information blending strategy that admits a time-dependent weighting for steering autonomous robots. Human-subject experiments show that several features, including distance travelled by the teleoperated robot, time spent staying still, speed, and turn rate, all depend on the level of prior knowledge, and that the absence of prior knowledge increased workload. Building on these results, we identified distance travelled and time spent staying still as movement cues that can be used to robustly infer prior knowledge. Simulations in which an autonomous robot accompanied a human-teleoperated robot revealed that, whereas time to find the target was similar across all information-based search strategies, adaptive strategies that acted on movement cues found the target sooner than a single human teleoperator more often than non-adaptive strategies did. This gain is diluted as the number of robots grows, likely due to the limited size of the search environment. Results from this work set the stage for developing knowledge-aware control algorithms for autonomous robots in collaborative human-robot teams.
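One way such a time-dependent blend could look is sketched below: a weight w(t) ramps up when movement cues suggest the teleoperator has prior knowledge, and candidate locations for the autonomous robot are scored by a weighted mix of two information terms. The cue thresholds, the assumed direction of the cues, the ramp rate, and the two-term score are all illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of an adaptive, time-weighted blending strategy. The cue
# logic assumes (as an illustration) that a knowledgeable teleoperator
# travels less and idles less before finding the target; the thresholds and
# ramp rate are placeholders.

def knowledge_weight(t, distance_travelled, time_still,
                     dist_thresh=20.0, still_thresh=5.0, rate=0.05):
    """Return w(t) in [0, 1]; larger when movement cues indicate prior
    knowledge, ramping up gradually over time t."""
    cue = float(distance_travelled < dist_thresh and time_still < still_thresh)
    return cue * (1.0 - np.exp(-rate * t))

def blended_score(info_near_teleop, info_coverage, w):
    """Score a candidate location: weight information gathered near the
    teleoperated robot against independent coverage information."""
    return w * info_near_teleop + (1.0 - w) * info_coverage

w = knowledge_weight(t=30.0, distance_travelled=12.0, time_still=2.0)
score = blended_score(info_near_teleop=0.7, info_coverage=0.4, w=w)
```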