In this work, we investigate the problem of level curve tracking in unknown scalar fields using a limited number of mobile robots. We design and implement a long short-term memory (LSTM) enabled control strategy for a mobile sensor network to detect and track desired level curves. Building on existing work on the cooperative Kalman filter, we design an LSTM-enhanced Kalman filter that utilizes the sensor measurements and a sequence of past field values and gradients to estimate the current field value and gradient. We also design an LSTM model to estimate the Hessian of the field. The LSTM-enabled strategy offers two benefits. First, it can be trained offline on a collection of level curves in known fields prior to deployment, so that the trained model enables the mobile sensor network to track level curves in unknown fields for various applications. Second, training can use large computational resources to obtain accurate models, while the deployed mobile sensor network needs only its limited onboard resources in production. Simulation results show that this LSTM-enabled control strategy successfully tracks the level curve using a mobile multi-robot sensor network.
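The cooperative estimation idea above depends on recovering the local field gradient from spatially distributed sensor readings. As a minimal illustration (not the paper's cooperative Kalman filter itself), a first-order Taylor model z_i ≈ z_c + g·(r_i − r_c) can be fit by least squares across the formation; the field, formation geometry, and names below are illustrative assumptions.

```python
import numpy as np

# Toy linear field z = 2x + 3y; its true gradient is (2, 3) everywhere.
def field(p):
    return 2.0 * p[0] + 3.0 * p[1]

# Hypothetical formation: three robots offset from a center point.
center = np.array([1.0, 1.0])
offsets = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, -0.1]])
positions = center + offsets

# First-order Taylor model: z_i - z_c ≈ g . (r_i - r_c).
dz = np.array([field(p) for p in positions]) - field(center)
g, *_ = np.linalg.lstsq(offsets, dz, rcond=None)
# g recovers the gradient [2., 3.] exactly for this linear field
```

For a linear field the fit is exact; in a real unknown field the residual of this fit is what motivates also estimating second-order (Hessian) information.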
Level Curve Tracking without Localization Enabled by Recurrent Neural Networks
Recursive neural networks can be trained to serve as a memory for robots to perform intelligent behaviors when localization is not available. This paper develops an approach to convert a spatial map, represented as a scalar field, into a trained memory represented by the long short-term memory (LSTM) neural network. The trained memory can be retrieved through sensor measurements collected by robots to achieve intelligent behaviors, such as tracking level curves in the map. Memory retrieval does not require robot locations. The retrieved information is combined with sensor measurements through a Kalman filter enabled by the LSTM (LSTM-KF). Furthermore, a level curve tracking control law is designed. Simulation results show that the LSTM-KF and the control law are effective to generate level curve tracking behaviors for single-robot and multi-robot teams.
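The LSTM-KF fusion step described above can be pictured as a standard Kalman measurement update in which the prior comes from the trained memory rather than from a physical process model. The sketch below is a hedged illustration: a naive placeholder predictor stands in for the LSTM, and all names and numbers are assumptions, not the paper's implementation.

```python
def kf_update(x_prior, P_prior, z, R):
    """Standard scalar Kalman measurement update. The prior (x_prior,
    P_prior) is supplied by a learned memory instead of a motion model;
    z is the sensor reading and R its noise variance."""
    K = P_prior / (P_prior + R)            # Kalman gain
    x_post = x_prior + K * (z - x_prior)   # fuse prediction and measurement
    P_post = (1.0 - K) * P_prior           # reduced posterior uncertainty
    return x_post, P_post

# Stand-in for the trained LSTM memory: predicts the next field value
# from the recent measurement history (a real model would be learned).
def lstm_predict(history):
    return history[-1]   # naive persistence prediction as a placeholder

history = [1.0, 1.1, 1.2]
x, P = kf_update(lstm_predict(history), P_prior=0.5, z=1.4, R=0.5)
# with equal prior and measurement variance, the update splits the
# innovation evenly: x = 1.3, P = 0.25
```

The key point the sketch captures is that no robot location enters the update; only measurement history and the current reading are used.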
- PAR ID:
- 10212086
- Date Published:
- Journal Name:
- 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
- Page Range / eLocation ID:
- 759 to 763
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
An immense volume of data is produced by sensor devices in the fields of aquaponics, hydroponics, and soil-based food production, where these devices track various environmental factors. Data stream mining is the method of retrieving data from fast-sampled data sources that are constantly streaming. The accuracy of data obtained through data stream mining is largely determined by the algorithm utilized to filter out noise. For threshold-based automation, an actuator can be activated when the value of sensor data is above a permissible threshold, so noise from sensors may falsely activate the actuator. Several statistical and machine learning-based noise-suppression algorithms have been proposed in the literature and evaluated on the mean squared error (MSE) metric. The long short-term memory (LSTM) filter (MSE: 0.000999943) suppresses noise better than traditional filters such as the Kalman filter (MSE: 0.0015982). We propose a new noise-suppression filter that combines an LSTM with a Kalman filter (LSTM-KF). In the LSTM-KF, the Kalman filter acts as an encoder and the LSTM becomes the decoder, resulting in a significantly lower MSE of 0.000080789592. The LSTM-KF is installed in our threshold-based aquaponics automation to maximize sustainable food production at minimum cost.
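The MSE comparison in the abstract above can be reproduced in miniature with a scalar Kalman filter smoothing a deterministic noisy stream. The reported figures come from the authors' experiments; the signal, noise pattern, and filter parameters below are illustrative assumptions only.

```python
# Minimal scalar Kalman filter smoothing a noisy constant signal.
# Illustrative only: Q, R, and the synthetic data are assumptions.
true_value = 1.0
noise = [0.5 if t % 2 == 0 else -0.5 for t in range(50)]
measurements = [true_value + n for n in noise]

Q, R = 0.01, 0.25            # process and measurement noise variances
x, P = measurements[0], R    # initialize from the first reading
estimates = [x]
for z in measurements[1:]:
    P += Q                   # predict (constant-state model)
    K = P / (P + R)          # Kalman gain
    x += K * (z - x)         # measurement update
    P *= (1.0 - K)
    estimates.append(x)

mse = lambda xs: sum((v - true_value) ** 2 for v in xs) / len(xs)
mse_raw, mse_kf = mse(measurements), mse(estimates)
# the filtered stream tracks the true signal far more closely than
# the raw measurements (mse_raw is exactly 0.25 here)
```

An LSTM decoder, as in the LSTM-KF, would then learn to correct the residual structure that such a fixed-gain filter leaves behind.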
-
Advancements in robotics and AI have increased the demand for interactive robots in healthcare and assistive applications. However, ensuring safe and effective physical human-robot interactions (pHRIs) remains challenging due to the complexities of human motor communication and intent recognition. Traditional physics-based models struggle to capture the dynamic nature of human force interactions, limiting robotic adaptability. To address these limitations, neural networks (NNs) have been explored for force-movement intention prediction. While multi-layer perceptron (MLP) networks show potential, they struggle with temporal dependencies and generalization. Long short-term memory (LSTM) networks effectively model sequential dependencies, while convolutional neural networks (CNNs) enhance spatial feature extraction from human force data. Building on these strengths, this study introduces a hybrid LSTM-CNN framework to improve force-movement intention prediction, increasing accuracy from 69% to 86% through effective denoising and advanced architectures. The combined CNN-LSTM network proved particularly effective in handling individualized force-velocity relationships and presents a generalizable model paving the way for more adaptive strategies in robot guidance. These findings highlight the importance of integrating spatial and temporal modeling to enhance robot precision, responsiveness, and human-robot collaboration. Index Terms—Physical Human-Robot Interaction, Intention Detection, Machine Learning, Long Short-Term Memory (LSTM)
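The hybrid architecture above can be sketched schematically: a 1-D convolution extracts local features from a force window, and a recurrent update carries temporal state across the sequence. The toy code below uses hand-written numpy operations in place of trained CNN and LSTM layers; every weight, kernel, and dimension is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
force_window = rng.normal(size=20)    # synthetic force samples

# "CNN" stage: one 1-D convolution kernel extracting local features.
kernel = np.array([0.25, 0.5, 0.25])  # assumed smoothing kernel
features = np.convolve(force_window, kernel, mode="valid")

# "LSTM" stage: a single recurrent state h_t = tanh(a*h_{t-1} + b*f_t),
# a stand-in for a trained recurrent cell carrying temporal context.
a, b = 0.9, 0.5
h = 0.0
for f in features:
    h = np.tanh(a * h + b * f)

intent = 1 if h > 0 else 0            # toy movement-intention decision
```

The division of labor is the point: the convolution sees only a local neighborhood of samples, while the recurrent state integrates the whole window.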
-
Multi-robot cooperative control has been extensively studied using model-based distributed control methods. However, such control methods rely on sensing and perception modules in a sequential pipeline design, and the separation of perception and controls may cause processing latencies and compounding errors that affect control performance. End-to-end learning overcomes this limitation by implementing direct learning from onboard sensing data, with control commands output to the robots. Challenges exist in end-to-end learning for multi-robot cooperative control, and previous results are not scalable. We propose in this article a novel decentralized cooperative control method for multi-robot formations using deep neural networks, in which inter-robot communication is modeled by a graph neural network (GNN). Our method takes LiDAR sensor data as input, and the control policy is learned from demonstrations that are provided by an expert controller for decentralized formation control. Although it is trained with a fixed number of robots, the learned control policy is scalable. Evaluation in a robot simulator demonstrates the triangular formation behavior of multi-robot teams of different sizes under the learned control policy.
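The inter-robot communication modeled by the GNN amounts to each robot aggregating its neighbors' features through the communication graph before computing its own command. A single message-passing round can be sketched as below; the topology, weights, and feature sizes are illustrative assumptions, not the learned policy.

```python
import numpy as np

# Line-graph communication topology for 4 robots, with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)   # row-normalize: mean aggregation

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))            # per-robot features (e.g. from LiDAR)
W = rng.normal(size=(8, 2))            # assumed learned weights -> 2-D command

# One message-passing round: aggregate neighbor features, apply a
# nonlinearity, then a linear readout to a per-robot control command.
U = np.maximum(A @ X, 0.0) @ W
```

Because each row of A touches only a robot's own neighborhood, the same weights W apply unchanged to teams of any size, which is the source of the scalability claimed above.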
-
Creating soft robots with sophisticated, autonomous capabilities requires these systems to possess reliable, on-line proprioception of 3D configuration through integrated soft sensors. We present a framework for predicting a soft robot's 3D configuration via deep learning using feedback from a soft, proprioceptive sensor skin. Our framework introduces a kirigami-enabled strategy for rapidly sensorizing soft robots using off-the-shelf materials, a general kinematic description for soft robot geometry, and an investigation of neural network designs for predicting soft robot configuration. Even with hysteretic, non-monotonic feedback from the piezoresistive sensors, recurrent neural networks show potential for predicting our new kinematic parameters and, thus, the robot's configuration. One trained neural network closely predicts steady-state configuration during operation, though complete dynamic behavior is not fully captured. We validate our methods on a trunk-like arm with 12 discrete actuators and 12 proprioceptive sensors. As an essential advance in soft robotic perception, we anticipate our framework will open new avenues towards closed loop control in soft robotics.