Title: LSTM-Enabled Level Curve Tracking in Scalar Fields Using Multiple Mobile Robots
In this work, we investigate the problem of level curve tracking in unknown scalar fields using a limited number of mobile robots. We design and implement a long short-term memory (LSTM) enabled control strategy for a mobile sensor network to detect and track desired level curves. Building on existing work on the cooperative Kalman filter, we design an LSTM-enhanced Kalman filter that combines the sensor measurements with a sequence of past field values and gradients to estimate the current field value and gradient. We also design an LSTM model to estimate the Hessian of the field. The LSTM-enabled strategy offers two benefits. First, it can be trained offline on a collection of level curves in known fields prior to deployment, and the trained model then enables the mobile sensor network to track level curves in unknown fields across applications. Second, training can use large computational resources to obtain accurate models, while the deployed mobile sensor network runs with limited onboard resources. Simulation results show that this LSTM-enabled control strategy successfully tracks the level curve using a mobile multi-robot sensor network.
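As a concrete illustration of the estimator described above, the update can be viewed as a standard Kalman measurement step whose prediction is supplied by a learned sequence model. A minimal numpy sketch, where the `predict` callable, the [field, gradient] state layout, and the matrices `H`, `Q`, `R` are all illustrative assumptions rather than the paper's design:

```python
import numpy as np

def lstm_kf_update(predict, history, z, H, Q, R, P):
    """One step of an LSTM-enhanced Kalman filter (illustrative sketch).

    predict -- callable standing in for the trained LSTM: maps a sequence
               of past states [field, grad_x, grad_y] to a prior estimate.
    z       -- stacked measurements from the robot formation.
    H, Q, R -- measurement matrix, process noise, measurement noise.
    P       -- covariance of the previous estimate.
    """
    x_prior = predict(history)            # the LSTM replaces the process model
    P_prior = P + Q                       # inflate for prediction uncertainty
    S = H @ P_prior @ H.T + R             # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```

A persistence predictor (`lambda h: h[-1]`) recovers a conventional random-walk filter; the paper's LSTM instead learns the field's behavior offline from level curves in known fields.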

 
Award ID(s):
1917300
NSF-PAR ID:
10350943
Author(s) / Creator(s):
Date Published:
Journal Name:
ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
Volume:
7
Page Range / eLocation ID:
DETC2021-68554, V007T07A051
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Recurrent neural networks can be trained to serve as a memory for robots to perform intelligent behaviors when localization is not available. This paper develops an approach to convert a spatial map, represented as a scalar field, into a trained memory represented by the long short-term memory (LSTM) neural network. The trained memory can be retrieved through sensor measurements collected by robots to achieve intelligent behaviors, such as tracking level curves in the map. Memory retrieval does not require robot locations. The retrieved information is combined with sensor measurements through a Kalman filter enabled by the LSTM (LSTM-KF). Furthermore, a level curve tracking control law is designed. Simulation results show that the LSTM-KF and the control law effectively generate level curve tracking behaviors for single-robot and multi-robot teams.
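The level curve tracking behavior can be sketched with a generic control law: drive along the tangent of the estimated curve while a proportional term cancels the field error. The gains and the planar 2D setup are illustrative assumptions, not the control law designed in the paper:

```python
import numpy as np

def level_curve_control(c_hat, grad_hat, c_des, k=1.0, speed=1.0):
    """Velocity command for tracking the level curve {c = c_des} (sketch).

    Moves along the tangent (perpendicular to the estimated gradient) while
    a proportional term pushes the field error c_hat - c_des to zero.
    The gains k and speed are illustrative, not taken from the paper.
    """
    g = grad_hat / (np.linalg.norm(grad_hat) + 1e-9)  # unit gradient
    tangent = np.array([-g[1], g[0]])                 # 90-degree rotation
    correction = -k * (c_hat - c_des) * g             # steer toward the curve
    return speed * tangent + correction
```

Simulating this law on a known field, e.g. c(x) = ||x||² with c_des = 1, drives the robot onto the unit circle and then circulates along it.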
  2. This article presents an online parameter identification scheme for advection-diffusion processes using data collected by a mobile sensor network. The advection-diffusion equation is incorporated into the information dynamics associated with the trajectories of the mobile sensors. A constrained cooperative Kalman filter is developed to provide estimates of the field values and gradients along the trajectories of the mobile sensors so that the temporal variations in the field values can be estimated. This leads to a co-design scheme for state estimation and parameter identification for advection-diffusion processes that is different from comparable schemes using sensors installed at fixed spatial locations. Using state estimates from the constrained cooperative Kalman filter, a recursive least-square (RLS) algorithm is designed to estimate unknown model parameters of the advection-diffusion processes. Theoretical justifications are provided for the convergence of the proposed cooperative Kalman filter by deriving a set of sufficient conditions regarding the formation shape and the motion of the mobile sensor network. Simulation and experimental results show satisfactory performance and demonstrate the robustness of the algorithm under realistic uncertainties and disturbances. 
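The parameter identification step can be illustrated with a generic recursive least-squares update; in the paper the regressor would be assembled from the constrained cooperative Kalman filter's state estimates, which is not reproduced here:

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting (sketch).

    Fits theta in y_t = phi_t . theta + noise; in the paper's setting,
    the regressor phi_t would be built from the cooperative Kalman
    filter's estimates of field values and gradients along the sensor
    trajectories, and theta would hold the advection-diffusion parameters.
    """
    def __init__(self, n, lam=0.99, p0=1e3):
        self.theta = np.zeros(n)     # parameter estimate
        self.P = p0 * np.eye(n)      # inverse information matrix
        self.lam = lam               # forgetting factor
    def update(self, phi, y):
        Pphi = self.P @ phi
        g = Pphi / (self.lam + phi @ Pphi)               # gain vector
        self.theta = self.theta + g * (y - phi @ self.theta)
        self.P = (self.P - np.outer(g, Pphi)) / self.lam
        return self.theta
```

The forgetting factor lets the estimator track slow parameter drift at the cost of higher steady-state variance; lam = 1 recovers ordinary recursive least squares.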
  3. In this paper, we present a Long Short-Term Memory (LSTM)-based Kalman filter for data assimilation of a 2D spatio-temporally varying depth-averaged ocean flow field for underwater glider path planning. The data source for the filter combines the Eulerian flow map with the Lagrangian mobile sensor data stream. The depth-averaged flow is modeled as two components: the tidal and the non-tidal flow component. The tidal flow is modeled with ADCIRC (Advanced Three-Dimensional Circulation Model), while the non-tidal flow field is modeled by a set of spatial basis functions and their time series coefficients. The spatial basis functions are the principal modes derived by performing EOF (Empirical Orthogonal Function) analysis on the historical surface flow field measured by high-frequency radar (HFR), and the temporal coefficients of the spatial basis functions are modeled by an LSTM neural network. The Kalman filter combines the dynamics derived from the LSTM network with the observations from the glider flow estimation data. Simulation results demonstrate that the proposed data assimilation method predicts the flow field with reasonable accuracy.
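The EOF step described above can be sketched as an SVD of the snapshot matrix; the grid, snapshot layout, and mode count below are illustrative assumptions:

```python
import numpy as np

def eof_modes(snapshots, k):
    """EOF analysis of flow snapshots via SVD (sketch).

    snapshots -- (T, N) array, one flattened flow map per time step
                 (e.g. historical HF-radar surface currents).
    Returns the temporal mean, k spatial modes (N, k), and the time
    series of their coefficients (T, k) -- the coefficients are what
    the paper models with an LSTM network.
    """
    mean = snapshots.mean(axis=0)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = Vt[:k].T            # principal spatial basis functions
    coeffs = U[:, :k] * s[:k]   # temporal coefficients
    return mean, modes, coeffs
```

A rank-k field reconstruction is then `mean + coeffs @ modes.T`, which is exact whenever the anomalies truly lie in a k-dimensional subspace.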
  4. BACKGROUND Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.
    ADVANCES Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation.
    Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.
    OUTLOOK As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the internet of things.
    Schematic of deep optical sensing: the n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛ^n, ℛ^m, and ℛ^n′.
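For the simplest, fully linear case of the computational-spectroscopy scheme described above, decoding reduces to regularized least squares; the responsivity-matrix model and ridge decoder below are illustrative stand-ins for the reconfigurable devices and neural-network decoders discussed in the Review:

```python
import numpy as np

def reconstruct_spectrum(A, x, alpha=1e-6):
    """Linear decoding step of computational spectroscopy (sketch).

    A     -- (m, n) responsivity matrix: row i is the detector's spectral
             response under operational state i (an assumed linear model).
    x     -- (m,) photoresponse vector measured from the incident beam.
    alpha -- ridge regularization; real systems with nonlinear or entangled
             mappings need deep-network decoders instead of this stand-in.
    """
    n = A.shape[1]
    # Solve (A^T A + alpha I) w = A^T x, the ridge-regularized normal equations.
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ x)
```

With more measurement states than spectral bins (m > n) and a well-conditioned A, the reconstruction is essentially exact; regularization matters once m shrinks or noise grows.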
  5. In this paper, a distributed cooperative filtering strategy for state estimation is developed for mobile sensor networks in a spatially and temporally varying field modeled by the advection–diffusion equation. Sensors are organized into distributed cells that resemble a mesh grid covering a spatial area, and estimates of the field value and gradient at each cell center are obtained by running a constrained cooperative Kalman filter that incorporates the sensor measurements and information from neighboring cells. Within each cell, the finite volume method is applied to discretize and approximate the advection–diffusion equation. These approximations build the weakly coupled relationships between neighboring cells and define the constraints that the cooperative Kalman filters are subject to. With the estimated information, a gradient-based formation control law is developed that enables the sensor network to adjust its formation size using the estimated gradient information. Convergence analysis is conducted for both the distributed constrained cooperative Kalman filter and the formation control. Simulation results with a 9-cell, 12-sensor network validate the proposed distributed filtering method and control law.
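The finite-volume coupling between neighboring cells can be illustrated in one dimension, where each cell updates from the fluxes across its two faces; the upwind/central scheme, boundary treatment, and coefficients here are illustrative, not the paper's 2D formulation:

```python
import numpy as np

def fv_step(c, u, D, dx, dt):
    """One explicit finite-volume step for 1D advection-diffusion (sketch).

    Upwind flux for advection (u >= 0 assumed), central difference for
    diffusion, zero-gradient boundaries. Each cell exchanges flux only
    with its two neighbors, which is the weak coupling that defines the
    constraints between neighboring cells in the paper's 2D scheme.
    """
    cp = np.pad(c, 1, mode="edge")             # ghost cells at both ends
    flux = u * cp[:-1] - D * np.diff(cp) / dx  # flux at each cell face
    return c - dt / dx * np.diff(flux)
```

The explicit step is stable only under the usual CFL-type limits (dt ≤ dx²/(2D) for diffusion, u·dt ≤ dx for advection); with D = 0 and u·dt = dx the scheme shifts the profile by exactly one cell.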