This content will become publicly available on February 14, 2026

Title: Adaptive formation learning control for cooperative AUVs under complete uncertainty
Introduction: This paper addresses the critical need for adaptive formation control of Autonomous Underwater Vehicles (AUVs) without requiring knowledge of the system dynamics or environmental data. Current methods often assume partial knowledge, such as a known mass matrix, which limits adaptability across varied settings.
Methods: The proposed two-layer framework treats all system dynamics, including the mass matrix, as entirely unknown, achieving configuration-agnostic control applicable to multiple underwater scenarios. The first layer features a cooperative estimator for inter-agent communication that operates independently of global data, while the second employs a decentralized deterministic learning (DDL) controller that uses local feedback for precise trajectory control. The framework's radial basis function neural networks (RBFNNs) store the learned dynamic information, eliminating the need for relearning after system restarts.
Results: This robust approach internally addresses uncertainties arising from unknown parameter values and unmodeled interactions, as well as external disturbances such as varying water currents and pressures, enhancing adaptability across diverse environments.
Discussion: Comprehensive and rigorous mathematical proofs confirm the stability of the proposed controller, while simulation results validate each agent's control accuracy and signal boundedness, confirming the framework's stability and resilience in complex scenarios.
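As a rough illustration of the DDL layer described above, the sketch below pairs a radial basis function network, whose weights are adapted online, with simple local error feedback for a single control channel. The class name, gains, and update law are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class RBFNNAdaptiveChannel:
    """Single-channel sketch: RBFNN weights adapt online to compensate
    unknown dynamics, while a proportional term acts on the tracking error."""

    def __init__(self, centers, width=0.5, k_fb=2.0, gamma=5.0, sigma=0.01):
        self.centers = np.asarray(centers, dtype=float)  # RBF centers over the state range
        self.width = width                    # shared Gaussian width
        self.W = np.zeros_like(self.centers)  # adaptive output weights
        self.k_fb = k_fb                      # feedback gain
        self.gamma = gamma                    # adaptation gain
        self.sigma = sigma                    # leakage term keeping weights bounded

    def phi(self, x):
        # Gaussian radial basis functions evaluated at the current state
        return np.exp(-(x - self.centers) ** 2 / (2.0 * self.width ** 2))

    def step(self, x, x_ref, dt):
        e = x - x_ref                         # local tracking error (local feedback only)
        phi = self.phi(x)
        u = -self.k_fb * e - self.W @ phi     # feedback plus NN compensation
        # Gradient-like weight update with sigma-modification (leakage)
        self.W += dt * (self.gamma * e * phi - self.sigma * self.W)
        return u
```

Saving the adapted weights `self.W` after a run corresponds, loosely, to the paper's point that the RBFNN retains the learned dynamics so relearning is unnecessary after a restart.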
Award ID(s):
2154901
PAR ID:
10626838
Author(s) / Creator(s):
Publisher / Repository:
Frontiers in Robotics and AI
Date Published:
Journal Name:
Frontiers in Robotics and AI
Volume:
11
ISSN:
2296-9144
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The integration of machine learning in power systems, particularly in stability and dynamics, addresses challenges brought by the growth of renewable energies and distributed energy resources (DERs). Traditional methods for power system transient stability, which involve solving differential equations numerically, are limited by their time-consuming and computationally demanding nature. This paper introduces physics-informed neural networks (PINNs) as a promising solution to these challenges, especially in scenarios with limited data availability and a need for high computational speed. PINNs offer a novel approach for complex power systems by incorporating additional equations and adapting to various system scales, from a single bus to multi-bus networks. Our study presents the first comprehensive evaluation of PINNs in the context of power system transient stability across varying grid complexities. Additionally, we introduce a novel approach for adjusting loss weights to improve the adaptability of PINNs to diverse systems. Our experimental findings show that PINNs can be scaled efficiently while maintaining high accuracy, and that they significantly outperform the traditional ode45 method, with a speed advantage that grows as the system size increases.
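As an assumption-laden sketch of the PINN idea in this setting, the block below fits a small network to the classical single-machine swing equation by penalizing the physics residual at collocation points; a single tunable loss weight stands in for the paper's weight-adjustment scheme, which is not shown. The architecture and all parameter values are illustrative.

```python
import torch

# Swing equation (assumed form): M*delta'' + D*delta' = Pm - Pmax*sin(delta)
M, D, Pm, Pmax = 0.1, 0.05, 0.8, 1.0   # per-unit machine parameters (assumed)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def physics_residual(t):
    # t: (N, 1) collocation points with gradients enabled
    delta = net(t)
    d1 = torch.autograd.grad(delta, t, torch.ones_like(delta), create_graph=True)[0]
    d2 = torch.autograd.grad(d1, t, torch.ones_like(d1), create_graph=True)[0]
    return M * d2 + D * d1 - (Pm - Pmax * torch.sin(delta))

t_col = torch.linspace(0.0, 5.0, 200).reshape(-1, 1).requires_grad_(True)
w_phys = 1.0   # physics loss weight; the paper adapts such weights per system
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    ic_loss = (net(torch.zeros(1, 1)) - 0.1).pow(2).mean()   # delta(0) = 0.1 rad (assumed)
    phys_loss = physics_residual(t_col).pow(2).mean()
    loss = ic_loss + w_phys * phys_loss
    loss.backward()
    opt.step()
```

Comparing such a trained network against a conventional integrator like ode45 on the same trajectory is, roughly, the kind of speed and accuracy comparison the abstract reports.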
  2. The majority of past research on lane-changing controller design for autonomous vehicles (AVs) is based on the assumption of full knowledge of the model dynamics of the AV and the surrounding vehicles. In the real world, however, this is not a realistic assumption, as accurate dynamic models are difficult to obtain and the model parameters may change over time due to various factors. Thus, there is a need for a learning-based lane-change controller design methodology that can learn the optimal control policy in real time from sensor data. In this paper, we address this need by introducing an optimal learning-based control methodology that solves the real-time lane-changing problem of AVs, where the input-state data of the AV is used to generate a near-optimal lane-changing controller via an approximate/adaptive dynamic programming (ADP) technique. In this type of complex lane-changing maneuver, the lateral dynamics depend on the longitudinal velocity of the vehicle. If the longitudinal velocity is assumed constant, a linear parameter-invariant model can be used; however, assuming constant velocity during a lane change is not realistic and may increase the risk of accidents, especially during a lane abortion when the surrounding vehicles are not cooperative. The dynamics of the AV are therefore modeled as a linear parameter-varying system, leaving two challenges for the controller design: parameter variation and unknown dynamics. By combining gain scheduling with ADP, a learning-based control algorithm is proposed that can generate a near-optimal lane-changing controller without knowing an accurate dynamic model of the AV. The inclusion of gain scheduling with ADP makes the controller applicable to nonlinear and/or parameter-varying AV dynamics. The stability of the learning-based gain-scheduling controller is rigorously proved. Moreover, a data-driven lane-changing decision-making algorithm is introduced that makes the AV perform a lane abortion if safety conditions are violated during a lane change. Finally, the proposed learning-based gain-scheduling controller design algorithm and the lane-changing decision-making methodology are numerically validated using MATLAB, SUMO simulations, and the NGSIM dataset.
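The block below is a minimal, self-contained sketch of the model-free LQR policy-iteration step that underlies many ADP controllers: a quadratic Q-function is fit to input-state data by least squares and the feedback gain is improved from it. The plant matrices appear only to generate data; the gain-scheduling and lane-change decision layers described above are not shown, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # unknown to the learner (simulation only)
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)            # known stage cost
n, m = 2, 1

# Collect data under an initial stabilizing gain plus exploration noise
K = np.array([[0.5, 1.0]])
X, U, Xn = [], [], []
x = np.array([1.0, -1.0])
for _ in range(400):
    u = -K @ x + 0.5 * rng.standard_normal(m)
    xn = A @ x + B @ u
    X.append(x); U.append(u); Xn.append(xn)
    x = xn if np.linalg.norm(xn) < 10 else rng.standard_normal(n)
X, U, Xn = map(np.array, (X, U, Xn))

def fit_H(K):
    """Least-squares fit of Q(x,u) = [x;u]^T H [x;u] for the policy u = -Kx."""
    Phi, y = [], []
    for xk, uk, xk1 in zip(X, U, Xn):
        z = np.concatenate([xk, uk])
        zn = np.concatenate([xk1, -K @ xk1])
        Phi.append(np.kron(z, z) - np.kron(zn, zn))   # Bellman-equation regressor
        y.append(xk @ Qc @ xk + uk @ Rc @ uk)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = theta.reshape(n + m, n + m)
    return 0.5 * (H + H.T)

for _ in range(10):                       # policy iteration
    H = fit_H(K)
    Hux, Huu = H[n:, :n], H[n:, n:]
    K = np.linalg.solve(Huu, Hux)         # improved gain: u = -K x

print("learned gain:", K)
```

A gain-scheduled version would, roughly, repeat this kind of learning at several longitudinal velocities and interpolate the resulting gains; that layer is omitted here.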
  3. Using the context of trajectory estimation and tracking for multirotor unmanned aerial vehicles (UAVs), we explore the challenges in applying high-gain observers to highly dynamic systems. The multirotor will operate in the presence of external disturbances and modeling errors. At the same time, the reference trajectory is unknown and generated from a reference system with unknown or partially known dynamics. We assume the only measurements that are available are the position and orientation of the multirotor and the position of the reference system. We adopt an extended high-gain observer (EHGO) estimation framework to estimate the unmeasured multirotor states, modeling errors, external disturbances, and the reference trajectory. We design a robust output feedback controller for trajectory tracking that comprises a feedback linearizing controller and the EHGO. The proposed control method is rigorously analyzed to establish its stability properties. Finally, we illustrate our theoretical results through numerical simulation and experimental validation in which a multirotor tracks a moving ground vehicle with an unknown trajectory and dynamics and successfully lands on the vehicle while in motion.
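A minimal sketch of an extended high-gain observer, assuming a double-integrator plant y'' = u + d with only the position measured: the extended state tracks the lumped disturbance/model-error term so the controller can cancel it. All parameter values are illustrative and this is not the paper's multirotor design.

```python
import numpy as np

eps = 0.01                      # small high-gain parameter
a1, a2, a3 = 3.0, 3.0, 1.0      # places the observer error poles at -1/eps (triple)

def ehgo_step(xhat, y, u, dt):
    """One Euler step of the EHGO for y'' = u + d (position, velocity, disturbance)."""
    x1h, x2h, x3h = xhat
    e = y - x1h                              # output estimation error
    dx1 = x2h + (a1 / eps) * e
    dx2 = x3h + u + (a2 / eps**2) * e
    dx3 = (a3 / eps**3) * e                  # extended state tracks the disturbance
    return np.array([x1h + dt * dx1, x2h + dt * dx2, x3h + dt * dx3])

dt, T = 1e-4, 2.0
x = np.array([0.0, 0.0])                     # true position and velocity
xhat = np.zeros(3)
for k in range(int(T / dt)):
    t = k * dt
    d = 0.5 * np.sin(2 * np.pi * t)          # disturbance, unknown to the observer
    u = -2.0 * xhat[0] - 2.0 * xhat[1] - xhat[2]   # output feedback using the estimates
    x = x + dt * np.array([x[1], u + d])     # plant update (measured output is x[0])
    xhat = ehgo_step(xhat, x[0], u, dt)

print("final estimation error:", x - xhat[:2])
```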
  4. We present a simple model-free control algorithm that robustly learns and stabilizes an unknown discrete-time linear system with full control and state feedback, subject to arbitrary bounded disturbance and noise sequences. The controller does not require any prior knowledge of the system dynamics, disturbances, or noise, yet guarantees robust stability, uniform asymptotic bounds, and uniform worst-case bounds on the state deviation. Rather than the algorithm itself, we would like to highlight the new approach taken towards robust stability analysis, which served as a key enabler in providing the presented stability and performance guarantees. We conclude with simulation results showing that, despite its generality and simplicity, the controller demonstrates good closed-loop performance.
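As a generic illustration of learning to stabilize an unknown discrete-time linear system from state and input data (not the authors' algorithm), the sketch below identifies [A B] by least squares while the loop runs and applies a certainty-equivalence LQR gain; the disturbance is bounded and the true matrices are used only to generate the data.

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[1.1, 0.2], [0.0, 0.9]])   # unknown, slightly unstable
B_true = np.array([[0.0], [1.0]])
n, m = 2, 1

def dlqr(A, B, Q, R, iters=500):
    """Discrete Riccati equation solved by value iteration; returns the gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([1.0, 1.0])
Z, Xn = [], []
K = np.zeros((m, n))                          # no prior knowledge of the dynamics
for k in range(200):
    # Exploratory input early on, certainty-equivalence feedback afterwards
    u = -K @ x + (1.0 if k < 20 else 0.05) * rng.standard_normal(m)
    w = 0.01 * rng.standard_normal(n)         # bounded process disturbance
    xn = A_true @ x + B_true @ u + w
    Z.append(np.concatenate([x, u])); Xn.append(xn)
    if k >= 10:                               # refit the model as data accumulates
        Theta, *_ = np.linalg.lstsq(np.array(Z), np.array(Xn), rcond=None)
        A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]
        K = dlqr(A_hat, B_hat, np.eye(n), np.eye(m))
    x = xn

print("final state norm:", np.linalg.norm(x))
```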
  5. It was shown, in recent work by the authors, that it is possible to learn an asymptotically stabilizing controller from a small number of demonstrations performed by an expert on a feedback linearizable system. Those results rely on knowledge of the plant dynamics to assemble the learned controller from the demonstrations. In this paper we show how to leverage recent results on data-driven control to dispense with the need for a plant model. By bringing together these two methodologies, learning from demonstrations and data-driven control, this paper provides a technique that enables the control of unknown nonlinear feedback linearizable systems based solely on a small number of expert demonstrations.
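The block below is a toy sketch of one ingredient only: fitting a state-feedback law to a handful of expert demonstrations by least squares. The data-driven step that removes the need for a plant model, which is the paper's actual contribution, is not shown; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical expert demonstrations: visited states and the inputs the expert applied
K_expert = np.array([[1.2, 0.8]])                    # unknown to the learner
X_demo = rng.standard_normal((15, 2))                # 15 demonstrated states
U_demo = -(X_demo @ K_expert.T) + 0.01 * rng.standard_normal((15, 1))

# Least-squares fit of a linear law u = -K x from the demonstrations
M, *_ = np.linalg.lstsq(X_demo, -U_demo, rcond=None)
K_fit = M.T

print("expert gain :", K_expert)
print("learned gain:", K_fit)
```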