

Title: Iterative Learning-Based Path Optimization for Repetitive Path Planning, With Application to 3-D Crosswind Flight of Airborne Wind Energy Systems
This paper presents an iterative learning approach for optimizing course geometry in repetitive path-following applications. In particular, we focus on airborne wind energy (AWE) systems. Our proposed algorithm consists of two key features: First, a recursive least squares fit is used to construct an estimate of the behavior of the performance index. Second, an iteration-to-iteration path adaptation law is used to adjust the path shape in the direction of optimal performance. We propose two candidate update laws, both of which parallel the mathematical structure of common iterative learning control (ILC) update laws but replace the tracking-dependent terms with terms based on the performance index. We apply our formulation to the iterative crosswind path optimization of an AWE system, where the goal is to maximize the average power output over a figure-8 path. Using a physics-based AWE system model, we demonstrate that the proposed adaptation strategy successfully achieves convergence to near-optimal figure-8 paths for a variety of initial conditions under both constant and real wind profiles.
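As a rough illustration of the two ingredients named in the abstract (a recursive least squares fit of the performance index, and an iteration-to-iteration path update in the direction of improving performance), here is a minimal single-parameter sketch. The quadratic power map, dither level, noise, and gains below are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the AWE simulation: average cycle power as a
# function of a single path-shape parameter theta (e.g. figure-8 width),
# with measurement noise.  True optimum at theta = 2.0.
def measured_power(theta):
    return 10.0 - (theta - 2.0) ** 2 + 0.02 * rng.standard_normal()

# Recursive least squares fit of J(theta) ~ w0 + w1*theta + w2*theta^2.
w = np.zeros(3)
P = 100.0 * np.eye(3)        # parameter covariance (large = uninformed prior)
lam = 0.98                   # forgetting factor

theta = 0.5                  # initial path parameter
gain = 0.05                  # iteration-to-iteration adaptation gain
for _ in range(300):
    probe = theta + 0.2 * rng.standard_normal()   # small exploration dither
    J = measured_power(probe)
    phi = np.array([1.0, probe, probe ** 2])
    k = P @ phi / (lam + phi @ P @ phi)           # RLS gain
    w = w + k * (J - phi @ w)                     # model update
    P = (P - np.outer(k, phi) @ P) / lam
    # Adaptation law: step the path parameter along the estimated gradient.
    grad = w[1] + 2.0 * w[2] * theta
    theta = float(np.clip(theta + gain * grad, 0.0, 4.0))
```

With the fitted quadratic in hand, the gradient step contracts toward the estimated optimum; the dither keeps the three regressors identifiable.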
Award ID(s):
1913735
NSF-PAR ID:
10112421
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Control Systems Technology
ISSN:
1063-6536
Page Range / eLocation ID:
1 to 13
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Real-time altitude control of airborne wind energy (AWE) systems can improve performance by allowing turbines to track favorable wind speeds across a range of operating altitudes. The current work explores the performance implications of deploying an AWE system with sensor configurations that provide different amounts of data to characterize wind speed profiles. We examine various control objectives that balance trade-offs between exploration and exploitation, and use a persistence model to generate a probabilistic wind speed forecast to inform control decisions. We assess system performance by comparing power production against baselines such as omniscient control and stationary flight. We show that with few sensors, control strategies that reward exploration are favored. We also show that with comprehensive sensing, the implications of choosing a sub-optimal control strategy decrease. This work informs and motivates the need for future research exploring online learning algorithms to characterize vertical wind speed profiles. 
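One way to read item 1's ingredients (a persistence forecast whose uncertainty grows with observation age, and a control objective trading exploration against exploitation) as code. The wind profile, uncertainty growth model, and UCB-style bonus below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: three candidate operating altitudes with unknown,
# steady wind speeds; only the currently occupied altitude is sensed.
altitudes = np.array([200.0, 400.0, 600.0])   # m
true_wind = np.array([6.0, 9.0, 7.0])         # m/s (unknown to the controller)

# Persistence forecast: the last observation at each altitude, with a
# standard deviation that grows with the time since that observation.
last_obs = np.full(3, 7.0)                    # prior guess (m/s)
age = np.zeros(3)                             # steps since last visit
sigma0, growth = 0.5, 0.3
beta = 1.0                                    # exploration weight

visits = np.zeros(3, dtype=int)
for _ in range(100):
    mu, sigma = last_obs, sigma0 + growth * age
    # Wind power scales with v^3; for a Gaussian forecast,
    # E[v^3] = mu^3 + 3*mu*sigma^2.  Add an exploration bonus on sigma.
    score = mu**3 + 3.0 * mu * sigma**2 + beta * sigma
    i = int(np.argmax(score))
    visits[i] += 1
    # Fly at altitude i for one step: observe its wind, reset its age.
    last_obs[i] = true_wind[i] + 0.2 * rng.standard_normal()
    age += 1.0
    age[i] = 0.0
```

The controller settles on the best altitude but periodically revisits the others as their forecast uncertainty grows, mirroring the exploration/exploitation trade-off the abstract describes.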
  2. Summary

Path planning is a fundamental and critical task in many robotic applications. For energy‐constrained robot platforms, path planning solutions are desired with minimum time arrivals and minimal energy consumption. Uncertain environments, such as wind conditions, pose challenges to the design of effective minimum time‐energy path planning solutions. In this article, we develop a minimum time‐energy path planning solution in continuous state and control input spaces using integral reinforcement learning (IRL). To provide a baseline solution for the performance evaluation of the proposed solution, we first develop a theoretical analysis for the minimum time‐energy path planning problem in a known environment using Pontryagin's minimum principle. We then provide an online adaptive solution in an unknown environment using IRL. This is done through transforming the minimum time‐energy problem to an approximate minimum time‐energy problem and then developing an IRL‐based optimal control strategy. Convergence of the IRL‐based optimal control strategy is proven. Simulation studies are developed to compare the theoretical analysis and the proposed IRL‐based algorithm.

     
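Item 2's baseline analysis applies Pontryagin's minimum principle. As a toy of that style of analysis (dropping the energy term and the uncertain wind), the pure minimum-time rest-to-rest transfer of a double integrator is bang-bang with a single switch, and the switch structure can be checked by simulation:

```python
from math import sqrt

# Minimum-time rest-to-rest transfer of a double integrator x'' = u, |u| <= umax.
# Pontryagin's minimum principle yields bang-bang control with one switch:
# full acceleration for T/2, full braking for T/2, where T = 2*sqrt(d/umax).
d, umax = 10.0, 2.0
T = 2.0 * sqrt(d / umax)

# Verify by simulation that the bang-bang law lands at x = d with v = 0.
dt = 1e-4
x, v, t = 0.0, 0.0, 0.0
while t < T:
    u = umax if t < T / 2.0 else -umax
    v += u * dt                      # semi-implicit Euler step
    x += v * dt
    t += dt
```

The full paper treats a combined time-energy cost, for which the optimal control is no longer purely bang-bang; this sketch only illustrates the known-environment baseline in its simplest special case.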
Integer programming (IP) has proven to be highly effective in solving many path-based optimization problems in robotics. However, the applications of IP are generally done in an ad-hoc, problem-specific manner. In this work, after examining a wide range of path-based optimization problems, we describe an IP solution methodology for these problems that is both easy to apply (in two simple steps) and high-performance in terms of the computation time and the achieved optimality. We demonstrate the generality of our approach through the application to three challenging path-based optimization problems: multi-robot path planning (MPP), minimum constraint removal (MCR), and reward collection problems (RCPs). Associated experiments show that the approach can efficiently produce (near-)optimal solutions for problems with large state spaces, complex constraints, and complicated objective functions. In conjunction with the proposition of the IP methodology, we introduce two new and practical robotics problems: multi-robot minimum constraint removal (MMCR) and multi-robot path planning (MPP) with partial solutions, which can be quickly and effectively solved using our proposed IP solution pipeline.
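Item 3 casts problems such as minimum constraint removal (MCR) as integer programs over 0-1 variables. As a self-contained stand-in for an IP solver, the sketch below brute-forces the same 0-1 "remove this obstacle" variables on an invented toy grid; the variable encoding, not the search method, is what is borrowed from the paper's idea:

```python
from collections import deque
from itertools import combinations

# Toy minimum constraint removal (MCR) on a 4x4 grid: find the fewest
# obstacles whose removal makes the goal reachable from the start.
N = 4
start, goal = (0, 0), (3, 3)
obstacles = {                                # invented obstacle sets
    "A": {(0, 1), (1, 1), (2, 1), (3, 1)},   # full vertical wall
    "B": {(0, 3), (1, 3), (2, 3), (1, 2)},   # partial wall near the goal
}

def reachable(blocked):
    """BFS on the 4-connected grid avoiding blocked cells."""
    if start in blocked or goal in blocked:
        return False
    seen, frontier = {start}, deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return True
        for cell in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= cell[0] < N and 0 <= cell[1] < N \
                    and cell not in seen and cell not in blocked:
                seen.add(cell)
                frontier.append(cell)
    return False

def min_constraint_removal():
    names = sorted(obstacles)
    for k in range(len(names) + 1):          # try the fewest removals first
        for removed in combinations(names, k):
            blocked = set()
            for n in names:                  # 0-1 variable: obstacle kept?
                if n not in removed:
                    blocked |= obstacles[n]
            if reachable(blocked):
                return set(removed)
    return None
```

Here removing the full wall "A" alone already connects start and goal, so the minimum removal set has size one; an IP formulation would reach the same answer with far better scaling on large instances.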
  4. Alessandro Astolfi (Ed.)
    This article studies the adaptive optimal stationary control of continuous-time linear stochastic systems with both additive and multiplicative noises, using reinforcement learning techniques. Based on policy iteration, a novel off-policy reinforcement learning algorithm, named optimistic least-squares-based policy iteration, is proposed, which is able to find iteratively near-optimal policies of the adaptive optimal stationary control problem directly from input/state data without explicitly identifying any system matrices, starting from an initial admissible control policy. The solutions given by the proposed optimistic least-squares-based policy iteration are proved to converge to a small neighborhood of the optimal solution with probability one, under mild conditions. The application of the proposed algorithm to a triple inverted pendulum example validates its feasibility and effectiveness. 
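A much-simplified illustration of the policy-iteration structure underlying item 4. The paper's algorithm is stochastic, least-squares based, and works from input/state data without system matrices; this deterministic scalar sketch instead uses the model directly (Kleinman iteration), which shares the same evaluate-then-improve loop and the same fixed point:

```python
from math import sqrt

# Scalar continuous-time LQR: dx/dt = a*x + b*u, cost = integral of q*x^2 + r*u^2.
# Kleinman policy iteration: for a stabilizing gain K, the value V(x) = P*x^2
# satisfies the Lyapunov relation 2*(a - b*K)*P + q + r*K^2 = 0, and the
# improved gain is K = b*P/r.  The iterates converge to the Riccati solution.
a, b, q, r = -1.0, 1.0, 1.0, 1.0
K = 0.0                                          # initial admissible gain
P = 0.0
for _ in range(20):
    P = (q + r * K**2) / (2.0 * (b * K - a))     # policy evaluation
    K = b * P / r                                # policy improvement

# Closed-form solution of the scalar algebraic Riccati equation, for comparison.
P_riccati = r * (a + sqrt(a**2 + b**2 * q / r)) / b**2
```

Starting from any stabilizing gain, the iterates converge quadratically; the data-driven variants in the paper recover the same evaluation step from trajectory data rather than from the Lyapunov relation.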
  5. Matni, Nikolai ; Morari, Manfred ; Pappas, George J. (Ed.)
Controller tuning is a vital step to ensure a controller delivers its designed performance. DiffTune has been proposed as an automatic tuning method that unrolls the dynamical system and controller into a computational graph and uses auto-differentiation to obtain the gradient for the controller’s parameter update. However, DiffTune uses the vanilla gradient descent to iteratively update the parameter, in which the performance largely depends on the choice of the learning rate (as a hyperparameter). In this paper, we propose to use hyperparameter-free methods to update the controller parameters. We find the optimal parameter update by maximizing the loss reduction, where a predicted loss based on the approximated state and control is used for the maximization. Two methods are proposed to optimally update the parameters and are compared with related variants in simulations on a Dubins car and a quadrotor. Simulation experiments show that the proposed first-order method outperforms the hyperparameter-based methods and is more robust than the second-order hyperparameter-free methods.
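A hypothetical toy in the spirit of item 5: tune a scalar controller gain without a hand-picked learning rate by choosing, at each iteration, the step that maximizes the predicted loss reduction of a local quadratic model along the gradient direction. The plant, loss, probe scale, and safety cap are invented; DiffTune itself obtains gradients by auto-differentiation through the unrolled dynamics rather than by finite differences:

```python
# Toy hyperparameter-free tuning: pick gain k for u = -k*(x - ref) on the
# plant x' = u, minimizing a simulated tracking loss.  The step length is
# taken from the vertex of a parabola fitted to the loss along the
# gradient direction -- no learning rate to tune.
def loss(k, ref=1.0, dt=0.05, steps=60):
    x, L = 0.0, 0.0
    for _ in range(steps):
        x += dt * (-k * (x - ref))   # one Euler step of the closed loop
        L += (x - ref) ** 2
    return L

def d_loss(k, h=1e-5):               # finite-difference stand-in for autodiff
    return (loss(k + h) - loss(k - h)) / (2.0 * h)

k = 0.2
losses = [loss(k)]
for _ in range(30):
    g = d_loss(k)
    s1 = 1e-2 / (abs(g) + 1e-9)      # probe scale: probes move k by ~0.01
    f0, f1, f2 = loss(k), loss(k - s1 * g), loss(k - 2.0 * s1 * g)
    denom = f0 - 2.0 * f1 + f2       # curvature of the interpolating parabola
    if denom > 0.0:
        # Vertex of the parabola = step with the largest predicted reduction.
        s_star = s1 * (3.0 * f0 - 4.0 * f1 + f2) / (2.0 * denom)
    else:
        s_star = 2.0 * s1            # non-convex sample: fall back to far probe
    s_star = min(max(s_star, 0.0), 100.0 * s1)   # cap extrapolation for safety
    k -= s_star * g
    losses.append(loss(k))
```

On this convex one-parameter problem the parabola vertex acts as an automatic line search, so the loss decreases without any tuned step size; the paper's methods generalize this predicted-reduction idea to full controller parameter vectors.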