


Title: Constraint-Driven Optimal Control for Emergent Swarming and Predator Avoidance
In this article, we present a constraint-driven optimal control framework that achieves emergent cluster flocking within a constrained 2D environment. We formulate a decentralized optimal control problem that includes safety, flocking, and predator avoidance constraints. We explicitly derive conditions for constraint compatibility and propose an event-driven constraint relaxation scheme. We map this to an equivalent switching system that intuitively describes the behavior of each agent in the system. Instead of minimizing control effort, as is common in the ecologically inspired robotics literature, our approach minimizes each agent's deviation from its most efficient locomotion speed. Finally, we demonstrate our approach in simulation both with and without the presence of a predator.
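The abstract gives the objective (track the most efficient speed subject to safety, flocking, and predator-avoidance constraints) without the control law itself. Below is a minimal one-step sketch of a decentralized rule in that spirit; the names (`agent_velocity`, `V_STAR`, `D_SAFE`), all gains, and the steer-then-rescale heuristic are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

V_STAR = 1.0      # hypothetical most-efficient locomotion speed
D_SAFE = 0.5      # hypothetical minimum inter-agent separation

def agent_velocity(pos, neighbors, predator=None):
    """One step of a per-agent rule in the spirit of constraint-driven
    flocking: steer for cohesion, separation, and predator avoidance,
    then travel at the efficient speed V_STAR (zero speed deviation
    whenever the heading is unconstrained).  Gains are illustrative."""
    pos = np.asarray(pos, float)
    heading = np.zeros(2)
    if len(neighbors):
        centroid = np.mean(np.asarray(neighbors, float), axis=0)
        heading += centroid - pos                      # cohesion term
        for q in neighbors:                            # separation term
            d = pos - np.asarray(q, float)
            r = np.linalg.norm(d)
            if 0.0 < r < D_SAFE:
                heading += d * (D_SAFE / r - 1.0)
    if predator is not None:                           # avoidance term
        away = pos - np.asarray(predator, float)
        heading += 2.0 * away / (np.linalg.norm(away) + 1e-9)
    n = np.linalg.norm(heading)
    if n < 1e-9:
        return np.zeros(2)
    return V_STAR * heading / n
```

In this toy rule the speed constraint is satisfied by construction; the paper instead handles the interacting constraints formally, with compatibility conditions and event-driven relaxation.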
Award ID(s):
2149520 2219761
NSF-PAR ID:
10421261
Author(s) / Creator(s):
Date Published:
Journal Name:
2023 American Control Conference
Page Range / eLocation ID:
399-404
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Multiagent coordination is highly desirable, with uses in a wide variety of tasks. In nature, coordinated flocking is common, often serving to defend against or escape from predators. This article proposes a hybrid multiagent system that integrates consensus, cooperative learning, and flocking control to determine the direction of attacking predators and learn to flock away from them in a coordinated manner. The system is entirely distributed, requiring only communication between neighboring agents. The fusion of consensus and collaborative reinforcement learning allows agents to cooperatively learn in a variety of multiagent coordination tasks, but this article focuses on flocking away from attacking predators. The flocking results show that the agents effectively flock to a target without colliding with each other or with obstacles. Multiple reinforcement learning methods are evaluated for the task, with cooperative learning that uses function approximation for state-space reduction performing best. The results of the proposed consensus algorithm show that it provides quick and accurate transmission of information between agents in the flock. Simulations in both one- and two-predator environments validate the proposed hybrid system, resulting in efficient cooperative learning behavior. In the future, the system of using consensus to determine the state and reinforcement learning to learn from the states can be applied to additional multiagent tasks.
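The consensus component described above follows a standard pattern. A minimal sketch of a synchronous average-consensus update, where each agent nudges its estimate toward its neighbors' estimates; the step size `eps` and the three-agent line topology are illustrative assumptions, not the article's algorithm:

```python
def consensus_step(values, neighbors, eps=0.2):
    """One synchronous average-consensus update: agent i moves its
    estimate toward those of its communication neighbors.
    `neighbors[i]` lists the indices agent i can talk to."""
    return [
        v + eps * sum(values[j] - v for j in neighbors[i])
        for i, v in enumerate(values)
    ]

# Agents 0-2 on a line graph; repeated local updates drive all
# estimates to the global average of 0, 4, 8, i.e. 4.
vals = [0.0, 4.0, 8.0]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(50):
    vals = consensus_step(vals, nbrs)
```

For this topology any `eps` below 2/3 (two over the largest Laplacian eigenvalue of the path graph) is stable; 0.2 converges comfortably within 50 steps.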
  2. We present ResilienC, a framework for resilient control of Cyber-Physical Systems subject to STL-based requirements. ResilienC utilizes a recently developed formalism for specifying CPS resiliency in terms of sets of (rec, dur) real-valued pairs, where rec represents the system's capability to rapidly recover from a property violation (recoverability), and dur is reflective of its ability to avoid violations post-recovery (durability). We define the resilient STL control problem as one of multi-objective optimization, where the recoverability and durability of the desired STL specification are maximized. When neither objective is prioritized over the other, the solution to the problem is a set of Pareto-optimal system trajectories. We present a precise solution method to the resilient STL control problem using a mixed-integer linear programming encoding and an a posteriori ε-constraint approach for efficiently retrieving the complete set of optimally resilient solutions. In ResilienC, at each time-step, the optimal control action selected from the set of Pareto-optimal solutions by a Decision Maker strategy realizes a form of Model Predictive Control. We demonstrate the practical utility of the ResilienC framework on two significant case studies: autonomous vehicle lane keeping and deadline-driven, multi-region package delivery.
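One illustrative reading of the (rec, dur) pairs is to extract them from a per-step boolean violation trace: each violation episode yields a recovery time and a post-recovery holding time. The function below is a hypothetical sketch of that bookkeeping, not ResilienC's exact semantics:

```python
def resiliency_pairs(violations):
    """Extract (rec, dur) pairs from a per-step violation trace:
    rec = steps spent in violation before recovery,
    dur = steps the property then holds before the next violation.
    An illustrative discrete reading of the (rec, dur) formalism."""
    pairs, i, n = [], 0, len(violations)
    while i < n:
        if violations[i]:
            start = i
            while i < n and violations[i]:
                i += 1
            rec = i - start          # time to recover
            start = i
            while i < n and not violations[i]:
                i += 1
            dur = i - start          # time violation-free afterwards
            pairs.append((rec, dur))
        else:
            i += 1
    return pairs
```

Maximizing recoverability then corresponds to driving rec down, and durability to driving dur up, which is what makes the control problem multi-objective.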
  3. We introduce the concept of Distributed Model Predictive Control (DMPC) with Acceleration-Weighted Neighborhooding (AWN) in order to synthesize a distributed and symmetric controller for high-speed flocking maneuvers (angular turns in general). Acceleration-Weighted Neighborhooding exploits the imbalance in agent accelerations during a turning maneuver to ensure that actively turning agents are prioritized. We show that with our approach, a flocking maneuver can be achieved without it being a global objective. Only a small subset of the agents, called initiators, need to be aware of the maneuver objective. Our AWN-DMPC controller ensures this local information is propagated throughout the flock in a scale-free manner with linear delays. Our experimental evaluation conclusively demonstrates the maneuvering capabilities of a distributed flocking controller based on AWN-DMPC. 
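The core of Acceleration-Weighted Neighborhooding is that actively accelerating (turning) neighbors carry more influence. A minimal sketch of one plausible weighting; the `|a_j| + bias` form and the normalization are assumptions for illustration, not the paper's formula:

```python
import numpy as np

def awn_weights(accels, neighbor_idx, bias=1.0):
    """Acceleration-weighted neighbor weights: neighbors with larger
    acceleration magnitude (e.g. turn initiators) are prioritized.
    `accels` maps agent index -> acceleration vector; `bias` keeps
    inactive neighbors from being ignored entirely."""
    mags = np.array([np.linalg.norm(accels[j]) + bias
                     for j in neighbor_idx])
    return mags / mags.sum()       # convex weights over the neighborhood
```

In a DMPC setting such weights would scale each neighbor's term in the local cost, so information injected by the initiators propagates through the flock faster than a uniform average would allow.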
  4. A modified Legendre-Gauss-Radau collocation method is developed for solving optimal control problems whose solutions contain a nonsmooth optimal control. The method includes an additional variable that defines the location of the nonsmoothness. In addition, collocation constraints are added at the end of the mesh interval that contains the nonsmoothness: one on each differential equation that is a function of the control, along with a control constraint at the endpoint of that same mesh interval. The transformed adjoint system for the modified Legendre-Gauss-Radau collocation method is then derived, along with a relationship between the Lagrange multipliers of the nonlinear programming problem and a discrete approximation of the costate of the optimal control problem. Finally, it is shown via example that the new method provides an accurate approximation of the costate.
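For reference, the standard Legendre-Gauss-Radau points that such methods collocate at are the roots of P_{n-1} + P_n on [-1, 1), which include the left endpoint -1 but not +1. They can be computed with NumPy (standard points only; the paper's modifications for the nonsmooth location are not reproduced here):

```python
import numpy as np
from numpy.polynomial import legendre

def lgr_points(n):
    """n standard Legendre-Gauss-Radau collocation points: the roots
    of P_{n-1}(x) + P_n(x).  The set contains x = -1 but not x = +1."""
    c = np.zeros(n + 1)
    c[n - 1] = 1.0                 # coefficient of P_{n-1}
    c[n] = 1.0                     # coefficient of P_n
    return np.sort(legendre.legroots(c))
```

For n = 2 this gives {-1, 1/3}, since P_1 + P_2 = (3x - 1)(x + 1)/2.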
  5. In this work, we propose a trajectory generation method for robotic systems with contact force constraints, based on optimal control and reachability analysis. Normally, the dynamics and constraints of a contact-constrained robot are nonlinear and coupled. Instead of linearizing the model and constraints, we directly solve the optimal control problem to obtain a feasible state trajectory and control input for the system. We formulate a tractable optimal control problem that is addressed by two complementary approaches: sampling-based dynamic programming and rigorous reachability analysis. The sampling-based method and a Partially Observable Markov Decision Process (POMDP) are used to break down the end-to-end trajectory generation problem via sample-wise optimization under the given conditions. The result is a sequence of subregions to be passed through to reach the final goal. The reachability analysis ensures that at least one trajectory exists that starts from a given initial state and passes through the sequence of subregions. The distinctive contribution of our method is that it handles the intricate contact constraints coupled with the system's dynamics while reducing the computational complexity of the algorithm. We validate our method with extensive numerical simulations of a legged robot.
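A toy version of the sample-wise search for a trajectory through an ordered sequence of subregions; the 1-D double-integrator dynamics, interval subregions, and all parameters are illustrative stand-ins, not the paper's legged-robot setup or its rigorous reachability analysis:

```python
import numpy as np

def find_box_sequence_trajectory(x0, boxes, steps=30, trials=2000,
                                 dt=0.1, seed=0):
    """Sampling-based sketch: draw random control sequences for a 1-D
    double integrator (state = [position, velocity]) until one visits
    each position interval (lo, hi) in `boxes` in order.  Returns the
    trajectory as an array, or None if no sample reaches the goal."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = np.array(x0, float)
        traj, k = [x.copy()], 0
        for _ in range(steps):
            u = rng.uniform(-1.0, 1.0)           # sampled acceleration
            x = x + dt * np.array([x[1], u])     # Euler step
            traj.append(x.copy())
            if k < len(boxes) and boxes[k][0] <= x[0] <= boxes[k][1]:
                k += 1                           # reached next subregion
        if k == len(boxes):
            return np.array(traj)
    return None
```

A sampler like this can only confirm feasibility when it happens to find a witness; the reachability analysis in the paper is what guarantees that a trajectory through the subregion sequence exists.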