

Title: Scheduling Multiple Agents in a Persistent Monitoring Task Using Reachability Analysis
We consider the problem of controlling the dynamic state of each of a finite collection of targets distributed in physical space using a much smaller collection of mobile agents. Each agent can attend to no more than one target at a given time, so agents must move between targets to control the collective state, implying that the state of each individual target is controlled only intermittently. We assume that the state dynamics of each target are given by a linear, time-invariant, controllable system and develop conditions on the visiting schedules of the agents that ensure the property of controllability is maintained in the face of the intermittent control. We then introduce constraints on the magnitude of the control input and a bounded disturbance into the target dynamics and develop a method to evaluate system performance under this scenario. Finally, we use this method to determine how the amount of time the agents spend at a given target before switching to the next in its sequence influences …
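The intermittent-control setting can be illustrated with a minimal numerical sketch (not the paper's method): a single unstable scalar target, a periodic visit schedule, a saturated feedback law, and a bounded disturbance. All dynamics, gains, and schedule parameters below are illustrative assumptions.

```python
import numpy as np

# One unstable scalar target, x_{k+1} = a*x_k + b*u_k + w_k, attended
# intermittently: the agent is present for `dwell` steps out of every
# `period` steps and applies a saturated stabilizing feedback while
# present; otherwise u = 0. All numbers are illustrative.
a, b = 1.05, 1.0          # open-loop unstable target dynamics
u_max = 2.0               # bound on the control magnitude
k_gain = (a - 0.5) / b    # places the attended-mode pole at 0.5

def simulate(dwell, period, steps=400, seed=0):
    """State trajectory under a periodic visit schedule, with a
    bounded disturbance w_k drawn uniformly from [-0.05, 0.05]."""
    rng = np.random.default_rng(seed)
    x, traj = 1.0, []
    for k in range(steps):
        visited = (k % period) < dwell
        u = np.clip(-k_gain * x, -u_max, u_max) if visited else 0.0
        x = a * x + b * u + rng.uniform(-0.05, 0.05)
        traj.append(x)
    return np.array(traj)

# Sufficient dwell time keeps the state bounded; too sparse a schedule
# lets the open-loop instability win between visits.
bounded = simulate(dwell=4, period=8)
starved = simulate(dwell=1, period=20)
print(np.abs(bounded).max(), np.abs(starved).max())
```

The per-cycle contraction heuristic is visible in the numbers: the state survives a visit schedule only while the attended contraction outweighs the unattended growth between visits.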
Award ID(s):
1509084 1562031 1645681
PAR ID:
10122844
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Automatic Control
ISSN:
0018-9286
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this article, we consider the problem of stabilizing a class of degenerate stochastic processes, which are constrained to a bounded Euclidean domain or a compact smooth manifold, to a given target probability density. This stabilization problem arises in the field of swarm robotics, for example, in applications where a swarm of robots is required to cover an area according to a target probability density. Most existing works on modeling and control of robotic swarms that use partial differential equation (PDE) models assume that the robots' dynamics are holonomic and, hence, the associated stochastic processes have generators that are elliptic. We relax this assumption on the ellipticity of the generator of the stochastic processes, and consider the more practical case of the stabilization problem for a swarm of agents whose dynamics are given by a controllable driftless control-affine system. We construct state-feedback control laws that exponentially stabilize a swarm of nonholonomic agents to a target probability density that is sufficiently regular. State-feedback laws can stabilize a swarm only to target probability densities that are positive everywhere. To stabilize the swarm to probability densities that possibly have disconnected supports, we introduce a semilinear PDE model of a collection of interacting agents governed by a hybrid switching diffusion process. The interaction between the agents is modeled using a (mean-field) feedback law that is a function of the local density of the swarm, with the switching parameters as the control inputs. We show that under the action of this feedback law, the semilinear PDE system is globally asymptotically stable about the given target probability density. 
The stabilization strategies with and without agent interactions are verified numerically for agents that evolve according to the Brockett integrator; the strategy with interactions is additionally verified for agents that evolve according to an underactuated s... 
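The Brockett integrator used in the numerical verification is the standard driftless nonholonomic example; the sketch below illustrates only that agent model and its Lie-bracket controllability, not the article's PDE-based density controllers.

```python
import numpy as np

# The Brockett integrator:
#   x1' = u1,   x2' = u2,   x3' = x1*u2 - x2*u1.
# No input moves x3 directly, but a closed loop in the (x1, x2) plane
# produces a net change in x3 equal to twice the loop's signed area --
# the Lie-bracket motion that makes the driftless system controllable.

def simulate_loop(dt=1e-3):
    x = np.zeros(3)
    for t in np.arange(0.0, 2 * np.pi, dt):
        u1, u2 = np.cos(t), np.sin(t)   # inputs tracing a unit circle
        x = x + dt * np.array([u1, u2, x[0] * u2 - x[1] * u1])
    return x

x = simulate_loop()
# (x1, x2) return near the origin while x3 gains roughly 2*pi,
# twice the area of the unit circle swept in the (x1, x2) plane.
print(x)
```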
  2. In this paper, we investigate how the self-synchronization property of a swarm of Kuramoto oscillators can be controlled and exploited to achieve target densities and target phase coherence. In the limit of an infinite number of oscillators, the collective dynamics of the agents’ density is described by a mean-field model in the form of a nonlocal PDE, where the nonlocality arises from the synchronization mechanism. In this mean-field setting, we introduce two space-time dependent control inputs to affect the density of the oscillators: an angular velocity field that corresponds to a state feedback law for individual agents, and a control parameter that modulates the strength of agent interactions over space and time, i.e., a multiplicative control with respect to the integral nonlocal term. We frame the density tracking problem as a PDE-constrained optimization problem. The controlled synchronization and phase-locking are measured with classical polar order metrics. After establishing the mass conservation property of the mean-field model and bounds on its nonlocal term, a system of first-order necessary conditions for optimality is recovered using a Lagrangian method. The optimality system, comprising a nonlocal PDE for the state dynamics equation, the respective nonlocal adjoint dynamics, and the Euler equation, is solved iteratively following a standard Optimize-then-Discretize approach and an efficient numerical solver based on spectral methods. We demonstrate our approach for each of the two control inputs in simulation. 
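A finite-N sketch of the setting (the abstract's nonlocal PDE is the mean-field limit): the coupling strength K stands in for the multiplicative control input, and the polar order parameter r = |⟨e^{iθ}⟩| is the classical phase-coherence metric the abstract mentions. Frequencies, gains, and horizons below are illustrative assumptions.

```python
import numpy as np

# Finite-N Kuramoto swarm in mean-field form. Each oscillator obeys
#   theta_i' = omega_i + K * r * sin(psi - theta_i),
# where r*e^{i*psi} = mean(e^{i*theta}) is the complex order parameter.

def simulate(K, n=200, dt=0.01, steps=2000, seed=1):
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)   # initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))      # complex order parameter
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

r_weak, r_strong = simulate(K=0.1), simulate(K=4.0)
# Weak coupling leaves the phases incoherent (r near 0); coupling well
# above the synchronization threshold drives r toward 1.
print(r_weak, r_strong)
```

Making K a space-time field, as the article does, turns this synchronization knob into a distributed control input for the oscillator density.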
  3. We develop an optimization-based framework for joint real-time trajectory planning and feedback control of feedback-linearizable systems. To achieve this goal, we define a target trajectory as the optimal solution of a time-varying optimization problem. In general, however, such a trajectory may not be feasible due to, e.g., nonholonomic constraints. To solve this problem, we design a control law that generates feasible trajectories that asymptotically converge to the target trajectory. More precisely, for systems that are (dynamic) full-state linearizable, the proposed control law implicitly transforms the nonlinear system into an optimization algorithm of sufficiently high order. We prove global exponential convergence to the target trajectory for both the optimization algorithm and the original system. We illustrate the effectiveness of our proposed method on multi-target and multi-agent tracking problems with constraints. 
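A toy instance of the idea for the simplest possible dynamics: take the target trajectory to be the minimizer of a time-varying cost f(x, t) = ½‖x − c(t)‖², and drive a single integrator x' = u with a prediction-correction law u = c'(t) − k·∇f, so the closed loop behaves like a gradient-flow optimization algorithm tracking the moving optimum. The cost, gain, and dynamics are illustrative; the article treats general feedback-linearizable systems and feasibility constraints.

```python
import numpy as np

# Single integrator x' = u tracking the moving minimizer c(t) of
# f(x, t) = 0.5*||x - c(t)||^2 via u = c'(t) - k*(x - c(t)).
# The error e = x - c then obeys e' = -k*e: exponential convergence.

def track(k=5.0, dt=1e-3, T=10.0):
    x = np.array([2.0, -1.0])                      # start off the optimum
    errs = []
    for t in np.arange(0.0, T, dt):
        c = np.array([np.cos(t), np.sin(t)])       # moving minimizer
        c_dot = np.array([-np.sin(t), np.cos(t)])  # known target velocity
        errs.append(np.linalg.norm(x - c))         # current tracking error
        x = x + dt * (c_dot - k * (x - c))         # feed-forward + gradient
    return errs

errs = track()
# The error contracts exponentially, down to a small floor set by the
# discretization step.
print(errs[0], errs[-1])
```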
  4. This paper introduces a strategy for satisfying basic control objectives for systems whose dynamics are almost entirely unknown. This setting is motivated by a scenario where a system undergoes a critical failure, thus significantly changing its dynamics. In such a case, retaining the ability to satisfy basic control objectives such as reach-avoid is imperative. To deal with significant restrictions on our knowledge of system dynamics, we develop a theory of myopic control. The primary goal of myopic control is to, at any given time, optimize the current direction of the system trajectory, given solely the limited information obtained about the system until that time. Building upon this notion, we propose a control algorithm which simultaneously uses small perturbations in the control effort to learn local system dynamics while moving in the direction which seems to be optimal based on previously obtained knowledge. We show that the algorithm results in a trajectory that is nearly optimal in the myopic sense, i.e., it is moving in a direction that seems to be nearly the best at the given time, and provide formal bounds for suboptimality. We demonstrate the usefulness of the proposed algorithm on a high-fidelity simulation of a damaged Boeing 747 seeking to remain in level flight. 
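The "probe, then act on local knowledge" idea can be sketched on a scalar system with an unknown input gain (sign included): two small opposite perturbations reveal the gain by differencing the observed state increments, after which the controller steers myopically toward the origin. This is an illustration of the flavor of the approach, not the paper's algorithm or its suboptimality bounds; all parameters are assumed for the example.

```python
import numpy as np

# Scalar dynamics x_{k+1} = x_k + dt*(a*x_k + b*u_k) with b unknown to
# the controller. Two probing steps with inputs +eps and -eps are taken
# from (nearly) the same state, so differencing the increments cancels
# the drift term dt*a*x and isolates dt*b*(2*eps).

def myopic_regulate(a=0.3, b=-2.0, dt=0.01, eps=0.1, gain=2.0, steps=600):
    x, increments = 1.0, []
    for k in range(steps):
        if k < 2:
            u = eps if k == 0 else -eps       # probing inputs
        else:
            u = -gain * x / b_hat             # myopic steering to 0
        dx = dt * (a * x + b * u)             # observed state increment
        if k < 2:
            increments.append(dx)
            if k == 1:
                # x barely moved between probes, so the drift cancels.
                b_hat = (increments[0] - increments[1]) / (2 * eps * dt)
        x += dx
    return x, b_hat

x_final, b_hat = myopic_regulate()
print(x_final, b_hat)
```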
  5. Doty, David; Spirakis, Paul (Eds.)
    We develop a framework for self-induced phase changes in programmable matter in which a collection of agents with limited computational and communication capabilities can collectively perform appropriate global tasks in response to local stimuli that dynamically appear and disappear. Agents reside on graph vertices, where each stimulus is only recognized locally, and agents communicate via token passing along edges to alert other agents to transition to an Aware state when stimuli are present and an Unaware state when the stimuli disappear. We present an Adaptive Stimuli Algorithm that is robust to competing waves of messages as multiple stimuli change, possibly adversarially. Moreover, in addition to handling arbitrary stimulus dynamics, the algorithm can handle agents reconfiguring the connections (edges) of the graph over time in a controlled way. As an application, we show how this Adaptive Stimuli Algorithm on reconfigurable graphs can be used to solve the foraging problem, where food sources may be discovered, removed, or shifted at arbitrary times. We would like the agents to consistently self-organize, using only local interactions, such that if the food remains in a position long enough, the agents transition to a gather phase in which many collectively form a single large component with small perimeter around the food. Alternatively, if no food source has existed recently, the agents should undergo a self-induced phase change and switch to a search phase in which they distribute themselves randomly throughout the lattice region to search for food. Unlike previous approaches to foraging, this process is indefinitely repeatable, withstanding competing waves of messages that may interfere with each other. 
Like a physical phase change, microscopic changes such as the deletion or addition of a single food source trigger these macroscopic, system-wide transitions as agents share information about the environment and respond locally to get the desired collective response. 
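The core hop-by-hop alerting mechanism can be sketched as synchronous token passing on a graph: an alert wave spreads from the vertex where a stimulus appears, and a "forget" wave spreads the same way once it disappears. This toy single-stimulus version does not attempt what the Adaptive Stimuli Algorithm actually handles, namely competing waves from multiple adversarially changing stimuli and reconfigurable edges.

```python
# Synchronous token-passing rounds on a graph: in each round, every
# vertex adjacent to the current frontier becomes aware. This models
# only the basic alert wave, not the full Adaptive Stimuli Algorithm.

def propagate(adjacency, sources, rounds):
    """Vertices reached from `sources` in `rounds` synchronous rounds."""
    reached, frontier = set(sources), set(sources)
    for _ in range(rounds):
        frontier = {v for u in frontier for v in adjacency[u]} - reached
        reached |= frontier
    return reached

# A 6-vertex path graph; a stimulus appears at vertex 0.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
aware_after_3 = propagate(path, {0}, rounds=3)   # partial alert wave
aware_after_5 = propagate(path, {0}, rounds=5)   # whole path alerted
print(sorted(aware_after_3), sorted(aware_after_5))
```

Running the same propagation from the stimulus vertex after its removal models the transition back to the Unaware (search) phase.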