Piecewise-deterministic Markov processes (PDMPs) are often used to model abrupt changes in the global environment or capabilities of a controlled system. This is typically done by considering a set of “operating modes” (each with its own system dynamics and performance metrics) and assuming that the mode can switch stochastically while the system state evolves. Such models have a broad range of applications in engineering, economics, manufacturing, robotics, and the biological sciences. Here, we introduce and analyze an “occasionally observed” version of mode-switching PDMPs. We show how such systems can be controlled optimally if the planner is not alerted to mode switches as they occur but may instead have access to infrequent mode observations. We first develop a general framework for handling this through dynamic programming on a higher-dimensional mode-belief space. While quite general, this method is rarely practical due to the curse of dimensionality. We then discuss assumptions that allow for solving the same problem much more efficiently, with the computational costs growing linearly (rather than exponentially) with the number of modes. We use this approach to derive Hamilton-Jacobi-Bellman (HJB) PDEs and quasi-variational inequalities encoding the optimal behavior for a variety of planning horizons (fixed, infinite, indefinite, and random) and mode-observation schemes (at fixed times or on demand). We discuss the computational challenges associated with each version and illustrate the resulting methods on test problems from surveillance-evading path planning. We also include an example based on robotic navigation: a Mars rover that minimizes the expected time to target while accounting for the possibility of unobserved/incremental damages and dynamics-altering breakdowns. (Free, publicly-accessible full text available September 10, 2026.)
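A minimal sketch of the belief dynamics this abstract alludes to, in a discrete-time setting with a hypothetical three-mode transition matrix (all numbers invented for illustration): between observations the planner can only propagate its mode belief forward in time, and an occasional exact mode observation collapses the belief back to a vertex of the simplex.

```python
import numpy as np

# Hypothetical 3-mode switching process: P[i, j] is the probability of
# jumping from mode i to mode j over one time step.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def propagate_belief(b, P, steps):
    """Between observations the planner's mode belief evolves by the
    Markov transition matrix alone (pure prediction, no measurement)."""
    for _ in range(steps):
        b = b @ P
    return b

def observe_mode(mode, n_modes):
    """An exact mode observation collapses the belief to a vertex of
    the simplex, restarting its drift toward the stationary mix."""
    b = np.zeros(n_modes)
    b[mode] = 1.0
    return b

b = observe_mode(0, 3)          # mode 0 observed at t = 0
b = propagate_belief(b, P, 20)  # 20 unobserved steps later
print(b, b.sum())               # belief spreads out; still sums to 1
```

The curse of dimensionality mentioned in the abstract comes from the fact that the vector `b` above is itself part of the planner's state, so the value function lives on the product of the physical state space and this belief simplex.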
-
When traveling through a graph with an accessible deterministic path to a target, is it ever preferable to resort to stochastic node-to-node transitions instead? And, if so, what are the conditions guaranteeing that such a stochastic optimal routing policy can be computed efficiently? We aim to answer these questions here by defining a class of Opportunistically Stochastic Shortest Path (OSSP) problems and deriving sufficient conditions for the applicability of non-iterative label-setting methods. The usefulness of this framework is demonstrated in two very different contexts: numerical analysis and autonomous vehicle routing. We use OSSPs to derive causality conditions for semi-Lagrangian discretizations of anisotropic Hamilton-Jacobi equations. We also use a Dijkstra-like method to solve OSSPs, optimizing the timing and urgency of lane-change maneuvers for an autonomous vehicle navigating road networks with a heterogeneous traffic load. Funding: Financial support from the Air Force Office of Scientific Research [Grant FA9550-22-1-0528], the Division of Mathematical Sciences [Grants 1645643 and 2111522], and a National Defense Science and Engineering Graduate Fellowship is gratefully acknowledged. (Free, publicly-accessible full text available June 26, 2026.)
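The abstract's opening question can be made concrete with a tiny instance (all costs and probabilities hypothetical). For brevity, the sketch below solves it by plain value iteration rather than the paper's Dijkstra-like label-setting method, but it exhibits the phenomenon: a repeatable stochastic hop can beat an available deterministic edge.

```python
import math

# Toy shortest-path instance with an opportunistic stochastic action
# (numbers invented for illustration). From node 'A' the agent can
# either walk a deterministic edge to the target 'T' at cost 10, or
# pay 1 to take a stochastic hop that reaches 'T' with probability 0.5
# and leaves it back at 'A' otherwise.
actions = {
    "A": [
        (10.0, [("T", 1.0)]),              # deterministic edge
        (1.0, [("T", 0.5), ("A", 0.5)]),   # stochastic hop
    ],
    "T": [],  # absorbing target
}

def value_iteration(actions, target, tol=1e-10):
    """Gauss-Seidel value iteration for the expected cost-to-target."""
    V = {s: (0.0 if s == target else math.inf) for s in actions}
    while True:
        delta = 0.0
        for s, acts in actions.items():
            if not acts:
                continue
            best = min(c + sum(p * V[t] for t, p in succ)
                       for c, succ in acts)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(actions, "T")
# V["A"] converges to 2.0: the repeated gamble (V = 1 + 0.5 * V, so
# V = 2) beats the deterministic edge of cost 10.
print(V["A"])
```

The paper's contribution is precisely the conditions under which such problems can instead be solved in one non-iterative Dijkstra-like sweep.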
-
Antagonistic interactions are critical determinants of microbial community stability and composition, offering host benefits such as pathogen protection and providing avenues for antimicrobial control. While the ability to eliminate competitors confers an advantage to antagonistic microbes, it often incurs a fitness cost. Consequently, many microbes only produce toxins or engage in antagonistic behavior in response to specific cues like quorum sensing molecules or environmental stress. In laboratory settings, antagonistic microbes typically dominate over sensitive ones, raising the question of why both antagonistic and nonantagonistic microbes are found in natural environments and host microbiomes. Here, using both theoretical models and experiments with killer strains of Saccharomyces cerevisiae, we show that “boom-and-bust” dynamics (periods of rapid growth punctuated by episodic mortality events) caused by temporal environmental fluctuations can favor nonantagonistic microbes that do not incur the growth rate cost of toxin production. Additionally, using control theory, we derive bounds on the competitive performance and identify optimal regulatory toxin-production strategies in various boom-and-bust environments where population dilutions occur either deterministically or stochastically over time. Our mathematical investigation reveals that optimal toxin regulation is much more beneficial to killers in stochastic, rather than deterministic, boom-and-bust environments. Overall, our findings show how both antagonistic and nonantagonistic microbes can thrive under varying environmental conditions.
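The qualitative boom-and-bust claim can be illustrated with a toy simulation (a crude caricature with invented parameters, not fitted to the yeast experiments): a killer strain pays a growth-rate cost for a toxin that removes sensitive cells, and the two strains compete under logistic growth punctuated by episodic dilutions.

```python
# Toy caricature of boom-and-bust competition (all parameters are
# illustrative). The killer pays a growth-rate cost (R_K < R_S) for a
# toxin that removes sensitive cells at rate TAU * k * s.
R_S, R_K = 1.0, 0.9    # sensitive vs. killer per-capita growth rates
TAU, K_CAP = 1.0, 1.0  # killing rate and carrying capacity

def killer_fraction(boom_length, n_cycles=10, dilution=0.05, dt=0.01):
    """Simulate repeated boom (logistic growth plus killing) and bust
    (episodic dilution) cycles; return the final killer frequency."""
    s = k = 0.01
    for _ in range(n_cycles):
        for _ in range(int(round(boom_length / dt))):  # forward Euler
            crowd = 1.0 - (s + k) / K_CAP
            s, k = (s + dt * s * (R_S * crowd - TAU * k),
                    k + dt * k * R_K * crowd)
        s, k = s * dilution, k * dilution  # bust: episodic mortality
    return k / (s + k)

# Short booms keep densities low, so the toxin rarely pays off and the
# faster-growing sensitive strain wins; long booms let the killer
# exploit the crowded stationary phase.
print(killer_fraction(boom_length=3.0))   # killer stays a minority
print(killer_fraction(boom_length=30.0))  # killer takes over
```

This reproduces only the directional effect described in the abstract; the paper's control-theoretic analysis of regulated toxin production is far richer.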
-
Although adaptive cancer therapy shows promise in integrating evolutionary dynamics into treatment scheduling, the stochastic nature of cancer evolution has seldom been taken into account. Various sources of random perturbations can impact the evolution of heterogeneous tumors, making performance metrics of any treatment policy random as well. In this paper, we propose an efficient method for selecting optimal adaptive treatment policies under randomly evolving tumor dynamics. The goal is to improve the cumulative “cost” of treatment, a combination of the total amount of drugs used and the total treatment time. As this cost also becomes random in any stochastic setting, we maximize the probability of reaching the treatment goals (tumor stabilization or eradication) without exceeding a pre-specified cost threshold (or “budget”). We use a novel stochastic optimal control formulation and dynamic programming to find such “threshold-aware” optimal treatment policies. Our approach enables an efficient algorithm that computes these policies for a range of threshold values simultaneously. Compared to treatment plans shown to be optimal in a deterministic setting, the new “threshold-aware” policies significantly improve the chances of the therapy succeeding within the budget, which is correlated with lower overall drug usage. We illustrate this method using two specific examples, but our approach is far more general and provides a new tool for optimizing adaptive therapies based on a broad range of stochastic cancer models.
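A heavily simplified sketch of the threshold-aware idea (a toy random-walk tumor model with invented drugs, costs, and probabilities, not the paper's formulation): dynamic programming over (tumor state, remaining budget) maximizes the probability of eradication without exceeding the budget, and one computation covers a whole family of budget values.

```python
from functools import lru_cache

# Toy threshold-aware treatment model (all numbers hypothetical and far
# simpler than the paper's tumor dynamics). Tumor burden n moves on
# {0, ..., N}; n = 0 is eradication (success), n = N is failure. Each
# cycle the clinician picks a drug: aggressive (cost 2 per cycle,
# shrinks the tumor with probability 0.8) or mild (cost 1, probability
# 0.5). We maximize the probability of reaching n = 0 before the
# cumulative cost exceeds the budget.
N = 5
DRUGS = [(2, 0.8), (1, 0.5)]  # (cost per cycle, shrink probability)

@lru_cache(maxsize=None)
def success_prob(n, budget):
    """Max probability of eradication within the remaining budget."""
    if n == 0:
        return 1.0
    if n == N:
        return 0.0
    best = 0.0  # an unaffordable drug (or stopping) yields failure
    for cost, p in DRUGS:
        if cost <= budget:
            val = (p * success_prob(n - 1, budget - cost)
                   + (1 - p) * success_prob(n + 1, budget - cost))
            best = max(best, val)
    return best

# The memoized recursion yields the whole budget-indexed family at once:
for b in (2, 6, 12):
    print(b, success_prob(2, b))
```

Even in this toy, the optimal drug choice depends on the remaining budget, which is the qualitative behavior of the threshold-aware policies described above.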
-
Ruiz, Francisco; Dy, Jennifer; van de Meent, Jan-Willem (Eds.) We consider a task of surveillance-evading path planning in a continuous setting. An Evader strives to escape from a 2D domain while minimizing the risk of detection (and immediate capture). The probability of detection is path-dependent and determined by the spatially inhomogeneous surveillance intensity, which is fixed but a priori unknown and gradually learned in the multi-episodic setting. We introduce a Bayesian reinforcement learning algorithm that relies on Gaussian Process regression (to model the surveillance intensity function based on the information from prior episodes), numerical methods for Hamilton-Jacobi PDEs (to plan the best continuous trajectories based on the current model), and Confidence Bounds (to balance exploration vs. exploitation). We use numerical experiments and regret metrics to highlight the significant advantages of our approach compared to traditional graph-based reinforcement learning algorithms.
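A minimal sketch of the Gaussian-Process-plus-confidence-bounds ingredient (a hand-rolled 1D example with invented data; the paper's setting is 2D and coupled to a Hamilton-Jacobi planner): fit a GP to intensity samples from earlier episodes, then plan against an optimistic lower confidence bound so that poorly explored regions look attractive.

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * length**2))

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """Textbook GP regression posterior (1D inputs for brevity)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(x_query, x_query) - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Hypothetical 1D slice of the surveillance intensity, sampled along
# trajectories from earlier episodes (values are made up).
x_obs = np.array([0.1, 0.4, 0.8])
y_obs = np.array([1.0, 3.0, 0.5])
x_grid = np.linspace(0.0, 1.0, 101)
mu, sigma = gp_posterior(x_obs, y_obs, x_grid)

# Optimism in the face of uncertainty: plan the next trajectory against
# a lower confidence bound on the intensity, so poorly explored regions
# look cheap and get visited (exploration vs. exploitation).
lcb = mu - 1.96 * sigma
```

In the paper's continuous setting, a bound like `lcb` plays the role of the running cost in the Hamilton-Jacobi equation solved for each episode.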
-
Sailboat path planning is a natural hybrid control problem (due to continuous steering and occasional “tack-switching” maneuvers), with the actual path to target greatly affected by stochastically evolving wind conditions. Previous studies have focused on finding risk-neutral policies that minimize the expected time of arrival. In contrast, we present a robust control approach, which maximizes the probability of arriving before a specified deadline/threshold. Our numerical method recovers the optimal risk-aware (and threshold-specific) policies for all initial sailboat positions and a broad range of thresholds simultaneously. This is accomplished by solving two quasi-variational inequalities based on second-order Hamilton-Jacobi-Bellman (HJB) PDEs with degenerate parabolicity. Monte Carlo simulations show that risk-awareness in sailing is particularly useful when a carefully calculated bet on the evolving wind direction might yield a reduction in the number of tack-switches.