-
Abstract: We consider a spatial model of cancer in which cells are points on the $$d$$-dimensional torus $$\mathcal{T}=[0,L]^d$$, and each cell with $$k-1$$ mutations acquires a $$k$$th mutation at rate $$\mu_k$$. We assume that the mutation rates $$\mu_k$$ are increasing, and we find the asymptotic waiting time for the first cell to acquire $$k$$ mutations as the torus volume tends to infinity. This paper generalizes results on waiting for $$k\geq 3$$ mutations in Foo et al. (2020), which considered the case in which all of the mutation rates $$\mu_k$$ are the same. In addition, we find the limiting distribution of the spatial distances between mutations for certain values of the mutation rates.
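As a rough illustration of the waiting-time quantity studied here, one can Monte Carlo a much-simplified version of the model in which each cell accumulates mutations independently at the stage-dependent rates $$\mu_k$$. This is only a sketch: the function name and parameters are invented for illustration, and the spatial spread of mutant clones on the torus, which is central to the paper's actual analysis, is ignored.

```python
import random

def first_k_mutant_time(num_cells, mu, k, seed=0):
    """Monte Carlo estimate of the time until some cell carries k mutations.

    Toy simplification: each cell mutates independently at stage-dependent
    rates mu[1], ..., mu[k]; spatial clonal spread is ignored.
    """
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(num_cells):
        t = 0.0
        for stage in range(1, k + 1):
            # exponential waiting time for the stage-th mutation
            t += rng.expovariate(mu[stage])
        best = min(best, t)
    return best

# Increasing rates mu_1 < mu_2 < mu_3, matching the paper's assumption.
rates = [None, 1e-2, 1e-1, 1.0]  # index 0 unused
t_first = first_k_mutant_time(num_cells=1000, mu=rates, k=3)
```

In this simplification the first arrival is the minimum over cells of a sum of independent exponentials; the paper's results characterize how the analogous waiting time scales as the torus volume grows.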
-
Abstract: Consider a knot $$K$$ in $$S^3$$ with uniformly distributed electric charge. Whilst solutions to the Laplace equation in terms of Dirichlet integrals are readily available, it is still of theoretical and physical interest to understand the qualitative behavior of the potential, particularly with respect to critical points and equipotential surfaces. In this paper, we demonstrate how techniques from geometric topology can yield novel insights from the perspective of electrostatics. Specifically, we show that when the knot is sufficiently close to a planar projection, we get a lower bound on the size of the critical set based on the projection's crossings, improving a 2021 result of the author. We then classify the equipotential surfaces of a charged knot distribution by tracking how the topology of the knot complement restricts the Morse surgeries associated to the critical points of the potential.
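For reference, the potential being studied is the Coulomb potential of a uniformly charged curve (the following is textbook electrostatics written in the Euclidean setting, not a formula from the paper; on $$S^3$$ the kernel $$1/|x-y|$$ is replaced by the corresponding Green's function):

$$\Phi(x)=\int_{K}\frac{dq(y)}{|x-y|},\qquad x\notin K.$$

The critical points referenced in the abstract are the solutions of $$\nabla\Phi(x)=0$$, and the equipotential surfaces are the level sets $$\Phi^{-1}(c)$$, which is what makes Morse-theoretic surgery arguments applicable.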
-
Free, publicly-accessible full text available August 8, 2025
-
Although adaptive cancer therapy shows promise in integrating evolutionary dynamics into treatment scheduling, the stochastic nature of cancer evolution has seldom been taken into account. Various sources of random perturbations can impact the evolution of heterogeneous tumors, making performance metrics of any treatment policy random as well. In this paper, we propose an efficient method for selecting optimal adaptive treatment policies under randomly evolving tumor dynamics. The goal is to reduce the cumulative "cost" of treatment, a combination of the total amount of drugs used and the total treatment time. As this cost also becomes random in any stochastic setting, we maximize the probability of reaching the treatment goals (tumor stabilization or eradication) without exceeding a pre-specified cost threshold (or "budget"). We use a novel Stochastic Optimal Control formulation and Dynamic Programming to find such "threshold-aware" optimal treatment policies. Our approach enables an efficient algorithm that computes these policies for a range of threshold values simultaneously. Compared to treatment plans shown to be optimal in a deterministic setting, the new "threshold-aware" policies significantly improve the chances of the therapy succeeding within the budget, which correlates with lower overall drug usage. We illustrate this method using two specific examples, but our approach is far more general and provides a new tool for optimizing adaptive therapies based on a broad range of stochastic cancer models.
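The "threshold-aware" idea can be sketched with a small discrete dynamic program (a toy, not the paper's continuous-time PDE formulation; the state space, action set, and all names below are invented for illustration): maximize the probability of reaching the treatment goal before the accumulated cost exceeds the budget.

```python
from functools import lru_cache

# Toy threshold-aware dynamic program. State: tumor burden s in {0..S_MAX};
# success means driving the burden to 0 before the budget runs out.
S_MAX = 5
ACTIONS = {
    # name: (per-step cost, prob. the burden shrinks by 1; otherwise it grows)
    "treat": (2.0, 0.8),
    "rest":  (1.0, 0.3),
}

@lru_cache(maxsize=None)
def success_prob(s, budget):
    """Max probability of reaching burden 0 without total cost > budget."""
    if s == 0:
        return 1.0                      # goal reached
    if s > S_MAX or budget <= 0:
        return 0.0                      # tumor escaped, or budget exhausted
    best = 0.0
    for cost, p_shrink in ACTIONS.values():
        if cost > budget:
            continue                    # action not affordable
        val = (p_shrink * success_prob(s - 1, budget - cost)
               + (1 - p_shrink) * success_prob(min(s + 1, S_MAX + 1),
                                               budget - cost))
        best = max(best, val)
    return best
```

The key structural point mirrors the abstract: solving the recursion for one large budget automatically yields the optimal success probabilities for all smaller budgets encountered along the way, which is why a whole range of threshold values can be handled at once.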
-
Ruiz, Francisco; Dy, Jennifer; van de Meent, Jan-Willem (Eds.)
We consider a task of surveillance-evading path-planning in a continuous setting. An Evader strives to escape from a 2D domain while minimizing the risk of detection (and immediate capture). The probability of detection is path-dependent and determined by the spatially inhomogeneous surveillance intensity, which is fixed but a priori unknown and gradually learned in the multi-episodic setting. We introduce a Bayesian reinforcement learning algorithm that relies on Gaussian Process regression (to model the surveillance intensity function based on the information from prior episodes), numerical methods for Hamilton-Jacobi PDEs (to plan the best continuous trajectories based on the current model), and Confidence Bounds (to balance exploration vs. exploitation). We use numerical experiments and regret metrics to highlight the significant advantages of our approach compared to traditional graph-based algorithms of reinforcement learning.
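The Gaussian Process regression plus confidence-bound ingredient can be sketched in a few lines (a 1-D illustration under invented names and data; the actual algorithm works in 2-D and couples this surrogate with Hamilton-Jacobi PDE solvers for trajectory planning):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale**2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """Posterior mean and pointwise standard deviation of a zero-mean GP."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_ss = rbf_kernel(x_query, x_query)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

def optimistic_intensity(mean, std, beta=2.0):
    """Lower confidence bound: plan against mean - beta*std, so poorly
    explored regions look attractive and get visited (exploration)."""
    return mean - beta * std

# Observed surveillance intensities at three visited locations.
x_obs = np.array([0.0, 0.5, 1.0])
y_obs = np.array([0.2, 0.8, 0.3])
x_grid = np.linspace(0.0, 1.0, 5)
mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
lcb = optimistic_intensity(mu, sigma)
```

Planning trajectories against the lower confidence bound `lcb` rather than the mean is what trades off exploration against exploitation: regions with high posterior uncertainty are treated optimistically until they are observed.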