Title: Solving Singular Control Problems in Mathematical Biology Using PASA
In this chapter, we demonstrate how to use a nonlinear polyhedral constrained optimization solver called the Polyhedral Active Set Algorithm (PASA) for solving a general singular control problem. We present a method for discretizing a general optimal control problem that uses the gradient of the Lagrangian to compute the gradient of the cost functional, so that PASA can be applied. When a numerical solution contains artifacts that resemble “chattering,” a phenomenon in which the control oscillates wildly along the singular region, we recommend regularizing the singular control problem by adding to the cost functional a scalar multiple of the total variation of the control, where the scalar is treated as a tuning parameter. We then demonstrate PASA’s performance on three singular control problems that arise in different applications of mathematical biology. We also provide some exposition on the heuristics we use to determine an appropriate size for the tuning parameter.
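To make the discretization-plus-regularization idea concrete, here is a minimal sketch in Python under illustrative assumptions: the control is sampled on a uniform grid, and a scalar multiple rho of its discrete total variation is added to a generic discretized cost. The function toy_cost and the candidate rho values are stand-ins, not the chapter's actual problems or heuristics.

```python
import numpy as np

def total_variation(u):
    """Discrete total variation of the control: sum of |u[k+1] - u[k]|."""
    return np.sum(np.abs(np.diff(u)))

def regularized_cost(u, rho, cost):
    """Discretized cost plus the TV penalty; rho is the tuning parameter."""
    return cost(u) + rho * total_variation(u)

# Hypothetical stand-in for a discretized cost functional.
def toy_cost(u):
    return float(np.mean(u ** 2))

u = np.sin(np.linspace(0.0, 6.0, 200))   # a sample discretized control
for rho in (0.0, 1e-3, 1e-2):            # candidate tuning parameters
    print(rho, regularized_cost(u, rho, toy_cost))
```

A larger rho suppresses chattering-like oscillation in the computed control at the cost of some bias, which is why the weight is treated as a tuning parameter to be selected heuristically.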
Award ID(s):
2031213, 1522629, 1819002
PAR ID:
10564712
Author(s) / Creator(s):
Editor(s):
Tuncer, N; Martcheva, M; Prosper, O; Childs, L
Publisher / Repository:
World Scientific Publishing Company
Date Published:
Volume:
Computational and Mathematical Population Dynamics
Issue:
Chapter 9
ISBN:
978-981-12-6302-6
Page Range / eLocation ID:
319-419
Subject(s) / Keyword(s):
Singular control; Total variation; Bounded variation; Regularization; Pontryagin’s Minimum Principle; Switching function; Fishery problem; Plant problem; SIR problem
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Matni, Nikolai; Morari, Manfred; Pappas, George J. (Ed.)
    Controller tuning is a vital step in ensuring that a controller delivers its designed performance. DiffTune has been proposed as an automatic tuning method that unrolls the dynamical system and controller into a computational graph and uses auto-differentiation to obtain the gradient for the controller’s parameter update. However, DiffTune uses vanilla gradient descent to iteratively update the parameters, whose performance largely depends on the choice of the learning rate (a hyperparameter). In this paper, we propose to use hyperparameter-free methods to update the controller parameters. We find the optimal parameter update by maximizing the loss reduction, where a predicted loss based on the approximated state and control is used for the maximization. Two methods are proposed to optimally update the parameters and are compared with related variants in simulations on a Dubins car and a quadrotor. Simulation experiments show that the proposed first-order method outperforms the hyperparameter-based methods and is more robust than the second-order hyperparameter-free methods.
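As a loose illustration of the hyperparameter-free update described above (a sketch under simplifying assumptions, not the paper's method): the gradient obtained by auto-differentiation is combined with a predicted loss, and the update is chosen to maximize the predicted loss reduction rather than being fixed by a learning rate. The candidate-step grid below is an illustrative stand-in for the paper's maximization, and the quadratic loss stands in for the unrolled simulation loss.

```python
import numpy as np

def best_first_order_update(theta, grad, predicted_loss):
    """Try descent steps along -grad and keep the one with the lowest
    predicted loss; no hand-tuned learning rate is involved."""
    candidates = [theta - s * grad for s in np.geomspace(1e-4, 1.0, 16)]
    return min(candidates, key=predicted_loss)

# Toy quadratic loss standing in for the unrolled simulation loss.
loss = lambda th: float(np.sum((th - 1.0) ** 2))
theta = np.zeros(3)
for _ in range(20):
    g = 2.0 * (theta - 1.0)        # analytic gradient of the toy loss
    theta = best_first_order_update(theta, g, loss)
print(theta)                       # approaches the optimum [1, 1, 1]
```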
  2. In this paper, we study the finite-sum convex optimization problem, focusing on the general convex case. Recently, the study of variance-reduced (VR) methods and their accelerated variants has made exciting progress. However, the step size used in existing VR algorithms typically depends on the smoothness parameter, which is often unknown and requires tuning in practice. To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG). Our algorithms do not require knowledge of the smoothness parameter. AdaVRAE uses $$\mathcal{O}\left(n\log\log n+\sqrt{\frac{n\beta}{\epsilon}}\right)$$ and AdaVRAG uses $$\mathcal{O}\left(n\log\log n+\sqrt{\frac{n\beta\log\beta}{\epsilon}}\right)$$ gradient evaluations to attain an $$\mathcal{O}(\epsilon)$$-suboptimal solution, where $$n$$ is the number of functions in the finite sum and $$\beta$$ is the smoothness parameter. This result matches the best-known convergence rate of non-adaptive VR methods and improves upon the convergence of the state-of-the-art adaptive VR method, AdaSVRG. We demonstrate the superior performance of our algorithms compared with previous methods in experiments on real-world datasets.
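For intuition about the variance-reduction-plus-adaptivity combination, below is a plain SVRG loop with an AdaGrad-style step size. This is only illustrative; it is not AdaVRAE or AdaVRAG, whose accelerated updates and step-size rules differ, and the least-squares instance is an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
b = rng.normal(size=200)

def grad_i(x, i):
    """Gradient of the i-th summand f_i(x) = 0.5 * (a_i @ x - b_i)**2."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Average gradient over the whole finite sum."""
    return A.T @ (A @ x - b) / len(b)

x = np.zeros(5)
accum = 1e-8                           # AdaGrad-style accumulator
for epoch in range(30):
    snapshot, mu = x.copy(), full_grad(x)
    for _ in range(len(b)):
        i = rng.integers(len(b))
        g = grad_i(x, i) - grad_i(snapshot, i) + mu   # variance-reduced gradient
        accum += float(g @ g)
        x -= g / np.sqrt(accum)        # step size adapts to observed gradients
print(np.linalg.norm(full_grad(x)))    # gradient norm shrinks across epochs
```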
  3. The purpose of this paper is to develop a practical strategy for accelerating Newton’s method in the vicinity of singular points. We present an adaptive safeguarding scheme with a tunable parameter, which we call adaptive γ-safeguarding, that one can use in tandem with Anderson acceleration to improve the performance of Newton’s method when solving problems at or near singular points. The key features of adaptive γ-safeguarding are that it converges locally for singular problems, and it can detect nonsingular problems automatically, in which case the Newton-Anderson iterates are scaled towards a standard Newton step. The result is a flexible algorithm that performs well for singular and nonsingular problems and can recover convergence from both standard Newton and Newton-Anderson with the right parameter choice. This leads to faster local convergence compared to both Newton’s method and Newton-Anderson without safeguarding, with effectively no additional computational cost. We demonstrate three strategies one can use when implementing Newton-Anderson and γ-safeguarded Newton-Anderson to solve parameter-dependent problems near singular points. For our benchmark problems, we take two parameter-dependent incompressible flow systems: flow in a channel and Rayleigh-Bénard convection.
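A minimal sketch of the underlying iteration, under simplifying assumptions: depth-one Newton-Anderson computes a mixing weight γ from consecutive Newton steps, and the safeguard below is a crude clip of γ toward zero (the plain Newton step). The paper's adaptive γ-safeguarding rule is more refined, and the scalar test problem F(x) = x² is a standard singular example, not one of the paper's benchmarks.

```python
import numpy as np

def newton_anderson(F, J, x, sigma=0.5, tol=1e-10, max_iter=100):
    """Depth-one Newton-Anderson with a simple gamma safeguard."""
    x_prev, w_prev = None, None
    for _ in range(max_iter):
        w = np.linalg.solve(J(x), -F(x))           # Newton step
        if x_prev is None:
            x_new = x + w                          # first iterate: plain Newton
        else:
            d = w - w_prev
            gamma = float(w @ d) / float(d @ d)    # depth-1 Anderson weight
            gamma = np.clip(gamma, -sigma, sigma)  # safeguard toward Newton
            x_new = x + w - gamma * ((x - x_prev) + d)
        x_prev, w_prev, x = x, w, x_new
        if np.linalg.norm(w) < tol:
            break
    return x

# Scalar singular problem F(x) = x**2 (double root at 0), where plain
# Newton converges only linearly.
F = lambda x: np.array([x[0] ** 2])
J = lambda x: np.array([[2.0 * x[0] if x[0] != 0 else 1.0]])
print(newton_anderson(F, J, np.array([1.0])))      # approaches 0
```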
  4. Abstract This article presents an extremum-seeking control (ESC) algorithm for unmodeled nonlinear systems with known steady-state gain and generally non-convex cost functions with bounded curvature. The main contribution of this article is a novel gradient estimator, which uses a polyhedral set that characterizes all gradient estimates consistent with the collected data. The gradient estimator is posed as a quadratic program, which selects the gradient estimate that provides the best worst-case convergence of the closed-loop Lyapunov function. We show that the polyhedral-based gradient estimator ensures the stability of the closed-loop system formed by the plant and the optimization algorithm. Furthermore, the estimated gradient provably produces the optimal robust convergence. We demonstrate our ESC controller on three benchmark examples and one practical example, which show that our ESC achieves fast and robust convergence to the optimal equilibrium.
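To illustrate the polyhedral construction (a sketch, not the paper's estimator): given a curvature bound M, each pair of measurements yields two linear inequalities that any consistent gradient estimate must satisfy, and an optimization over that polyhedron selects one estimate. Minimizing ||g||² below is an illustrative selection rule; the paper instead selects the estimate with the best worst-case Lyapunov decrease.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_gradient(X, y, M):
    """Pick a gradient estimate from the polyhedron of estimates that are
    consistent with all sampled pairs under curvature bound M."""
    n = X.shape[1]
    cons = []
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            dx, dy = X[i] - X[j], y[i] - y[j]
            bound = 0.5 * M * float(dx @ dx)
            # |dy - g @ dx| <= bound, split into two linear inequalities
            cons.append({"type": "ineq",
                         "fun": lambda g, dx=dx, dy=dy, b=bound: b - (dy - g @ dx)})
            cons.append({"type": "ineq",
                         "fun": lambda g, dx=dx, dy=dy, b=bound: b + (dy - g @ dx)})
    # Illustrative selection rule: the minimum-norm consistent gradient.
    return minimize(lambda g: float(g @ g), np.zeros(n), constraints=cons).x

X = np.array([[0.0], [0.1], [0.2]])     # sampled inputs (one-dimensional)
y = (X[:, 0] - 1.0) ** 2                # toy cost with curvature M = 2
print(estimate_gradient(X, y, M=2.0))   # about -1.8, the slope at x = 0.1
```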
  5. Abstract This paper is motivated by the study of differential brain activity in response to multiple experimental condition presentations in intracranial electroencephalography (iEEG) experiments. Contrasting effects of experimental conditions are often zero in most regions and nonzero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest focused not only on estimating nonparametric functions but also on recovering the function supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform the sparsity of functions to its sparse vector counterpart of increasing dimension, and propose a fast nonconvex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the L2 norm and exhibit a phase transition phenomenon. For support estimation, we derive a convergence rate under the norm that leads to a selection consistency property under δ-sparsity, and obtain a result under strict sparsity using a simple sufficient regularity condition. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulations and an application to a novel iEEG dataset to study multisensory integration.
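One small piece of the construction can be illustrated directly: because B-spline basis functions have local support, a coefficient function that vanishes over part of its domain corresponds to a coefficient vector with zero blocks, which is what lets group penalties recover function supports. The knots and coefficients below are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                            # cubic B-splines
knots = np.concatenate(([0] * k, np.linspace(0, 1, 9), [1] * k))
coef = np.zeros(len(knots) - k - 1)
coef[-4:] = [0.5, 1.0, 1.0, 0.5]                 # nonzero only near t = 1
f = BSpline(knots, coef, k)

t = np.linspace(0, 1, 11)
print(np.round(f(t), 3))                         # zero on roughly [0, 0.5)
```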