In this paper, we propose a new algorithm for solving convex-concave saddle-point problems using regret minimization in the repeated game framework. To do so, we introduce the Conic Blackwell Algorithm+ (CBA+), a new parameter- and scale-free regret minimizer for general convex compact sets. CBA+ is based on Blackwell approachability and attains O(√T) regret. We show how to efficiently instantiate CBA+ for many decision sets of interest, including the simplex, ℓp norm balls, and ellipsoidal confidence regions in the simplex. Based on CBA+, we introduce SP-CBA+, a new parameter-free algorithm for solving convex-concave saddle-point problems that achieves an O(1/√T) ergodic convergence rate. In our simulations, we demonstrate the wide applicability of SP-CBA+ on several standard saddle-point problems from the optimization and operations research literature, including matrix games, extensive-form games, distributionally robust logistic regression, and Markov decision processes. In each setting, SP-CBA+ achieves state-of-the-art numerical performance and outperforms classical methods, without the need for any choice of step sizes or other algorithmic parameters. Funding: J. Grand-Clément is supported by the Agence Nationale de la Recherche [Grant 11-LABX-0047] and by Hi! Paris. C. Kroer is supported by the Office of Naval Research [Grant N00014-22-1-2530] and by the National Science Foundation [Grant IIS-2147361].
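As a concrete illustration of the repeated-game framework this abstract describes, the sketch below runs regret matching+ (RM+) self-play on a small matrix game. RM+ is the classic simplex regret minimizer that CBA+ generalizes, so this is a hedged stand-in for the idea, not the paper's CBA+ algorithm; the game matrix and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def rm_plus_selfplay(A, T=50000):
    """Regret matching+ self-play on the zero-sum game max_x min_y x^T A y.

    Returns the ergodic (uniform) averages of both players' strategies,
    which converge to a Nash equilibrium at an O(1/sqrt(T)) rate.
    """
    n, m = A.shape
    Qx, Qy = np.zeros(n), np.zeros(m)      # nonnegative regret accumulators
    xbar, ybar = np.zeros(n), np.zeros(m)  # running strategy averages
    for _ in range(T):
        x = Qx / Qx.sum() if Qx.sum() > 0 else np.full(n, 1.0 / n)
        y = Qy / Qy.sum() if Qy.sum() > 0 else np.full(m, 1.0 / m)
        ux = A @ y            # row player's action payoffs (maximizer)
        uy = -(A.T @ x)       # column player's action payoffs (minimizer)
        Qx = np.maximum(0.0, Qx + ux - x @ ux)   # RM+ update: clip at zero
        Qy = np.maximum(0.0, Qy + uy - y @ uy)
        xbar += x
        ybar += y
    return xbar / T, ybar / T

# toy 2x2 game with mixed equilibrium x* = y* = (0.4, 0.6) and value 0.2
A = np.array([[2.0, -1.0], [-1.0, 1.0]])
xbar, ybar = rm_plus_selfplay(A)
# saddle-point (duality) gap of the averaged strategies; shrinks as O(1/sqrt(T))
gap = (A @ ybar).max() - (xbar @ A).min()
```

The key design point mirrored here is parameter-freeness: no step size appears anywhere, only the clipped regret accumulators.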
This content will become publicly available on March 1, 2026
Adjoint-Based Design Optimization of Stability Constrained Systems
A partial differential equation (PDE) constrained design optimization problem usually optimizes a characteristic of a dynamical system around an equilibrium point. However, a commonly omitted constraint is the linear stability constraint at the equilibrium point, and omitting it undermines the optimized solution’s applicability. To enforce the linear stability constraint in practical gradient-based optimization, the derivatives must be computed accurately, and their computational cost must scale favorably with the number of design variables. In this paper, we propose an algorithm based on the coupled adjoint method and algorithmic differentiation that can compute the derivative of such a constraint accurately and efficiently. We verify the proposed method on several simple low-dimensional dynamical systems. The relative difference between the adjoint method and finite differences ranges from [Formula: see text] to [Formula: see text]. The proposed method is demonstrated through several optimizations, including a nonlinear aeroelastic optimization. The proposed algorithm has the potential to be applied to more complex problems involving large-scale nonlinear PDEs, such as aircraft flutter and buffet suppression.
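The verification described above, comparing derivatives of a stability constraint against finite differences, can be illustrated on a toy system. The sketch below differentiates the rightmost eigenvalue of a small parameterized Jacobian using the standard left/right-eigenvector sensitivity formula dλ/dp = wᵀ(∂A/∂p)v / (wᵀv). The matrix and parameter value are invented for illustration; this is not the paper's coupled-adjoint implementation for PDE-constrained systems.

```python
import numpy as np

def jacobian(p):
    # toy system Jacobian; real, distinct eigenvalues near p = 0.3
    return np.array([[-1.0 + p, 2.0],
                     [0.5, -2.0]])

def d_jacobian_dp(p):
    # analytic partial derivative of the Jacobian w.r.t. the design variable p
    return np.array([[1.0, 0.0],
                     [0.0, 0.0]])

def rightmost_eig(A):
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)           # stability-governing eigenvalue
    return vals[i], vecs[:, i]

def eig_sensitivity(p):
    A = jacobian(p)
    lam, v = rightmost_eig(A)
    wvals, wvecs = np.linalg.eig(A.T)  # left eigenvectors of A
    j = np.argmin(np.abs(wvals - lam)) # pair the matching eigenvalue
    w = wvecs[:, j]
    return (w @ d_jacobian_dp(p) @ v) / (w @ v)

p = 0.3
analytic = np.real(eig_sensitivity(p))
# central finite difference for verification
h = 1e-6
fd = (rightmost_eig(jacobian(p + h))[0].real
      - rightmost_eig(jacobian(p - h))[0].real) / (2 * h)
rel_diff = abs(analytic - fd) / abs(fd)
```

The point of the comparison is the same as in the abstract: the analytic (adjoint-style) derivative and the finite difference should agree to many digits, while the analytic route avoids one extra eigensolve per design variable.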
- Award ID(s): 2223670
- PAR ID: 10629984
- Publisher / Repository: AIAA
- Date Published:
- Journal Name: AIAA Journal
- Volume: 63
- Issue: 3
- ISSN: 0001-1452
- Page Range / eLocation ID: 1036 to 1048
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We study the impact of nonconvexity on the complexity of nonsmooth optimization, emphasizing objectives such as piecewise linear functions, which may not be weakly convex. We focus on a dimension-independent analysis, slightly modifying a 2020 black-box algorithm of Zhang-Lin-Jegelka-Sra-Jadbabaie that approximates an ϵ-stationary point of any directionally differentiable Lipschitz objective using [Formula: see text] calls to a specialized subgradient oracle and a randomized line search. Seeking by contrast a deterministic method, we present a simple black-box version that achieves [Formula: see text] for any difference-of-convex objective and [Formula: see text] for the weakly convex case. Our complexity bound depends on a natural nonconvexity modulus that is related, intriguingly, to the negative part of directional second derivatives of the objective, understood in the distributional sense. Funding: This work was supported by the National Science Foundation [Grant DMS-2006990].
-
In this paper, we study the generalized subdifferentials and the Riemannian gradient subconsistency that are the basis for non-Lipschitz optimization on embedded submanifolds of ℝ^n. We then propose a Riemannian smoothing steepest descent method for non-Lipschitz optimization on complete embedded submanifolds of ℝ^n. We prove that any accumulation point of the sequence generated by the Riemannian smoothing steepest descent method is a stationary point associated with the smoothing function employed in the method, which is necessary for the local optimality of the original non-Lipschitz problem. We also prove that any accumulation point of the sequence generated by our method that satisfies the Riemannian gradient subconsistency is a limiting stationary point of the original non-Lipschitz problem. Numerical experiments are conducted to demonstrate the advantages of Riemannian [Formula: see text] [Formula: see text] optimization over Riemannian [Formula: see text] optimization for finding sparse solutions and the effectiveness of the proposed method. Funding: C. Zhang was supported in part by the National Natural Science Foundation of China [Grant 12171027] and the Natural Science Foundation of Beijing [Grant 1202021]. X. Chen was supported in part by the Hong Kong Research Council [Grant PolyU15300219]. S. Ma was supported in part by the National Science Foundation [Grants DMS-2243650 and CCF-2308597], the UC Davis Center for Data Science and Artificial Intelligence Research Innovative Data Science Seed Funding Program, and a startup fund from Rice University.
-
Prussian blue analogs (PBAs) are an important material class for aqueous electrochemical separations and energy storage owing to their ability to reversibly intercalate monovalent cations. However, incorporating interstitial H2O molecules in the ab initio study of PBAs is technically challenging, though essential to understanding the interactions between interstitial water, interstitial cations, and the framework lattice that affect intercalation potential and cation intercalation selectivity. Accordingly, we introduce and use a method that combines the efficiency of machine-learning models with the accuracy of ab initio calculations to elucidate mechanisms of (1) lattice expansion upon intercalation of cations of different sizes, (2) selectivity bias toward intercalating hydrophobic cations of large size, and (3) semiconductor–conductor transitions from anhydrous to hydrated lattices. We analyze the PBA nickel hexacyanoferrate [[Formula: see text]] due to its structural stability and electrochemical activity in aqueous electrolytes. Here, grand potential analysis is used to determine the equilibrium degree of hydration for a given intercalated cation (Na+, K+, or Cs+) and [Formula: see text] oxidation state based on pressure-equilibrated structures determined with the aid of machine learning and simulated annealing. The results imply new directions for the rational design of future cation-intercalation electrode materials that optimize performance in various electrochemical applications, and they demonstrate the importance of choosing an appropriate calculation framework to predict the properties of PBA lattices accurately.
-
We study optimal design problems in which the goal is to choose a set of linear measurements to obtain the most accurate estimate of an unknown vector. We study the A-optimal design variant, where the objective is to minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. We introduce the proportional volume sampling algorithm to obtain nearly optimal bounds in the asymptotic regime when the number k of measurements made is significantly larger than the dimension d, and we obtain the first approximation algorithms whose approximation factor does not degrade with the number of possible measurements when k is small. The algorithm also gives approximation guarantees for other optimal design objectives, such as D-optimality and the generalized ratio objective, matching or improving the previously best-known results. We further show that bounds similar to ours cannot be obtained for E-optimal design and that A-optimal design is NP-hard to approximate within a fixed constant when k = d.
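To make the A-optimal objective from this abstract concrete: choosing a set S of measurement vectors means minimizing tr(M(S)⁻¹) with M(S) = Σ_{i∈S} vᵢvᵢᵀ, which equals the average variance of the least-squares estimate. The brute-force sketch below solves a tiny instance exactly; it is only a definitional illustration with made-up candidate vectors, not the proportional volume sampling algorithm.

```python
import itertools
import numpy as np

def a_objective(vectors):
    """A-optimality value tr(M^{-1}) for M = sum_i v_i v_i^T (inf if singular)."""
    M = sum(np.outer(v, v) for v in vectors)
    if np.linalg.matrix_rank(M) < M.shape[0]:
        return np.inf
    return np.trace(np.linalg.inv(M))

def best_a_design(candidates, k):
    """Exhaustively find the size-k subset minimizing the A-optimal objective."""
    best = min(itertools.combinations(range(len(candidates)), k),
               key=lambda S: a_objective([candidates[i] for i in S]))
    return best, a_objective([candidates[i] for i in best])

# three candidate measurement directions in dimension d = 2, budget k = 2
cands = [np.array([1.0, 0.0]),
         np.array([0.0, 1.0]),
         np.array([1.0, 1.0]) / np.sqrt(2)]
S, val = best_a_design(cands, k=2)
# the orthogonal pair {e1, e2} gives M = I and objective tr(I^{-1}) = 2,
# beating either axis paired with the diagonal direction (objective 4)
```

The exponential enumeration here is exactly why approximation algorithms such as proportional volume sampling matter once the number of candidates grows.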
