Title: Optimal finite-time processes in weakly driven overdamped Brownian motion
Abstract: The complete physical understanding of the optimization of thermodynamic work is still an important open problem in stochastic thermodynamics. We address this issue using the Hamiltonian approach of linear-response theory for finite-time, weak processes. We derive the associated Euler–Lagrange equation and discuss its main features, illustrating them with the paradigmatic example of driven Brownian motion in the overdamped regime. We show that the optimal protocols obtained either coincide, in the appropriate limit, with the exact solutions of stochastic thermodynamics or can even be identical to them, presenting the well-known jumps. However, our approach reveals that jumps at the extremities of the process are a good optimization strategy in the regime of fast but weak processes for any driven system. Additionally, we show that fast-but-weak optimal protocols are time-reversal symmetric, a property that has until now remained hidden in the exact solutions far from equilibrium.
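The jumps mentioned in the abstract can be seen numerically in the standard benchmark for this problem: a Brownian particle dragged by a moving harmonic trap (the Schmiedl–Seifert setup). The sketch below is illustrative only, not the paper's method: the parameter values, the Euler–Maruyama discretization, and the units (friction, stiffness, and temperature all set to 1) are my own choices, and the "optimal" protocol is the known exact solution, which is linear in the interior with jumps at both ends.

```python
import numpy as np

rng = np.random.default_rng(0)
k, gamma, kT = 1.0, 1.0, 1.0        # trap stiffness, friction, temperature (illustrative units)
lam_f, tf = 1.0, 1.0                # total trap displacement and protocol duration
n_steps, n_traj = 1000, 20000
dt = tf / n_steps
t = np.linspace(0.0, tf, n_steps + 1)

def mean_work(lam):
    """Euler-Maruyama estimate of the mean work for a trap protocol lam[i] = lambda(t_i)."""
    x = rng.normal(0.0, np.sqrt(kT / k), n_traj)   # equilibrated start, trap at lambda = 0
    W = np.zeros(n_traj)
    for i in range(n_steps):
        # Sekimoto rule: work = change of the potential at fixed x when lambda is stepped,
        # which automatically accounts for the work done by any jump in the protocol
        W += 0.5 * k * ((x - lam[i + 1]) ** 2 - (x - lam[i]) ** 2)
        # then let the particle relax under the new trap position
        x += -(k / gamma) * (x - lam[i + 1]) * dt \
             + np.sqrt(2.0 * kT * dt / gamma) * rng.normal(size=n_traj)
    return W.mean()

lam_lin = lam_f * t / tf                                     # naive linear ramp
lam_opt = lam_f * (t + gamma / k) / (tf + 2.0 * gamma / k)   # linear in the interior ...
lam_opt[0], lam_opt[-1] = 0.0, lam_f                         # ... with jumps at both ends

W_lin, W_opt = mean_work(lam_lin), mean_work(lam_opt)
print(W_lin, W_opt)
```

With these parameters the analytic mean works are e^(-1) ≈ 0.368 for the linear ramp and 1/3 for the jump protocol, so the simulation shows the end-point jumps paying off even at this moderate duration.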
Award ID(s): 2010127
PAR ID: 10369700
Author(s) / Creator(s): ; ;
Publisher / Repository: IOP Publishing
Date Published:
Journal Name: Journal of Physics Communications
Volume: 6
Issue: 8
ISSN: 2399-6528
Page Range / eLocation ID: Article No. 083001
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: This work proposes a general framework for analyzing noise-driven transitions in spatially extended non-equilibrium systems and explaining the emergence of coherent patterns beyond the instability onset. The framework relies on stochastic parameterization formulas to reduce the complexity of the original equations while preserving the essential dynamical effects of unresolved scales. The approach is flexible and operates for both Gaussian noise and non-Gaussian noise with jumps. Our stochastic parameterization formulas offer two key advantages. First, they can approximate stochastic invariant manifolds when these manifolds exist. Second, even when such manifolds break down, our formulas can be adapted through a simple optimization of their constitutive parameters. This allows us to handle scenarios with weak time-scale separation in which the system has undergone multiple transitions, resulting in large-amplitude solutions not captured by invariant manifolds or other time-scale separation methods. The optimized stochastic parameterizations then capture how small-scale noise impacts larger scales through the system's nonlinear interactions. This effect is achieved by the very fabric of our parameterizations: they incorporate non-Markovian (memory-dependent) coefficients into the reduced equation. These coefficients account for the noise's past influence, not just its current value, using a finite memory length selected for optimal performance. The specific memory function, which determines how this past influence is weighted, depends on both the strength of the noise and how it interacts with the system's nonlinearities. Remarkably, training our theory-guided reduced models on a single noise path effectively learns the optimal memory length for out-of-sample predictions. The approach indeed retains good accuracy in predicting noise-induced transitions, including rare events, when tested against a large ensemble of different noise paths. This success stems from our hybrid approach, which combines analytical understanding with data-driven learning, avoiding a key limitation of purely data-driven methods: their struggle to generalize to unseen scenarios, also known as the 'extrapolation problem.'
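The memory-dependent forcing described above can be caricatured in a few lines. This is a toy sketch, not the authors' parameterization formulas: the exponential kernel shape and the names `tau` (memory decay) and `mem_len` (finite memory length) are illustrative assumptions standing in for the optimized memory function of the text.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 5000
xi = rng.normal(0.0, np.sqrt(dt), n)       # unresolved-scale noise increments (Gaussian case)

def memory_term(xi, tau, mem_len):
    """Finite-memory convolution m_k = sum_{j < mem_len} exp(-j*dt/tau) * xi_{k-j}:
    the reduced equation is forced by a weighted history of the noise path,
    not just its current value."""
    kernel = np.exp(-np.arange(mem_len) * dt / tau)
    return np.convolve(xi, kernel)[:len(xi)]

# Reduced model for a resolved amplitude a(t): Markovian damping plus the
# memory-dependent forcing carrying the noise's past influence.
tau, mem_len, lam, sigma = 0.2, 200, 1.0, 0.5
m = memory_term(xi, tau, mem_len)
a = np.zeros(n)
for k in range(n - 1):
    a[k + 1] = a[k] + (-lam * a[k] + sigma * m[k]) * dt
```

Because the forcing is a weighted history rather than white noise, the reduced trajectory responds smoothly to the noise path, which is the qualitative effect the optimized parameterizations exploit.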
  2. Abstract: Stochastic thermodynamics lays down a broad framework to revisit the venerable concepts of heat, work and entropy production for individual stochastic trajectories of mesoscopic systems. Remarkably, this approach, relying on stochastic equations of motion, introduces time into the description of thermodynamic processes, which opens the way to controlling them finely. As a result, the field of finite-time thermodynamics of mesoscopic systems has blossomed. In this article, after introducing a few concepts of control for isolated mechanical systems evolving according to deterministic equations of motion, we review the different strategies that have been developed to realize finite-time state-to-state transformations in both the over- and underdamped regimes, through the proper design of time-dependent control parameters/driving. The systems under study are stochastic, epitomized by a Brownian object immersed in a fluid; they are thus strongly coupled to their environment, which plays the role of a reservoir. Interestingly, a few of these methods (inverse engineering, counterdiabatic driving, fast-forward) are directly inspired by their counterparts in quantum control. The review also analyzes control through reservoir engineering. Besides the reachability of a given target state from a known initial state, the question of the optimal path is discussed. Optimality is here defined with respect to a cost function, a subject intimately related to the field of information thermodynamics and to the question of speed limits. Another natural extension discussed deals with the connection between arbitrary states or non-equilibrium steady states. This field of control in stochastic thermodynamics enjoys a wealth of applications, ranging from optimal mesoscopic heat engines to population control in biological systems.
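The inverse-engineering strategy mentioned above can be sketched for the simplest case, an overdamped particle in a harmonic trap of time-dependent stiffness: prescribe the variance path sigma^2(t) between the two equilibrium values, then solve the variance equation backwards for the stiffness kappa(t). The specific interpolating polynomial, units, and parameter values below are my own illustrative choices, not taken from the review.

```python
import numpy as np

kT, gamma = 1.0, 1.0
kappa_i, kappa_f, tf = 1.0, 4.0, 0.2       # stiffen the trap in a short time
n = 2000
t = np.linspace(0.0, tf, n + 1)
s = t / tf

# Prescribe the variance path between the equilibrium values kT/kappa, with
# vanishing slope at both ends so kappa(t) matches kappa_i and kappa_f there.
sig2_i, sig2_f = kT / kappa_i, kT / kappa_f
ramp = 10 * s**3 - 15 * s**4 + 6 * s**5    # smooth 0 -> 1 interpolant
sig2 = sig2_i + (sig2_f - sig2_i) * ramp
dsig2 = np.gradient(sig2, t)

# Invert the variance equation d(sigma^2)/dt = -(2*kappa/gamma)*sigma^2 + 2*kT/gamma
kappa = (2.0 * kT - gamma * dsig2) / (2.0 * sig2)

# Forward check: integrating the variance ODE under this kappa(t) hits the target
v = sig2_i
for i in range(n):
    v += (-(2.0 * kappa[i] / gamma) * v + 2.0 * kT / gamma) * (t[i + 1] - t[i])
print(kappa[0], kappa[-1], v, sig2_f)
```

The particle thus reaches the new equilibrium variance in a time much shorter than the natural relaxation time gamma/kappa, which is the essence of the engineered swift state-to-state transformations the review surveys.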
  3. We study statistical inference and distributionally robust solution methods for stochastic optimization problems, focusing on confidence intervals for optimal values and solutions that achieve exact coverage asymptotically. We develop a generalized empirical likelihood framework—based on distributional uncertainty sets constructed from nonparametric f-divergence balls—for Hadamard differentiable functionals, and in particular, stochastic optimization problems. As consequences of this theory, we provide a principled method for choosing the size of distributional uncertainty regions to provide one- and two-sided confidence intervals that achieve exact coverage. We also give an asymptotic expansion for our distributionally robust formulation, showing how robustification regularizes problems by their variance. Finally, we show that optimizers of the distributionally robust formulations we study enjoy (essentially) the same consistency properties as those in classical sample average approximations. Our general approach applies to quickly mixing stationary sequences, including geometrically ergodic Harris recurrent Markov chains. 
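The "robustification regularizes problems by their variance" claim can be checked numerically in the simplest case: the worst-case mean over a chi-square divergence ball, where the inner maximum has a closed form (tilting the empirical weights linearly in the centered losses, valid when the tilted weights stay nonnegative, as they do for this sample). The distribution and the radius `rho` below are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 2000, 1.0
z = rng.exponential(1.0, n)          # i.i.d. sample of a loss Z

# Exact worst case of E_P[Z] over {P : D_f(P || P_n) <= rho/n} with the
# chi-square divergence f(t) = (t - 1)^2 / 2.
zbar = z.mean()
S = np.sum((z - zbar) ** 2)
c = np.sqrt(2.0 * rho / (n ** 2 * S))
p = 1.0 / n + c * (z - zbar)         # worst-case probability weights
dro_value = p @ z

# Asymptotic expansion: robustness acts as a variance penalty on the mean
expansion = zbar + np.sqrt(2.0 * rho * z.var(ddof=1) / n)
print(dro_value, expansion)
```

The two numbers agree to several digits at this sample size, illustrating how the distributionally robust value equals the sample mean plus a variance-driven correction of order sqrt(rho/n).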
  4. A stochastic algorithm is proposed, analyzed, and tested experimentally for solving continuous optimization problems with nonlinear equality constraints. It is assumed that constraint function and derivative values can be computed but that only stochastic approximations are available for the objective function and its derivatives. The algorithm is of the sequential quadratic optimization variety. Distinguishing features of the algorithm are that it only employs stochastic objective gradient estimates that satisfy a relatively weak set of assumptions (while using neither objective function values nor estimates of them) and that it allows inexact subproblem solutions to be employed, the latter of which is particularly useful in large-scale settings when the matrices defining the subproblems are too large to form and/or factorize. Conditions are imposed on the inexact subproblem solutions that account for the fact that only stochastic objective gradient estimates are employed. Convergence results are established for the method. Numerical experiments show that the proposed method vastly outperforms a stochastic subgradient method and can outperform an alternative sequential quadratic programming algorithm that employs highly accurate subproblem solutions in every iteration. Funding: This material is based upon work supported by the National Science Foundation [Awards CCF-1740796 and CCF-2139735] and the Office of Naval Research [Award N00014-21-1-2532]. 
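The structure of a sequential quadratic optimization step with stochastic objective gradients can be sketched on a toy problem. This is not the paper's algorithm: it uses exact subproblem solves, an identity Hessian model, and a hand-picked diminishing step size, and the problem (projecting a noisily observed point onto the unit circle) is my own illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
b = np.array([2.0, 1.0])                   # true objective: f(x) = 0.5 * ||x - b||^2

def c(x):  return np.array([x @ x - 1.0])  # equality constraint: unit circle
def Jc(x): return (2 * x).reshape(1, 2)    # its (deterministic) Jacobian

x = np.array([1.0, 1.0])
H = np.eye(2)                              # simple SPD model of the Hessian
for k in range(4000):
    g = (x - b) + rng.normal(0.0, 0.5, 2)  # stochastic objective gradient estimate
    A = Jc(x)
    # SQP subproblem: min 0.5 d'Hd + g'd  s.t.  c + A d = 0   (KKT linear system)
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-g, -c(x)])
    d = np.linalg.solve(K, rhs)[:2]
    x = x + (2.0 / (k + 10)) * d           # diminishing steps average out the noise

x_star = b / np.linalg.norm(b)             # analytic solution: projection onto the circle
print(x, x_star)
```

Despite never seeing an exact objective gradient, the iterate settles near the analytic solution while driving the constraint violation to zero, which is the qualitative behavior the convergence theory guarantees.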
  5. Sampling-based motion planners such as RRT* and BIT*, when applied to kinodynamic motion planning, rely on steering functions to generate time-optimal solutions connecting sampled states. Implementing exact steering functions requires either analytical solutions to the time-optimal control problem, or nonlinear programming (NLP) solvers to solve the boundary value problem given the system's kinodynamic equations. Unfortunately, analytical solutions are unavailable for many real-world domains, and NLP solvers are prohibitively computationally expensive, hence fast and optimal kinodynamic motion planning remains an open problem. We provide a solution to this problem by introducing State Supervised Steering Function (S3F), a novel approach to learning time-optimal steering functions. S3F produces near-optimal solutions to the steering problem orders of magnitude faster than its NLP counterpart. Experiments conducted on three challenging robot domains show that RRT* using S3F significantly outperforms state-of-the-art planning approaches on both solution cost and runtime. We further provide a proof of probabilistic completeness of RRT* modified to use S3F.
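The steering-function interface the planner relies on, given two states, return the time-optimal duration and trajectory connecting them, can be illustrated with one of the few systems where an analytical solution exists: a velocity-bounded single integrator, which steers in a straight line at maximum speed. This sketch is my own illustration of the interface, not part of S3F; for genuinely kinodynamic systems no such closed form exists, which is the gap S3F fills with a learned steering function.

```python
import numpy as np

def steer(q0, q1, v_max=1.0, n_pts=10):
    """Exact time-optimal steering for a velocity-bounded single integrator:
    move along the straight line from q0 to q1 at maximum speed.
    Returns (duration, discretized trajectory)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    duration = np.linalg.norm(q1 - q0) / v_max
    path = np.linspace(q0, q1, n_pts)      # waypoints the planner can collision-check
    return duration, path

T, path = steer([0.0, 0.0], [3.0, 4.0], v_max=2.0)
print(T)   # 2.5
```

A sampling-based planner calls such a function for every candidate edge, so replacing an expensive NLP solve with a fast near-optimal approximation at this interface is what makes the learned approach pay off.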