Search for: All records

Award ID contains: 2205837

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (an administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Abstract: One of the more popular optimization methods in current use is the Adam optimizer. This is due, at least in part, to its effectiveness as a training algorithm for the deep neural networks associated with many machine learning tasks. In this paper, we introduce time delays into the Adam optimizer. Time delays typically have an adverse effect on dynamical systems, including optimizers, slowing a system's rate of convergence and potentially causing instabilities. However, our numerical experiments indicate that introducing time delays into the Adam optimizer can significantly improve its performance, often resulting in a much smaller loss value. Perhaps more surprising is that this improvement often scales with dimension: the higher the dimension, the greater the advantage of using time delays in improving loss values. Along with describing these results, we show that, for the time delays we consider, the temporal complexity of the delayed Adam optimizer remains the same as that of the undelayed optimizer, and that the algorithm's spatial complexity scales linearly in the length of the largest time delay. Last, we extend the theory of intrinsic stability to give a criterion under which the minima, either local or global, associated with the delayed Adam optimizer are stable. (A schematic sketch of a delayed Adam update appears after these listings.)
    Free, publicly-accessible full text available July 10, 2026
  2. Free, publicly-accessible full text available November 1, 2025
  3. A reservoir computer is a machine learning model that can be used to predict the future state(s) of time-dependent processes, e.g., dynamical systems. In practice, data in the form of an input signal are fed into the reservoir, and the trained reservoir is then used to predict the future state of this signal. We develop a new method for predicting not only the future dynamics of the input signal but also the future dynamics starting at an arbitrary initial condition of a system. The systems we consider are the Lorenz, Rössler, and Thomas systems restricted to their attractors. This method, which creates a global forecast, still uses only a single input signal to train the reservoir but breaks the signal into many smaller windowed signals. We examine how well this windowed method forecasts the dynamics of a system starting at an arbitrary point on the system's attractor and compare it to the standard method without windows. We find that the standard method has almost no ability to forecast anything but the original input signal, while the windowed method can capture the dynamics starting at most points on an attractor with significant accuracy. (A toy sketch of windowed reservoir training appears after these listings.)
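
For item 1, the following is a minimal sketch of what a time-delayed Adam update could look like. The abstract does not specify where the delay enters, so this sketch assumes, purely for illustration, that only the gradient is delayed and that past gradients are held in a fixed-length buffer; the function name delayed_adam, the delay length, and all hyperparameters are hypothetical and are not the paper's implementation. The buffer keeps memory linear in the delay length and leaves per-step cost unchanged, consistent with the complexity remarks in the abstract.

```python
import numpy as np
from collections import deque

def delayed_adam(grad_fn, x0, tau=5, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, n_steps=1000):
    """Adam in which the gradient used at each step is taken from (up to)
    `tau` steps earlier.  This placement of the delay is an assumption made
    for illustration; only the standard Adam recursions below are canonical.
    """
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)              # first-moment estimate
    v = np.zeros_like(x)              # second-moment estimate
    history = deque(maxlen=tau + 1)   # buffer of past gradients: O(tau) memory

    for t in range(1, n_steps + 1):
        history.append(grad_fn(x))
        g = history[0]                # oldest stored gradient (delay grows to tau)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction, as in standard Adam
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Illustrative use: minimize a 100-dimensional quadratic 0.5 * x^T A x.
A = np.diag(np.linspace(1.0, 10.0, 100))
x_min = delayed_adam(lambda x: A @ x, x0=np.ones(100), tau=5, lr=0.05,
                     n_steps=2000)
print(np.linalg.norm(x_min))  # should be close to 0
```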
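For item 3, the following is a toy sketch of windowed reservoir training, assuming a minimal echo-state network with a ridge-regression readout, one-step-ahead targets, and a short warm-up from a reset reservoir state in each window. The paper's actual windowing scheme, reservoir architecture, and training details are not given in the abstract, so the class ESN, the window and warm-up lengths, and the Euler-integrated Lorenz data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_trajectory(n_steps, dt=0.01, x0=(1.0, 1.0, 1.0),
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps (toy training data)."""
    traj = np.empty((n_steps, 3))
    x = np.array(x0, dtype=float)
    for i in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[i] = x
    return traj

class ESN:
    """Minimal echo-state network with a linear ridge-regression readout."""
    def __init__(self, n_in, n_res=300, rho=0.9, sigma_in=0.5, ridge=1e-6):
        W = rng.normal(size=(n_res, n_res))
        self.W = rho * W / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
        self.W_in = sigma_in * rng.uniform(-1, 1, size=(n_res, n_in))
        self.ridge = ridge
        self.W_out = None

    def run(self, inputs, r0=None):
        """Drive the reservoir with an input sequence and return its states."""
        r = np.zeros(self.W.shape[0]) if r0 is None else r0
        states = np.empty((len(inputs), len(r)))
        for i, u in enumerate(inputs):
            r = np.tanh(self.W @ r + self.W_in @ u)
            states[i] = r
        return states

    def fit(self, states, targets):
        """Ridge regression from reservoir states to one-step-ahead targets."""
        R, Y = states, targets
        self.W_out = np.linalg.solve(R.T @ R + self.ridge * np.eye(R.shape[1]),
                                     R.T @ Y)

# "Windowed" training: cut one long input signal into many short windows,
# each driven from a reset reservoir state with a short warm-up discarded.
traj = lorenz_trajectory(20000)
window, warmup = 200, 50
esn = ESN(n_in=3)

all_states, all_targets = [], []
for start in range(0, len(traj) - window - 1, window):
    seg = traj[start:start + window + 1]
    states = esn.run(seg[:-1])            # drive reservoir from rest
    all_states.append(states[warmup:])    # drop transient warm-up states
    all_targets.append(seg[1 + warmup:])  # one-step-ahead targets
esn.fit(np.vstack(all_states), np.vstack(all_targets))

# Forecast from an arbitrary point on the attractor: warm up on a short
# snippet, then iterate the trained map autonomously.
snippet = traj[12345:12345 + warmup]
r = esn.run(snippet)[-1]
u = snippet[-1]
forecast = []
for _ in range(500):
    r = np.tanh(esn.W @ r + esn.W_in @ u)
    u = r @ esn.W_out
    forecast.append(u)
print(np.array(forecast).shape)  # (500, 3)
```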