Title: Understanding Edge-of-Stability Training Dynamics with a Minimalist Example
Recently, researchers observed that gradient descent for deep neural networks operates in an "edge-of-stability" (EoS) regime: the sharpness (the maximum eigenvalue of the Hessian) is often larger than the stability threshold 2/η (where η is the step size). Despite this, the loss oscillates yet converges in the long run, and the sharpness at the end is just slightly below 2/η. While gradient descent on many other well-understood nonconvex objectives, such as matrix factorization or two-layer networks, can also converge despite large sharpness, there is often a larger gap between the sharpness of the endpoint and 2/η. In this paper, we study the EoS phenomenon by constructing a simple function that exhibits the same behavior. We give a rigorous analysis of its training dynamics in a large local region and explain why the final converging point has sharpness close to 2/η. Globally, we observe that the training dynamics for our example have an interesting bifurcating behavior, which was also observed in the training of neural nets.
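A rough numerical illustration of this regime (a factorization-style toy of our own, not the paper's construction): the sketch below runs full-batch gradient descent on f(x, y) = (xy − 1)²/2 from an unbalanced point whose sharpness exceeds 2/η. The loss oscillates before converging, and the sharpness drops below 2/η; consistent with the abstract's remark, objectives of this kind tend to end with a larger gap to 2/η than the paper's tailored example.

```python
import numpy as np

eta = 0.05                      # step size; stability threshold 2/eta = 40
x, y = 7.0, 1.0 / 7.0 + 0.01    # near the minimum manifold xy = 1; sharpness ~ 49 > 40

def sharpness(x, y):
    """Largest Hessian eigenvalue of f(x, y) = (x*y - 1)**2 / 2."""
    H = np.array([[y * y, 2 * x * y - 1.0],
                  [2 * x * y - 1.0, x * x]])
    return np.linalg.eigvalsh(H)[-1]

for t in range(101):
    r = x * y - 1.0
    if t % 10 == 0:
        print(f"step {t:3d}  loss {0.5 * r * r:9.3e}  "
              f"sharpness {sharpness(x, y):6.2f}  (2/eta = {2 / eta:.0f})")
    x, y = x - eta * r * y, y - eta * r * x   # simultaneous gradient step
```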
Award ID(s): 1845171 2031849
NSF-PAR ID: 10409756
Journal Name: International Conference on Learning Representations
Sponsoring Org: National Science Foundation
More Like this
  1. Traditional analyses of gradient descent show that when the largest eigenvalue of the Hessian, also known as the sharpness S(θ), is bounded by 2/η, training is "stable" and the training loss decreases monotonically. Recent works, however, have observed that this assumption does not hold when training modern neural networks with full-batch or large-batch gradient descent. Most recently, Cohen et al. (2021) observed two important phenomena. The first, dubbed progressive sharpening, is that the sharpness steadily increases throughout training until it reaches the instability cutoff 2/η. The second, dubbed edge of stability, is that the sharpness hovers at 2/η for the remainder of training while the loss continues decreasing, albeit non-monotonically. We demonstrate that, far from being chaotic, the dynamics of gradient descent at the edge of stability can be captured by a cubic Taylor expansion: as the iterates diverge in the direction of the top eigenvector of the Hessian due to instability, the cubic term in the local Taylor expansion of the loss function causes the curvature to decrease until stability is restored. This property, which we call self-stabilization, is a general property of gradient descent and explains its behavior at the edge of stability. A key consequence of self-stabilization is that gradient descent at the edge of stability implicitly follows projected gradient descent (PGD) under the constraint S(θ) ≤ 2/η. Our analysis provides precise predictions for the loss, sharpness, and deviation from the PGD trajectory throughout training, which we verify both empirically in a number of standard settings and theoretically under mild conditions. Our analysis uncovers the mechanism for gradient descent's implicit bias towards stability.
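The self-stabilization mechanism is easy to reproduce in a two-parameter caricature of our own (not the paper's analysis): for f(x, y) = (1 + y)x²/2, the curvature in the unstable x-direction is 1 + y, and the cubic coupling ∂³f/∂x²∂y = 1 is exactly the term through which instability lowers the curvature. While 1 + y > 2/η, the oscillation in x grows, which steadily decreases y (and hence the sharpness) until stability is restored and the oscillation decays; in this crude model the sharpness overshoots somewhat below 2/η.

```python
import numpy as np

eta = 0.5                  # stability threshold 2/eta = 4
x, y = 1e-2, 3.5           # curvature in x is 1 + y = 4.5 > 4: unstable at first

for t in range(151):
    if t % 15 == 0:
        H = np.array([[1.0 + y, x], [x, 0.0]])   # Hessian of (1 + y) * x**2 / 2
        print(f"step {t:3d}  x {x:+.3e}  sharpness {np.linalg.eigvalsh(H)[-1]:5.2f}")
    # gradient step on f(x, y) = (1 + y) * x**2 / 2
    x, y = x - eta * (1.0 + y) * x, y - eta * 0.5 * x * x
```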
  2. Gustatory cortical (GC) single-neuron taste responses reflect taste quality and palatability in successive epochs. Ensemble analyses reveal epoch-to-epoch firing-rate changes in these responses to be sudden, coherent transitions. Such nonlinear dynamics suggest that GC is part of a recurrent network, producing these dynamics in concert with other structures. Basolateral amygdala (BLA), which is reciprocally connected to GC and central to hedonic processing, is a strong candidate partner for GC, in that BLA taste responses evolve on the same general clock as GC and because inhibition of activity in the BLA→GC pathway degrades the sharpness of GC transitions. These facts motivate, but do not test, our overarching hypothesis that BLA and GC act as a single, comodulated network during taste processing. Here, we provide just this test of simultaneous (BLA and GC) extracellular taste responses in female rats, probing the multiregional dynamics of activity to directly test whether BLA and GC responses contain coupled dynamics. We show that BLA and GC response magnitudes covary across trials and within single responses, and that changes in BLA–GC local field potential phase coherence are epoch specific. Such classic coherence analyses, however, obscure the most salient facet of BLA–GC coupling: sudden transitions in and out of the epoch known to be involved in driving gaping behavior happen near simultaneously in the two regions, despite huge trial-to-trial variability in transition latencies. This novel form of inter-regional coupling, which we show is easily replicated in model networks, suggests collective processing in a distributed neural network.

    SIGNIFICANCE STATEMENT: There has been little investigation into real-time communication between brain regions during taste processing, a fact reflecting the dominant belief that taste circuitry is largely feedforward. Here, we perform an in-depth analysis of real-time interactions between GC and BLA in response to passive taste deliveries, using both conventional coherence metrics and a novel methodology that explicitly considers trial-to-trial variability and fast single-trial dynamics in evoked responses. Our results demonstrate that BLA–GC coherence changes as the taste response unfolds, and that BLA and GC specifically couple for the sudden transition into (and out of) the behaviorally relevant neural response epoch, suggesting (although not proving) that (1) recurrent interactions subserve the function of the dyad as (2) a putative attractor network.

     
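The "classic coherence analyses" mentioned above can be sketched in a few lines with synthetic data; everything here (the sampling rate, the shared 8 Hz rhythm, the noise levels) is illustrative, not the study's recordings. scipy.signal.coherence estimates the magnitude-squared coherence between the two signals across frequencies.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 5.0, 1.0 / fs)
shared = np.sin(2 * np.pi * 8.0 * t)          # common 8 Hz rhythm stands in for coupled activity
bla = shared + rng.normal(0.0, 1.0, t.size)   # synthetic "BLA" LFP
gc = shared + rng.normal(0.0, 1.0, t.size)    # synthetic "GC" LFP

f, cxy = signal.coherence(bla, gc, fs=fs, nperseg=1024)
print(f"peak coherence {cxy.max():.2f} at {f[np.argmax(cxy)]:.1f} Hz")
```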
  3. Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Eds.)
    Scalp electroencephalograms (EEGs) are the primary means by which physicians diagnose brain-related illnesses such as epilepsy and seizures. Automated seizure detection using clinical EEGs is a very difficult machine learning problem due to the low fidelity of a scalp EEG signal. Nevertheless, despite the poor signal quality, clinicians can reliably diagnose illnesses from visual inspection of the signal waveform. Commercially available automated seizure detection systems, however, suffer from unacceptably high false alarm rates. Deep learning algorithms that require large amounts of training data have not previously been effective on this task due to the lack of big data resources necessary for building such models and the complexity of the signals involved. The evolution of big data science, most notably the release of the Temple University EEG (TUEG) Corpus, has motivated renewed interest in this problem.

    In this chapter, we discuss the application of a variety of deep learning architectures to automated seizure detection. Architectures explored include multilayer perceptrons, convolutional neural networks (CNNs), long short-term memory networks (LSTMs), gated recurrent units and residual neural networks. We use the TUEG Corpus, supplemented with data from Duke University, to evaluate the performance of these hybrid deep structures. Since TUEG contains a significant amount of unlabeled data, we also discuss unsupervised pre-training methods used prior to training these complex recurrent networks.

    Exploiting spatial and temporal context is critical for accurate disambiguation of seizures from artifacts. We explore how effectively several conventional architectures are able to model context and introduce a hybrid system that integrates CNNs and LSTMs. The primary error modalities observed by this state-of-the-art system were false alarms generated during brief delta range slowing patterns such as intermittent rhythmic delta activity. A variety of these types of events have been observed during inter-ictal and post-ictal stages. Training models on such events with diverse morphologies has the potential to significantly reduce the remaining false alarms. This is one reason we are continuing our efforts to annotate a larger portion of TUEG. Increasing the data set size significantly allows us to leverage more advanced machine learning methodologies.
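A skeletal PyTorch sketch of the CNN/LSTM hybrid idea the chapter explores: a convolutional front end extracts features from multichannel EEG, and a recurrent layer models temporal context. The channel counts, kernel sizes, and the 22-channel, 250 Hz input geometry are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Hybrid CNN -> LSTM classifier for windows of multichannel EEG."""
    def __init__(self, n_channels=22, n_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # e.g., seizure vs background

    def forward(self, x):             # x: (batch, channels, time)
        z = self.cnn(x)               # (batch, 64, time')
        z = z.transpose(1, 2)         # (batch, time', 64) for the LSTM
        out, _ = self.lstm(z)
        return self.head(out[:, -1])  # classify from the final time step

logits = CNNLSTM()(torch.randn(4, 22, 2500))  # four 10 s windows at 250 Hz
print(logits.shape)                           # torch.Size([4, 2])
```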
  4. Neural networks are a promising technique for parameterizing subgrid-scale physics (e.g., moist atmospheric convection) in coarse-resolution climate models, but their lack of interpretability and reliability prevents widespread adoption. For instance, it is not fully understood why neural network parameterizations often cause dramatic instability when coupled to atmospheric fluid dynamics. This paper introduces tools for interpreting their behavior that are customized to the parameterization task. First, we assess the nonlinear sensitivity of a neural network to lower-tropospheric stability and midtropospheric moisture, two widely studied controls of moist convection. Second, we couple the linearized response functions of these neural networks to simplified gravity wave dynamics and analytically diagnose the corresponding phase speeds, growth rates, wavelengths, and spatial structures. To demonstrate their versatility, these techniques are tested on two sets of neural networks, one trained with a superparameterized version of the Community Atmosphere Model (SPCAM) and the second with a near-global cloud-resolving model (GCRM). Even though the SPCAM simulation has a warmer climate than the cloud-resolving model, both neural networks predict stronger heating/drying in moist and unstable environments, which is consistent with observations. Moreover, the spectral analysis can predict that instability occurs when GCMs are coupled to networks that support unstable gravity waves with phase speeds larger than 5 m s⁻¹. In contrast, standing unstable modes do not cause catastrophic instability. Using these tools, differences between the SPCAM-trained and GCRM-trained neural networks are analyzed, and strategies to incrementally improve the coupled online performance of both are unveiled.
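A schematic of the spectral analysis step described above, under toy assumptions: a placeholder 2×2 matrix J stands in for the linearized response function of a trained parameterization, coupled to idealized gravity-wave dynamics, and the eigenvalues of the coupled operator yield a growth rate and phase speed per wavelength. All numbers are invented for illustration; the real diagnostic uses the network's actual linearization and fuller wave dynamics.

```python
import numpy as np

c_wave = 50.0                            # idealized gravity-wave speed (m/s), assumed
J = np.array([[ 2e-5,  4e-5],
              [ 3e-5, -1e-5]])           # placeholder linearized NN response (1/s)

for wavelength_km in (10_000, 5_000, 2_000, 1_000):
    k = 2 * np.pi / (wavelength_km * 1e3)        # wavenumber (1/m)
    wave = np.array([[0.0, -c_wave * k],
                     [c_wave * k, 0.0]])         # idealized wave coupling
    lam = np.linalg.eigvals(wave + J)            # modes of d(state)/dt = (wave + J) @ state
    m = lam[np.argmax(lam.real)]                 # fastest-growing mode
    print(f"{wavelength_km:6d} km: growth {m.real * 86_400:+.3f} per day, "
          f"phase speed {-m.imag / k:+7.1f} m/s")
```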
  5. Generalization analyses of deep learning typically assume that training converges to a fixed point. However, recent results indicate that in practice, the weights of deep neural networks optimized with stochastic gradient descent often oscillate indefinitely. To reduce this discrepancy between theory and practice, this paper focuses on the generalization of neural networks whose training dynamics do not necessarily converge to fixed points. Our main contribution is to propose a notion of statistical algorithmic stability (SAS) that extends classical algorithmic stability to non-convergent algorithms and to study its connection to generalization. This ergodic-theoretic approach leads to new insights when compared to the traditional optimization and learning theory perspectives. We prove that the stability of the time-asymptotic behavior of a learning algorithm relates to its generalization and empirically demonstrate how loss dynamics can provide clues to generalization performance. Our findings provide evidence that networks that "train stably generalize better" even when the training continues indefinitely and the weights do not converge.
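A toy numeric illustration of the non-convergent viewpoint (our construction, with all details invented for the demo): constant-step-size SGD on linear regression keeps bouncing around a stationary distribution rather than converging to a point, yet its time-averaged tail loss is a stable quantity. That average can then be compared across trajectories trained on neighboring datasets differing in one example, the perturbation classical algorithmic stability studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
X_test = rng.normal(size=(500, d))
y_test = X_test @ w_star + 0.1 * rng.normal(size=500)

def tail_avg_test_loss(Xtr, ytr, steps=20_000, tail=5_000, lr=0.05):
    """Average the held-out loss over the trajectory's tail, not an endpoint."""
    w = np.zeros(d)
    acc = 0.0
    for t in range(steps):
        i = rng.integers(len(ytr))
        err = Xtr[i] @ w - ytr[i]
        w -= lr * err * Xtr[i]        # constant-step SGD: never converges to a point
        if t >= steps - tail:
            acc += 0.5 * np.mean((X_test @ w - y_test) ** 2)
    return acc / tail

# neighboring dataset S': one training example replaced
X2, y2 = X.copy(), y.copy()
X2[0] = rng.normal(size=d)
y2[0] = X2[0] @ w_star + 0.1 * rng.normal()

print(f"time-averaged test loss on S : {tail_avg_test_loss(X, y):.4f}")
print(f"time-averaged test loss on S': {tail_avg_test_loss(X2, y2):.4f}")
```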