Title: Lion Secretly Solves Constrained Optimization: As Lyapunov Predicts
Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It performs comparably or favorably to AdamW but with greater memory efficiency. As one might expect from the output of a program search, Lion combines elements of several existing algorithms, including signed momentum, decoupled weight decay, and Polyak and Nesterov momentum, yet it does not fit into any existing category of theoretically grounded optimizers. Thus, even though Lion appears to perform well as a general-purpose optimizer for a wide range of tasks, its theoretical basis remains uncertain. This lack of theoretical clarity limits opportunities to further enhance and extend Lion's efficacy. This work aims to demystify Lion. Based on both continuous-time and discrete-time analysis, we demonstrate that Lion is a theoretically novel and principled approach for minimizing a general loss function $$f(x)$$ while enforcing the bound constraint $$\|x\|_\infty \leq 1/\lambda$$. Lion achieves this through the incorporation of decoupled weight decay, where $$\lambda$$ is the weight decay coefficient. Our analysis is made possible by the development of a new Lyapunov function for the Lion updates. It applies to a broader family of Lion-$$\phi$$ algorithms, in which the $$\text{sign}(\cdot)$$ operator in Lion is replaced by the subgradient of a convex function $$\phi$$, leading to the solution of the general composite optimization problem $$\min_x f(x) + \phi^*(x)$$. Our findings provide valuable insights into the dynamics of Lion and pave the way for further improvements and extensions of Lion-related algorithms.
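To make the update under analysis concrete, the following is a minimal NumPy sketch of one Lion step as described in the abstract: the sign of an interpolated momentum drives the step, and the decoupled weight decay term $$\lambda x$$ is what the analysis interprets as enforcing the constraint $$\|x\|_\infty \leq 1/\lambda$$. The hyperparameter names and the toy problem are ours, chosen purely for illustration.

```python
import numpy as np

def lion_step(x, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, lam=0.1):
    """One Lion update: signed interpolated momentum plus decoupled weight decay."""
    c = beta1 * m + (1.0 - beta1) * grad      # interpolation used only for the step direction
    x = x - lr * (np.sign(c) + lam * x)       # signed step + decoupled weight decay
    m = beta2 * m + (1.0 - beta2) * grad      # momentum (EMA of gradients) for later steps
    return x, m

# Toy run: minimize f(x) = 0.5 * (x - 3)^2 with weight decay lam = 0.5.
x, m = np.zeros(1), np.zeros(1)
for _ in range(2000):
    grad = x - 3.0
    x, m = lion_step(x, m, grad, lr=1e-2, lam=0.5)
print(x)  # settles near 2 = 1/lam, the boundary of the constraint box, not the unconstrained minimum 3
```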
Award ID(s):
1846421 2037267
PAR ID:
10539920
Author(s) / Creator(s):
Publisher / Repository:
International Conference on Learning Representations
Date Published:
ISBN:
9781713872740
Format(s):
Medium: X
Location:
International Conference on Learning Representations
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider using gradient descent to minimize the nonconvex function $$f(X)=\phi(XX^{T})$$ over an $$n\times r$$ factor matrix $$X$$, in which $$\phi$$ is an underlying smooth convex cost function defined over $$n\times n$$ matrices. While only a second-order stationary point $$X$$ can be provably found in reasonable time, if $$X$$ is additionally \emph{rank deficient}, then its rank deficiency certifies it as being globally optimal. This way of certifying global optimality necessarily requires the search rank $$r$$ of the current iterate $$X$$ to be \emph{overparameterized} with respect to the rank $$r^{\star}$$ of the global minimizer $$X^{\star}$$. Unfortunately, overparameterization significantly slows down the convergence of gradient descent, from a linear rate with $$r=r^{\star}$$ to a sublinear rate when $$r>r^{\star}$$, even when $$\phi$$ is strongly convex. In this paper, we propose an inexpensive preconditioner that restores the convergence rate of gradient descent back to linear in the overparameterized case, while also making it agnostic to possible ill-conditioning in the global minimizer $$X^{\star}$$.
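To make the factored setting above concrete, here is a small NumPy sketch of gradient descent on $$f(X)=\phi(XX^{T})$$ with a right preconditioner of the form $$(X^{T}X+\eta I)^{-1}$$, run with an overparameterized search rank. The specific choice of $$\phi$$, the damping $$\eta$$, and the step size are illustrative assumptions, not necessarily the exact scheme proposed in the paper.

```python
import numpy as np

# Preconditioned gradient descent on g(X) = phi(X X^T) with phi(Z) = 0.5 * ||Z - M||_F^2,
# using an overparameterized factor (r > r_star). The r x r preconditioner is cheap to form.
rng = np.random.default_rng(0)
n, r_star, r = 20, 2, 5
U = rng.standard_normal((n, r_star))
M = U @ U.T
M /= np.linalg.norm(M, 2)                        # normalize so a fixed step size behaves well
X = 0.01 * rng.standard_normal((n, r))           # overparameterized factor matrix

alpha = 0.2
for _ in range(300):
    E = X @ X.T - M
    grad = 2.0 * E @ X                           # gradient of g at X
    eta = np.linalg.norm(E, "fro")               # damping tied to the current residual (a simple heuristic)
    P = np.linalg.inv(X.T @ X + eta * np.eye(r)) # inexpensive r x r preconditioner
    X = X - alpha * grad @ P

print(np.linalg.norm(X @ X.T - M, "fro"))        # small residual despite the overparameterization
```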
  2. Current software applications may contain numerous vulnerabilities. Attackers attempt to exploit these vulnerabilities, leading to security breaches, unauthorized access, data theft, or the incapacitation of computer systems. Rather than addressing software or hardware vulnerabilities after deployment, it is better to address them immediately or during the development phase. Tools such as AIBugHunter tackle software issues by predicting, categorizing, and fixing coding vulnerabilities. Essentially, developers can see where their code is susceptible to attacks and obtain details about the nature and severity of these vulnerabilities. AIBugHunter incorporates VulRepair to detect and repair vulnerabilities. VulRepair currently achieves a 44% perfect-prediction rate when generating patches for vulnerable functions. To be truly effective, this number needs to be increased. This study examines whether VulRepair's 44% perfect-prediction rate can be improved. VulRepair is based on T5 and uses both natural language and programming languages during its pretraining phase, along with byte pair encoding. T5 is a text-to-text transfer transformer model with an encoder and a decoder in its neural network, and it outperforms other models such as VRepair and CodeBERT. However, VulRepair's hyperparameters may no longer be optimal given the development of new optimizers. We reviewed a deep neural network (DNN) optimizer developed by Google in 2023: the Evolved Sign Momentum (LION) optimizer, which is available in PyTorch. We applied LION to VulRepair and tested its influence on the hyperparameters. After adjusting the hyperparameters, we obtained a 56% perfect-prediction rate, which exceeds the 44% reported for VulRepair. This means that VulRepair can repair more vulnerabilities and prevent more attacks. To our knowledge, this use of an alternative to AdamW, the standard optimizer, has not previously been applied to enhance VulRepair or similar models.
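As a rough sketch of the optimizer swap the study describes, the snippet below replaces AdamW with a Lion implementation in an ordinary PyTorch training step. The `lion_pytorch` package (a third-party Lion implementation), the stand-in model, and the hyperparameter values are assumptions for illustration only; the study itself fine-tunes the T5-based VulRepair model.

```python
import torch
from torch import nn
from lion_pytorch import Lion   # pip install lion-pytorch (assumed third-party package)

model = nn.Linear(768, 768)     # stand-in for the T5-based repair model
# Baseline: optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
optimizer = Lion(model.parameters(), lr=2e-6, weight_decay=0.01)  # Lion is usually run with a smaller lr

x, target = torch.randn(8, 768), torch.randn(8, 768)
for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()
```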
  3. This work considers the minimization of a general convex function f(X) over the cone of positive semi-definite matrices whose optimal solution X* has low rank. Standard first-order convex solvers require performing an eigenvalue decomposition in each iteration, severely limiting their scalability. A natural nonconvex reformulation of the problem factors the variable X into the product of a rectangular matrix with fewer columns and its transpose. For a special class of matrix sensing and completion problems with quadratic objective functions, local search algorithms applied to the factored problem have been shown to be much more efficient and, in spite of being nonconvex, to converge to the global optimum. The purpose of this work is to extend this line of study to general convex objective functions f(X) and investigate the geometry of the resulting factored formulations. Specifically, we prove that when f(X) satisfies the restricted well-conditioned assumption, each critical point of the factored problem either corresponds to the optimal solution X* or is a strict saddle where the Hessian matrix has a strictly negative eigenvalue. Such a geometric structure of the factored formulation ensures that many local search algorithms can converge to the global optimum with random initializations.
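As a concrete instance of the factored reformulation discussed above, the following NumPy sketch runs plain gradient descent on $$g(U) = f(UU^{T})$$ for the simple convex choice $$f(X) = \tfrac{1}{2}\|X - M\|_F^2$$; the objective, dimensions, and step size are our illustrative choices.

```python
import numpy as np

# Factored reformulation: optimize over an n x r factor U instead of the PSD matrix X,
# so no eigenvalue decomposition is needed. Here f(X) = 0.5 * ||X - M||_F^2.
rng = np.random.default_rng(1)
n, r = 30, 3
W = rng.standard_normal((n, r))
M = W @ W.T                                     # low-rank PSD target, so the minimizer X* = M has rank r

U = 0.1 * rng.standard_normal((n, r))           # random initialization
step = 0.2 / np.linalg.norm(M, 2)               # conservative step size
for _ in range(2000):
    grad_f = U @ U.T - M                        # gradient of f at X = U U^T
    grad_U = 2.0 * grad_f @ U                   # chain rule: gradient of g(U) = f(U U^T)
    U = U - step * grad_U

print(np.linalg.norm(U @ U.T - M, "fro"))       # near zero: the random start reaches the global optimum
```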
  4. Let $$\phi(x,y)$$ be a continuous function, smooth away from the diagonal, such that, for some $$\alpha>0$$, the associated generalized Radon transforms \begin{equation} \label{Radon} R_t^{\phi}f(x)=\int_{\phi(x,y)=t} f(y) \psi(y) d\sigma_{x,t}(y) \end{equation} map $$L^2({\mathbb R}^d) \to H^{\alpha}({\mathbb R}^d)$$ for all $t>0$. Let $$E$$ be a compact subset of $${\mathbb R}^d$$ for some $$d \ge 2$$, and suppose that the Hausdorff dimension of $$E$$ is $$>d-\alpha$$. We show that any tree graph $$T$$ on $k+1$ ($$k \ge 1$$) vertices is realizable in $$E$$, in the sense that there exist distinct $$x^1, x^2, \dots, x^{k+1} \in E$$ and $t>0$ such that the $$\phi$$-distance $$\phi(x^i, x^j)$$ is equal to $$t$$ for all pairs $(i,j)$ corresponding to the edges of the graph $$T$$. 
  5. Differential privacy is a mathematical framework for developing statistical computations with provable guarantees of privacy and accuracy. In contrast to the privacy component of differential privacy, which has a clear mathematical and intuitive meaning, the accuracy component of differential privacy does not have a generally accepted definition; accuracy claims of differential privacy algorithms vary from algorithm to algorithm and are not instantiations of a general definition. We identify program discontinuity as a common theme in existing ad hoc definitions and introduce an alternative notion of accuracy parametrized by a quantity defined as follows: for an input x, a deterministic computation f, and a distance d, it is the minimal distance d(x,y) over all y such that f(y) ≠ f(x). We show that our notion of accuracy subsumes the definition used in theoretical computer science, and captures known accuracy claims for differential privacy algorithms. In fact, our general notion of accuracy helps us prove better claims in some cases. Next, we study the decidability of accuracy. We first show that accuracy is in general undecidable. Then, we define a non-trivial class of probabilistic computations for which accuracy is decidable (unconditionally, or assuming Schanuel's conjecture). We implement our decision procedure and experimentally evaluate the effectiveness of our approach for generating proofs or counterexamples of accuracy for common algorithms from the literature.
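To make the accuracy parameter defined above concrete, here is a tiny Python sketch that computes it by brute force for a threshold computation; the function name `disagreement_distance`, the grid, and the example are ours, purely for illustration.

```python
def disagreement_distance(f, d, x, candidates):
    """Least d(x, y) over candidates y on which f disagrees with f(x)."""
    return min((d(x, y) for y in candidates if f(y) != f(x)), default=float("inf"))

f = lambda v: v >= 0.0                          # a simple deterministic threshold computation
d = lambda a, b: abs(a - b)
ys = [i / 100.0 for i in range(-500, 501)]      # finite grid standing in for all possible inputs

print(disagreement_distance(f, d, 2.37, ys))    # 2.38 on this grid; the true infimum is 2.37, the distance to the threshold
```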