

Title: Studying dynamics in two-dimensional quantum lattices using tree tensor network states
We analyze and discuss convergence properties of a numerically exact algorithm tailored to study the dynamics of interacting two-dimensional lattice systems. The method is based on the application of the time-dependent variational principle in a manifold of binary and quaternary Tree Tensor Network States. The approach is found to be competitive with existing matrix product state approaches. We discuss issues related to the convergence of the method, which could be relevant to a broader set of numerical techniques used for the study of two-dimensional systems.
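As a purely illustrative companion to the abstract, the numpy sketch below builds a binary tree tensor network state for four spin-1/2 sites and contracts it into a dense vector to evaluate an observable. The tensor names, bond dimension, and observable are our own choices, and the time-dependent variational principle integrator itself is not reproduced here.

```python
import numpy as np

# Minimal illustration of a binary tree tensor network (TTN) state for
# four spin-1/2 sites. Shapes and the bond dimension D are arbitrary
# choices for this sketch; a production TDVP code would also need gauge
# fixing and a time-evolution integrator, which are omitted here.

d, D = 2, 4  # physical and bond dimensions
rng = np.random.default_rng(0)

# Two leaf tensors, each coarse-graining a pair of physical sites into
# one virtual leg: indices (phys_left, phys_right, bond).
A = rng.normal(size=(d, d, D))
B = rng.normal(size=(d, d, D))

# Root tensor joining the two branches: indices (bond_A, bond_B).
C = rng.normal(size=(D, D))

# Contract the tree into the full 4-site state vector (feasible only
# for tiny systems; the point of TTNs is to avoid this step).
psi = np.einsum('ija,klb,ab->ijkl', A, B, C).reshape(-1)
psi /= np.linalg.norm(psi)

# Expectation value of S^z on site 0, computed from the dense vector.
sz = np.diag([0.5, -0.5])
op = np.kron(sz, np.eye(d**3))
print('<S^z_0> =', psi @ op @ psi)
```

The value of the tree structure is that expectation values can be evaluated by local contractions without ever forming the dense vector; the full contraction above is only feasible because the example has four sites.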
Award ID(s): 1954791
PAR ID: 10223205
Author(s) / Creator(s):
Date Published:
Journal Name: SciPost Physics
Volume: 9
Issue: 5
ISSN: 2542-4653
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Motivated by the fact that gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs), we derive here an ODE representation of the accelerated triple momentum (TM) algorithm. For unconstrained optimization problems with strongly convex costs, the TM algorithm has a provably faster convergence rate than Nesterov's accelerated gradient (NAG) method at the same computational complexity. We show that, as for the NAG method, a high-resolution model is needed to obtain an ODE representation that accurately captures the characteristics of the TM algorithm. We propose a Lyapunov analysis to investigate the stability and convergence behavior of this high-resolution ODE representation. Comparing the rate of the ODE representation of the TM method with that of the NAG method confirms its faster convergence, and our study also leads to a tighter bound on the worst-case convergence rate for the ODE model of the NAG method. We further discuss the use of the integral quadratic constraint (IQC) method to establish an estimate of the convergence rate of the TM algorithm. A numerical example verifies our results.
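As a hedged illustration of the discrete algorithm discussed above (not of its ODE representation), the following numpy sketch runs the triple momentum iteration, with the parameter choice reported by Van Scoy, Freeman, and Lynch, alongside Nesterov's method on a strongly convex quadratic. The test problem, dimension, and iteration count are our own choices.

```python
import numpy as np

# Sketch: triple momentum (TM) vs. Nesterov's accelerated gradient (NAG)
# on a strongly convex quadratic f(x) = x^T Q x / 2. Parameters follow
# the published TM tuning (rho = 1 - 1/sqrt(kappa)); verify against the
# original paper before relying on them.

m, L = 1.0, 100.0                    # strong convexity / smoothness
kappa = L / m
Q = np.diag(np.linspace(m, L, 20))
grad = lambda x: Q @ x

def tm(x0, iters=200):
    rho = 1 - 1 / np.sqrt(kappa)
    a = (1 + rho) / L                       # alpha (step size)
    b = rho**2 / (2 - rho)                  # beta  (momentum)
    g = rho**2 / ((1 + rho) * (2 - rho))    # gamma
    d_ = rho**2 / (1 - rho**2)              # delta (output averaging)
    xi_prev, xi = x0.copy(), x0.copy()
    for _ in range(iters):
        y = (1 + g) * xi - g * xi_prev
        xi_prev, xi = xi, (1 + b) * xi - b * xi_prev - a * grad(y)
    return (1 + d_) * xi - d_ * xi_prev

def nag(x0, iters=200):
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, y - grad(y) / L
    return x

x0 = np.ones(20)
print('TM  final error:', np.linalg.norm(tm(x0)))   # minimizer is 0
print('NAG final error:', np.linalg.norm(nag(x0)))
```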
  2. We describe a novel meshless Galerkin method for numerically solving semilinear parabolic equations on spheres. The new approximation method is based on a spatial discretization using spherical basis functions in a Galerkin approximation. Because our approximation spaces are built from spherical basis functions, they can be of arbitrary order and do not require the construction of an underlying mesh. We establish convergence of the meshless method by adapting to the sphere a convergence result due to Thomée and Wahlbin; doing so requires proving new approximation results, including a novel inverse (Nikolskii) inequality for spherical basis functions. We also discuss how the integrals in the Galerkin method can be computed accurately and more efficiently using a recently developed quadrature rule; these new quadrature formulas also apply to Galerkin approximations of elliptic partial differential equations on the sphere. Finally, we provide several numerical examples, including the Allen–Cahn equation on the sphere.
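The following numpy sketch shows only the basic ingredients under stated assumptions: spherical basis functions obtained by restricting a radial kernel to the sphere, and a Galerkin mass matrix assembled by quadrature. The Gaussian kernel, Fibonacci node sets, and equal-weight quadrature are illustrative stand-ins, not the quadrature rule developed in the paper.

```python
import numpy as np

# Sketch of spherical-basis-function (SBF) Galerkin ingredients on S^2.
# Kernel, centers, and the crude equal-weight quadrature are assumptions
# made for illustration only.

def fibonacci_sphere(n):
    """Roughly uniform points on the unit sphere."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * i / n)       # polar angle
    theta = np.pi * (1 + 5**0.5) * i     # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

centers = fibonacci_sphere(50)       # SBF centers x_j (no mesh needed)
quad_pts = fibonacci_sphere(2000)    # quadrature nodes
w = 4 * np.pi / len(quad_pts)        # equal weights (crude stand-in rule)

# Zonal kernel: phi_j(x) = exp(-c |x - x_j|^2) restricted to the sphere;
# on S^2, |x - y|^2 = 2 - 2 x.y, so phi_j depends only on x . x_j.
c = 4.0
Phi = np.exp(-c * (2 - 2 * quad_pts @ centers.T))  # (n_quad, n_centers)

# Galerkin mass matrix M_ij = int_{S^2} phi_i phi_j dS, by quadrature.
M = w * Phi.T @ Phi
print('mass matrix shape:', M.shape, 'symmetric:', np.allclose(M, M.T))
```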
  3. The theory of integral quadratic constraints (IQCs) allows the certification of exponential convergence of interconnected systems containing nonlinear or uncertain elements. In this work, we adapt IQC theory to study first-order methods for smooth and strongly monotone games and show how to design tailored quadratic constraints that yield tight upper bounds on convergence rates. Using this framework, we recover the existing bound for the gradient method (GD), derive sharper bounds for the proximal point method (PPM) and the optimistic gradient method (OG), and provide, for the first time, a global convergence rate for the negative momentum method (NM) with iteration complexity O(κ^1.5), which matches its known lower bound. In addition, for time-varying systems, we prove that the gradient method with optimal step size achieves the fastest worst-case convergence rate provable with quadratic Lyapunov functions. Finally, we extend our analysis to stochastic games and study the impact of multiplicative noise on different algorithms. We show that it is impossible for an algorithm with one step of memory to achieve acceleration if it queries the gradient only once per batch (in contrast with the stochastic strongly convex optimization setting, where such acceleration has been demonstrated); however, we exhibit an algorithm that achieves acceleration with two gradient queries per batch.
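As a minimal sketch of the simplest method covered by this kind of analysis, the code below runs the simultaneous gradient method on a toy smooth, strongly monotone min-max game. The bilinear-plus-quadratic game, step size, and iteration count are our own choices; the IQC certificates themselves are not implemented.

```python
import numpy as np

# Sketch: gradient descent-ascent on the strongly monotone game
#   min_x max_y  (m/2)|x|^2 + x^T A y - (m/2)|y|^2,
# whose unique equilibrium is z* = 0. The step size below is the
# classical choice eta = m / L^2 for an m-strongly monotone,
# L-Lipschitz game operator.

rng = np.random.default_rng(1)
n, m_mod = 5, 1.0
A = rng.normal(size=(n, n))

# Game operator F(z) = (grad_x f, -grad_y f), z = (x, y).
F = lambda z: np.concatenate([m_mod * z[:n] + A @ z[n:],
                              m_mod * z[n:] - A.T @ z[:n]])

# Lipschitz constant of F = spectral norm of its (constant) Jacobian.
J = np.block([[m_mod * np.eye(n), A],
              [-A.T, m_mod * np.eye(n)]])
eta = m_mod / np.linalg.norm(J, 2)**2

z = rng.normal(size=2 * n)
for _ in range(2000):
    z = z - eta * F(z)   # simultaneous gradient step for both players
print('distance to equilibrium:', np.linalg.norm(z))
```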
  4. We show that the weak convergence of the Douglas–Rachford algorithm for finding a zero of the sum of two maximally monotone operators cannot be improved to strong convergence. Likewise, we show that strong convergence can fail for the method of partial inverses.
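For reference, here is a minimal sketch of the Douglas–Rachford iteration, specialized to the case where the two operators are normal cones of convex sets, so that the resolvents become projections. The two sets are arbitrary illustrative choices; the paper's weak-versus-strong convergence distinction only arises in infinite dimensions, which a finite-dimensional toy cannot exhibit.

```python
import numpy as np

# Douglas-Rachford for 0 in A(x) + B(x), with A, B the normal cones of
# the unit ball and the line y = 0.5; the resolvents J_A, J_B are then
# the projections onto these sets, and fixed points yield a point in
# their intersection.

norm = np.linalg.norm
proj_ball = lambda z: z if norm(z) <= 1 else z / norm(z)
proj_line = lambda z: np.array([z[0], 0.5])

z = np.array([3.0, -2.0])
for _ in range(100):
    x = proj_ball(z)                    # x_k     = J_A(z_k)
    z = z + proj_line(2 * x - z) - x    # z_{k+1} = z_k + J_B(2x_k - z_k) - x_k
print('shadow point:', proj_ball(z))    # lies in ball intersect line
```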
  5. Generative adversarial networks (GANs) are powerful tools for learning generative models, but in practice their training may suffer from a lack of convergence. GANs are commonly viewed as a two-player zero-sum game between two neural networks, and here we leverage this game-theoretic view to study the convergence behavior of the training process. Inspired by the fictitious play learning process, we introduce a novel training method referred to as Fictitious GAN, which trains the deep neural networks using a mixture of historical models. Specifically, the discriminator (resp. generator) is updated according to the best response to the mixture of outputs from a sequence of previously trained generators (resp. discriminators). We show that Fictitious GAN can effectively resolve some convergence issues that cannot be resolved by the standard training approach, and we prove that, asymptotically, the average of the generator outputs has the same distribution as the data samples.
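The following PyTorch sketch illustrates the fictitious-play update pattern described above, with each network trained against a uniform mixture of the opponent's recent historical copies. The toy 1-D data, network sizes, history length, and hyperparameters are our own assumptions, not the paper's configuration.

```python
import copy
import torch
import torch.nn as nn

# Sketch of fictitious-play-style GAN training on 1-D Gaussian data:
# the discriminator best-responds to samples from a mixture of past
# generators, and the generator best-responds to a mixture of past
# discriminators. All sizes and hyperparameters are toy choices.

torch.manual_seed(0)
real = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # data: N(2, 0.25)

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 16), nn.Tanh(), nn.Linear(16, o))

G, D = mlp(1, 1), mlp(1, 1)
optG = torch.optim.Adam(G.parameters(), lr=1e-3)
optD = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
hist_G, hist_D = [], []                          # historical snapshots

for step in range(2000):
    hist_G.append(copy.deepcopy(G))
    hist_D.append(copy.deepcopy(D))
    hist_G, hist_D = hist_G[-10:], hist_D[-10:]  # cap the memory

    # Discriminator: best-respond to the mixture of past generators.
    z = torch.randn(64, 1)
    fake = torch.cat([g(z).detach() for g in hist_G])
    dx, df = D(real(64)), D(fake)
    lossD = bce(dx, torch.ones_like(dx)) + bce(df, torch.zeros_like(df))
    optD.zero_grad(); lossD.backward(); optD.step()

    # Generator: best-respond to the mixture of past discriminators.
    out = G(torch.randn(64, 1))
    lossG = sum(bce(d(out), torch.ones(64, 1)) for d in hist_D) / len(hist_D)
    optG.zero_grad(); lossG.backward(); optG.step()

print('generated mean (target ~2):', G(torch.randn(1000, 1)).mean().item())
```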