On the Linear Convergence of Forward–Backward Splitting Method: Part I—Convergence Analysis
- Award ID(s):
- 1816449
- PAR ID:
- 10309056
- Date Published:
- Journal Name:
- Journal of Optimization Theory and Applications
- Volume:
- 188
- Issue:
- 2
- ISSN:
- 0022-3239
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Science is increasingly a collaborative pursuit. Although the modern scientific enterprise owes much to individuals working at the core of their field, humanity is increasingly confronted by highly complex problems that require the integration of a variety of disciplinary and methodological expertise. In 2016, the U.S. National Science Foundation launched an initiative prioritizing support for convergence research as a means of “solving vexing research problems, in particular, complex problems focusing on societal needs.” We discuss our understanding of the objectives of convergence research and describe in detail the conditions and processes likely to generate successful convergence research. We use our recent experience as participants in a convergence workshop series focused on resilience in the Arctic to highlight key points. The emergence of resilience science over the past 50 years is presented as a successful contemporary example of the emergence of convergence. We close by describing some of the challenges to the development of convergence research, such as timescales and discounting the future, appropriate metrics of success, allocation issues, and funding agency requirements.
-
Abstract: Consider an analytic Hamiltonian system near its analytic invariant torus $\mathcal{T}_0$ carrying zero frequency. We assume that the Birkhoff normal form of the Hamiltonian at $\mathcal{T}_0$ is convergent and has a particular form: it is an analytic function of its non-degenerate quadratic part. We prove that in this case there is an analytic canonical transformation, not just a formal power series, bringing the Hamiltonian into its Birkhoff normal form.
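Schematically, and in hypothetical notation not taken from the paper, the hypothesis and conclusion can be written as follows (here $N$ denotes the Birkhoff normal form and $Q$ its quadratic part):

```latex
% Hypothesis: the Birkhoff normal form at \mathcal{T}_0 is an analytic
% function of its non-degenerate quadratic part,
N(x, y) = f\bigl(Q(x, y)\bigr),
\qquad Q \text{ non-degenerate quadratic}, \quad f \text{ analytic}.
% Conclusion: there exists an analytic (not merely formal) canonical
% transformation \Phi defined near \mathcal{T}_0 with
H \circ \Phi = N.
```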
-
Theoretical Findings Validate Historical Data Reuse for Improved Policy Optimization. A new study, “Reusing Historical Trajectories in Natural Policy Gradient via Importance Sampling: Convergence and Convergence Rate” by Yifan Lin, Yuhao Wang, and Enlu Zhou, explores an advanced approach to reinforcement learning. The research focuses on improving policy optimization by reusing historical trajectories through importance sampling in natural policy gradient methods. The authors rigorously analyze the convergence properties of this approach and demonstrate that reusing past data enhances convergence rates while maintaining theoretical guarantees. Their findings have practical implications for applications where data collection is costly or limited, such as robotics and autonomous systems. By integrating these insights into policy optimization frameworks, the study provides a valuable contribution to the field of reinforcement learning.
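The reuse mechanism at the heart of this approach, importance sampling, can be sketched in a few lines. The snippet below is a minimal single-step (bandit-style) illustration, not the trajectory-level estimator analyzed in the paper; all policy names and numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for old/new policies over 3 actions (the paper's method
# operates on full trajectories; a one-step bandit keeps the idea visible)
p_old = np.array([0.5, 0.3, 0.2])    # behavior policy that generated the data
p_new = np.array([0.2, 0.3, 0.5])    # current policy being evaluated
rewards = np.array([1.0, 2.0, 3.0])  # reward of each action

# "Historical" data collected under p_old
actions = rng.choice(3, size=200_000, p=p_old)

# Importance-sampling estimate of E_{p_new}[reward] using only old data:
# reweight each historical sample by the likelihood ratio p_new / p_old
weights = p_new[actions] / p_old[actions]
is_estimate = np.mean(weights * rewards[actions])

# Ground truth for comparison (computable here because the toy problem is tiny)
true_value = float(np.dot(p_new, rewards))
```

The likelihood-ratio weights are what let old trajectories stand in for fresh samples from the current policy, at the price of extra variance, which is where the paper's convergence-rate analysis comes in.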
-
We explore convergence of deep neural networks with the popular ReLU activation function as the depth of the networks tends to infinity. To this end, we introduce the notions of activation domains and activation matrices of a ReLU network. By replacing applications of the ReLU activation function with multiplications by activation matrices on activation domains, we obtain an explicit expression for the ReLU network. We then identify the convergence of the ReLU networks with the convergence of a class of infinite products of matrices, and study sufficient and necessary conditions for convergence of these infinite products. As a result, we establish necessary conditions for a ReLU network to converge: the sequence of weight matrices must converge to the identity matrix and the sequence of bias vectors must converge to zero as the depth of the network increases to infinity. Moreover, we obtain sufficient conditions, in terms of the weight matrices and bias vectors at the hidden layers, for pointwise convergence of deep ReLU networks. These results provide mathematical insight into the convergence of deep neural networks. Experiments are conducted to verify the results numerically and to illustrate their potential usefulness in the initialization of deep neural networks.
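The activation-matrix idea can be checked numerically. The sketch below, with made-up dimensions and near-identity weights echoing the necessary conditions above, verifies that applying ReLU at each layer agrees with multiplying by a 0/1 diagonal activation matrix determined by which coordinates are active:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def activation_matrix(z):
    # Diagonal 0/1 matrix encoding which coordinates of z are active (> 0)
    return np.diag((z > 0).astype(float))

rng = np.random.default_rng(0)
depth, width = 5, 3
# Near-identity weights and small biases, echoing the necessary conditions
W = [np.eye(width) + 0.01 * rng.standard_normal((width, width)) for _ in range(depth)]
b = [0.01 * rng.standard_normal(width) for _ in range(depth)]

x = rng.standard_normal(width)

# Standard forward pass through the ReLU network
y_relu = x
for Wk, bk in zip(W, b):
    y_relu = relu(Wk @ y_relu + bk)

# Same pass via activation matrices: relu(z) == D(z) @ z on each activation domain
y_mat = x
for Wk, bk in zip(W, b):
    z = Wk @ y_mat + bk
    y_mat = activation_matrix(z) @ z
```

Because each layer becomes a product of a diagonal activation matrix with an affine map, the deep network output is an (eventually infinite) product of matrices, which is exactly the object whose convergence the paper analyzes.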

