We propose a continuous-time second-order optimization algorithm for solving unconstrained convex optimization problems with bounded Hessian. We show that this alternative algorithm has a convergence rate comparable to that of the continuous-time Newton–Raphson method, yet its structure makes it amenable to a more efficient distributed implementation. We present a distributed implementation of the proposed optimization algorithm and prove its convergence via Lyapunov analysis. A numerical example demonstrates our results.
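For reference, the continuous-time Newton–Raphson method against which the proposed algorithm is compared drives the state along the flow dx/dt = −[∇²f(x)]⁻¹∇f(x). The sketch below is a minimal, centralized illustration of that flow under forward-Euler discretization; the test function, step size, and starting point are assumptions made for illustration, and the distributed algorithm proposed in the abstract is not reproduced here.

```python
import numpy as np

def newton_raphson_flow(grad, hess, x0, dt=0.05, steps=400, tol=1e-10):
    """Forward-Euler integration of the Newton flow dx/dt = -H(x)^{-1} grad f(x).

    A sketch of the classical continuous-time Newton-Raphson dynamics only;
    the distributed variant described in the abstract is not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - dt * np.linalg.solve(hess(x), g)  # Euler step along the flow
    return x

# Illustrative convex objective with bounded Hessian (an assumption, not from the paper):
# f(x) = 0.5 x'Ax + sum(log cosh x_i), whose unique minimizer is the origin.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x + np.tanh(x)
hess = lambda x: A + np.diag(1.0 / np.cosh(x) ** 2)

print(newton_raphson_flow(grad, hess, x0=[2.0, -1.5]))  # approaches [0, 0]
```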
                    
                            
                            A note on local convergence of iterative processes for pipe network analysis
                        
                    
    
Abstract: Analysis of pipe networks involves computing flow rates and pressure differences on pipe segments in the network, given the external inflow/outflow values. This analysis can be conducted using iterative methods, among which the algorithms of Hardy Cross and Newton–Raphson have historically been applied in practice. In this note, we address the mathematical analysis of the local convergence of these algorithms. The loop-based Newton–Raphson algorithm converges quadratically fast, and we provide estimates for its convergence radius to correct some estimates in the previous literature. In contrast, we show that the convergence of the Hardy Cross algorithm is only linear. This provides theoretical confirmation of experimental observations reported earlier in the literature.
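To make the two iterations concrete, the sketch below applies one-loop-at-a-time Hardy Cross corrections and the coupled loop-based Newton–Raphson step to the same problem. The three-pipe parallel network, resistance values, and quadratic head-loss law are illustrative assumptions rather than data from the paper; the printed loop residuals exhibit the linear versus quadratic decay discussed above.

```python
import numpy as np

# Hypothetical three-pipe parallel network between two junctions (illustrative only).
# Head loss on pipe i: h_i(Q_i) = r_i * Q_i * |Q_i|, i.e. exponent n = 2.
r = np.array([2.0, 4.0, 8.0])       # assumed pipe resistance coefficients
n = 2.0
B = np.array([[1.0, -1.0, 0.0],     # loop-incidence matrix: two independent loops
              [0.0, 1.0, -1.0]])
Q0 = np.array([0.5, 0.3, 0.2])      # initial flows satisfying node continuity

def h(Q):  return r * Q * np.abs(Q)          # signed head losses
def dh(Q): return n * r * np.abs(Q)          # derivative of the head-loss law

def newton_sweep(Q):
    """Loop-based Newton-Raphson: solve the coupled system for all loop corrections."""
    F = B @ h(Q)                              # energy imbalance around each loop
    J = B @ np.diag(dh(Q)) @ B.T              # Jacobian w.r.t. the loop corrections
    return Q + B.T @ np.linalg.solve(J, -F)   # push corrections back onto pipe flows

def hardy_cross_sweep(Q):
    """Hardy Cross: correct one loop at a time, ignoring the coupling between loops."""
    Q = Q.copy()
    for b in B:
        dQ = -(b @ h(Q)) / (b**2 @ dh(Q))     # scalar correction for this loop
        Q = Q + dQ * b                        # node continuity is preserved
    return Q

Qn, Qh = Q0.copy(), Q0.copy()
for it in range(1, 7):
    Qn, Qh = newton_sweep(Qn), hardy_cross_sweep(Qh)
    print(f"sweep {it}:  NR residual {np.linalg.norm(B @ h(Qn)):.2e}"
          f"   Hardy Cross residual {np.linalg.norm(B @ h(Qh)):.2e}")
```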
        
    
- Award ID(s): 2153723
- PAR ID: 10581122
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Computational and Applied Mathematics
- Volume: 44
- Issue: 5
- ISSN: 2238-3603
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Stochastic approximation (SA) algorithms are recursive techniques used to obtain the roots of functions that can be expressed as expectations of a noisy parameterized family of functions. In this paper two new SA algorithms are introduced: (1) PolSA, an extension of Polyak's momentum technique with a specially designed matrix momentum, and (2) NeSA, which can either be regarded as a variant of Nesterov's acceleration method or as a simplification of PolSA. The rates of convergence of SA algorithms are well understood. Under special conditions, the mean square error of the parameter estimates is bounded by σ²/n + o(1/n), where σ² ≥ 0 is an identifiable constant. If these conditions fail, the rate is typically sub-linear. There are two well-known SA algorithms that ensure a linear rate with the minimal variance σ²: the Ruppert–Polyak averaging technique and the stochastic Newton–Raphson (SNR) algorithm. It is demonstrated here that, under mild technical assumptions, the PolSA algorithm also achieves this optimality criterion. This result is established via novel coupling arguments: it is shown that the parameter estimates obtained from the PolSA algorithm couple with those of the optimal-variance (but computationally more expensive) SNR algorithm at a rate O(1/n²). The newly proposed algorithms are extended to a reinforcement learning setting to obtain new Q-learning algorithms, and numerical results confirm the coupling of PolSA and SNR. (A baseline SA sketch with Ruppert–Polyak averaging appears after this list.)
- 
Summary: This paper presents a control technique for output tracking of reference signals in continuous-time dynamical systems. The technique comprises three elements: (i) a fluid-flow version of the Newton–Raphson method for solving algebraic equations, (ii) a system-output prediction, which has to track the future reference signal, and (iii) a speedup of the control action for enhancing the tracker's accuracy and, in some cases, stabilizing the closed-loop system. The technique is applicable to linear and nonlinear systems, is implementable by simple algorithms, and can track reference points as well as time-dependent reference signals. Though inherently local, the tracking controller is proven to achieve global convergence for a class of linear systems. The theoretical results of the paper include convergence of the tracking controller and error analysis, and they are supported by illustrative simulations and laboratory experiments.
- 
Abstract: In this paper, we study and prove the non-asymptotic superlinear convergence rate of the Broyden class of quasi-Newton algorithms, which includes the Davidon–Fletcher–Powell (DFP) method and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. The asymptotic superlinear convergence rate of these quasi-Newton methods has been extensively studied in the literature, but their explicit finite-time local convergence rate has not been fully investigated. In this paper, we provide a finite-time (non-asymptotic) convergence analysis for Broyden quasi-Newton algorithms under the assumptions that the objective function is strongly convex, its gradient is Lipschitz continuous, and its Hessian is Lipschitz continuous at the optimal solution. We show that in a local neighborhood of the optimal solution, the iterates generated by both DFP and BFGS converge to the optimal solution at a superlinear rate of $(1/k)^{k/2}$, where $k$ is the number of iterations. We also prove that a similar local superlinear convergence result holds when the objective function is self-concordant. Numerical experiments on several datasets confirm our explicit convergence-rate bounds. Our theoretical guarantee is one of the first results that provide a non-asymptotic superlinear convergence rate for quasi-Newton methods. (A minimal BFGS update sketch appears after this list.)
- 
Quasi-Newton algorithms are among the most popular iterative methods for solving unconstrained minimization problems, largely due to their favorable superlinear convergence property. However, existing results for these algorithms are limited, as they provide either (i) a global convergence guarantee with an asymptotic superlinear convergence rate, or (ii) a local non-asymptotic superlinear rate for the case that the initial point and the initial Hessian approximation are chosen properly. In particular, no current analysis for quasi-Newton methods guarantees global convergence with an explicit superlinear convergence rate. In this paper, we close this gap and present the first globally convergent quasi-Newton method with an explicit non-asymptotic superlinear convergence rate. Unlike classical quasi-Newton methods, we build our algorithm upon the hybrid proximal extragradient method and propose a novel online learning framework for updating the Hessian approximation matrices. Specifically, guided by the convergence analysis, we formulate the Hessian approximation update as an online convex optimization problem in the space of matrices, and we relate the bounded regret of the online problem to the superlinear convergence of our method.
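As a point of reference for the first record above (PolSA/NeSA), the sketch below implements only the baseline it is compared against: a plain stochastic-approximation recursion with Ruppert–Polyak iterate averaging on a made-up noisy linear root-finding problem. PolSA's matrix momentum and the NeSA variant are not reproduced; the problem data, gain sequence, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy problem: find theta solving A theta = b, observing only a noisy
# evaluation of the field A theta - b at each step (not data from the paper).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
theta_star = np.linalg.solve(A, b)

def noisy_field(theta):
    return A @ theta - b + rng.normal(scale=0.5, size=2)  # unbiased noisy observation

theta = np.zeros(2)
running_sum = np.zeros(2)
N = 20000
for k in range(1, N + 1):
    a_k = 1.0 / k**0.75                        # slowly decaying gain, as averaging requires
    theta = theta - a_k * noisy_field(theta)   # plain SA iterate
    running_sum += theta

theta_avg = running_sum / N                    # Ruppert-Polyak averaged estimate
print("last-iterate error:", np.linalg.norm(theta - theta_star))
print("averaged error:    ", np.linalg.norm(theta_avg - theta_star))
```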
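For the record above on non-asymptotic superlinear rates of the Broyden class, the sketch below is a minimal BFGS inverse-Hessian recursion with unit step sizes, i.e. the local phase that such analyses cover. The strongly convex test function, starting point, and iteration budget are assumptions, and no line search or safeguards are included.

```python
import numpy as np

def bfgs(grad, x0, iters=50, tol=1e-12):
    """Plain BFGS with unit steps, valid near a strongly convex minimizer.

    A minimal sketch of the Broyden-class (BFGS) update discussed above;
    line search and trust-region safeguards are deliberately omitted.
    """
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                       # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - H @ g                    # quasi-Newton step with unit step size
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        rho = 1.0 / (y @ s)                  # curvature y's > 0 for strongly convex f
        V = np.eye(x.size) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse-Hessian update
        x, g = x_new, g_new
    return x

# Assumed strongly convex test problem: f(x) = 0.5 x'Ax + 0.025 * sum(x_i^4).
A = np.array([[1.2, 0.3], [0.3, 0.9]])
grad = lambda x: A @ x + 0.1 * x**3
print(bfgs(grad, x0=[0.5, -0.5]))            # converges rapidly to the origin
```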