Award ID contains: 1907658

  1. The accuracy of many downstream machine learning algorithms is tied to the training data having uncorrelated features. Because modern data are often streaming in nature, geographically distributed, and high-dimensional, it is essential to perform both uncorrelated feature learning and dimensionality reduction in this setting. Principal Component Analysis (PCA) is a state-of-the-art tool that simultaneously yields uncorrelated features and reduces data dimension by projecting data onto the eigenvectors of the population covariance matrix. This paper introduces a novel algorithm called Consensus-DIstributEd Generalized Oja (C-DIEGO), based on Oja's method, to estimate the dominant eigenvector of a population covariance matrix in a distributed, streaming setting. The algorithm considers a distributed network of arbitrarily connected nodes without a central coordinator and assumes data samples continuously arrive at the individual nodes in a streaming manner. The paper establishes that C-DIEGO can achieve an order-optimal convergence rate if nodes in the network are allowed enough consensus rounds per algorithmic iteration. Numerical results reported in the paper showcase the efficacy of the proposed algorithm.
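A minimal, hypothetical sketch of the kind of update C-DIEGO builds on may help fix ideas: each node takes a local Oja step on its own fresh streaming sample, then runs several consensus rounds with a doubly stochastic mixing matrix before renormalizing. The function names, the ring-graph mixing matrix `W`, the step size, and the consensus count below are illustrative assumptions, not the paper's exact algorithm or tuning.

```python
import numpy as np

def distributed_oja_sketch(streams, W, n_consensus, step=0.05, n_iters=500):
    """Each node takes a local Oja step on its own streaming sample, then
    averages estimates with its neighbors via the mixing matrix W."""
    n_nodes, dim = W.shape[0], streams(0).shape[1]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n_nodes, dim))          # one estimate per node
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for t in range(n_iters):
        X = streams(t)                               # fresh sample at each node
        V = V + step * X * np.sum(X * V, axis=1, keepdims=True)  # Oja step
        for _ in range(n_consensus):                 # consensus rounds
            V = W @ V
        V /= np.linalg.norm(V, axis=1, keepdims=True)
    return V

# Demo: 8 nodes on a ring graph, samples with a planted dominant direction u,
# so the population covariance is I + 9 * u u^T with dominant eigenvector u.
dim, n_nodes = 20, 8
rng = np.random.default_rng(1)
u = rng.standard_normal(dim)
u /= np.linalg.norm(u)
streams = lambda t: (rng.standard_normal((n_nodes, dim))
                     + 3.0 * rng.standard_normal((n_nodes, 1)) * u)
W = 0.5 * np.eye(n_nodes)                            # doubly stochastic ring mixing
for i in range(n_nodes):
    W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 0.25
V = distributed_oja_sketch(streams, W, n_consensus=3)
print(np.abs(V @ u))                                 # per-node alignment with u
```

Increasing `n_consensus` drives the node estimates toward a common network-wide Oja iterate, mirroring the abstract's point that enough consensus rounds per iteration are needed for the order-optimal rate.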
  2. This paper considers the problem of understanding the exit time of trajectories of gradient-related first-order methods from saddle neighborhoods under certain initial boundary conditions. Given the ‘flat’ geometry around saddle points, first-order methods can struggle to escape these regions quickly because of the small gradient magnitudes encountered there. In particular, while gradient-related first-order methods are known to escape strict-saddle neighborhoods, existing analytic techniques do not explicitly leverage the local geometry around saddle points to control the behavior of gradient trajectories. It is in this context that this paper puts forth a rigorous geometric analysis of the gradient-descent method around strict-saddle neighborhoods using matrix perturbation theory. In doing so, it provides a key result that can be used to generate an approximate gradient trajectory for any given initial conditions. In addition, the analysis leads to a linear exit-time solution for the gradient-descent method under certain necessary initial conditions, one that explicitly brings out the dependence on the problem dimension, the conditioning of the saddle neighborhood, and more, for a class of strict-saddle functions.
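To make the exit-time question concrete, the following small experiment (an illustration, not the paper's analysis) runs gradient descent on the quadratic strict-saddle function f(x) = ½ xᵀHx and measures how long trajectories take to leave a ball around the saddle as the initial component along the unstable direction shrinks. The Hessian `H`, step size, and radius are arbitrary assumptions chosen for the demo.

```python
import numpy as np

def exit_time(H, x0, radius=1.0, step=0.1, max_iters=10_000):
    """Iterations for gradient descent on f(x) = 0.5 * x^T H x to leave
    the ball of the given radius around the strict saddle at the origin."""
    x = x0.copy()
    for t in range(max_iters):
        if np.linalg.norm(x) > radius:
            return t
        x = x - step * (H @ x)          # gradient of the quadratic is H x
    return max_iters

H = np.diag([1.0, 0.5, -0.1])           # one negative eigenvalue: strict saddle
rng = np.random.default_rng(0)
for scale in (1e-2, 1e-4, 1e-6):
    x0 = 0.1 * rng.standard_normal(3)   # start inside the saddle neighborhood
    x0[2] = scale                       # initial mass on the unstable direction
    print(f"unstable component {scale:.0e}: exit after {exit_time(H, x0)} steps")
```

Along each Hessian eigenvector the iterate is scaled by (1 − ηλᵢ) per step, so only the negative eigenvalue produces growth; the printed exit times lengthen as the initial unstable component shrinks, which is the kind of dependence on initial conditions and saddle conditioning that the paper's geometric analysis makes precise.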