Title: Decentralized Low Rank Matrix Recovery from Column-Wise Projections by Alternating GD and Minimization
This work studies our recently developed decentralized alternating projected gradient descent algorithm (Dec-AltGDmin) for recovering a low rank (LR) matrix from independent column-wise linear projections in a decentralized setting: the observed data is spread across L agents and there is no central coordinating node. Because this problem is non-convex and involves a subspace recovery step, most existing results from the decentralized optimization literature are not applicable. Using extensive numerical simulations and comparisons of communication, time, and sample complexity, we demonstrate that (i) existing decentralized gradient descent (GD) approaches fail, and (ii) other common solution approaches from the LR recovery literature (projected GD, alternating GD, and alternating minimization (AltMin)) either have a higher communication (and time) complexity or a higher sample complexity. Communication complexity is often the most important concern in decentralized learning.
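To make the alternating structure concrete, here is a minimal sketch of one AltGDmin-style iteration for LR column-wise sensing, written from the description in the abstract rather than from the authors' code. The measurement matrices A_k, the step size eta, the iteration count T, and the simple spectral-style initialization are illustrative assumptions; in the decentralized (Dec-AltGDmin) version, the gradient sum over columns would be approximated by average consensus across the L agents instead of being computed at a central node.

# Minimal sketch (not the authors' reference implementation) of an
# AltGDmin-style iteration for LR column-wise sensing. The matrices A_k,
# step size eta, iteration count T, and the spectral-style initialization
# below are illustrative assumptions.
import numpy as np

def altgdmin_lrcs(Y, A_list, r, eta=1.0, T=50):
    """Y: m x q observations (column y_k = A_k @ x_k); A_list: q matrices of size m x n."""
    m, q = Y.shape
    n = A_list[0].shape[1]
    # Initialization: top-r left singular vectors of [A_1^T y_1, ..., A_q^T y_q].
    X0 = np.column_stack([A_list[k].T @ Y[:, k] for k in range(q)])
    U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]
    for _ in range(T):
        # Minimization step: for fixed U, solve a small least-squares problem
        # per column to get b_k = argmin_b ||y_k - A_k U b||_2.
        B = np.column_stack([
            np.linalg.lstsq(A_list[k] @ U, Y[:, k], rcond=None)[0]
            for k in range(q)
        ])
        # GD step on U: gradient of sum_k ||y_k - A_k U b_k||_2^2 with respect to U.
        grad = np.zeros((n, r))
        for k in range(q):
            resid = A_list[k] @ (U @ B[:, k]) - Y[:, k]
            grad += np.outer(A_list[k].T @ resid, B[:, k])
        # In the decentralized setting each of the L agents holds only a subset
        # of the columns, so this sum would be obtained by average consensus
        # over the network rather than a central reduction.
        U = U - (eta / q) * grad
        U, _ = np.linalg.qr(U)   # projection step: re-orthonormalize the columns of U
    return U, B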
Award ID(s): 2213069
PAR ID: 10617066
Author(s) / Creator(s): ;
Publisher / Repository: IEEE
Date Published:
Journal Name: Collection itinéraires
ISSN: 2371-1906
ISBN: 979-8-3503-4485-1
Page Range / eLocation ID: 12936 to 12940
Format(s): Medium: X
Location: Seoul, Korea, Republic of
Sponsoring Org: National Science Foundation
More Like this
  1. This work studies our recently developed decentralized alternating projected gradient descent algorithm, Dec-AltProjGDmin, for solving the following low rank (LR) matrix recovery problem: recover an LR matrix from independent column-wise linear projections (LR column-wise compressive sensing). In recent work, we presented constructive convergence guarantees for Dec-AltProjGDmin under simple assumptions. By "constructive", we mean that an explicit bound on the convergence time needed to achieve any error level ε is provided. However, our guarantee was stated for the equal neighbor consensus algorithm (at each iteration, each node computes the average of the data of all its neighbors), whereas most existing results do not assume a specific consensus algorithm and instead state guarantees in terms of the eigenvalues of the weights matrix. In order to compare with these results, we first restate our result in this form. Our second and main contribution is a theoretical and experimental comparison of our new result with the best existing one from the decentralized GD literature that also provides a convergence time bound for values of ε that are large enough. The existing guarantee is for a different problem setting and holds under different assumptions than ours, so the comparison is not clear-cut. However, we are not aware of any other provably correct algorithms for decentralized LR matrix recovery in any other setting either.
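As a companion to the equal neighbor consensus algorithm described in the item above, here is a minimal sketch of that averaging step, written under the assumed (and common, but not stated in the abstract) convention that each node includes itself in its own neighborhood. The guarantees the item refers to are typically stated in terms of the eigenvalues of the resulting weight matrix W.

# Hedged sketch of equal-neighbor consensus averaging: at every iteration each
# node replaces its value with the average over its neighborhood. Including the
# node itself in its neighborhood is an assumption made for illustration.
import numpy as np

def equal_neighbor_consensus(values, adjacency, num_iters):
    """values: L x d array of per-node vectors; adjacency: L x L symmetric 0/1 matrix."""
    A = adjacency.astype(float).copy()
    np.fill_diagonal(A, 1.0)                      # add self-loops
    # Row-stochastic equal-neighbor weight matrix: W[i, j] = 1/deg(i) for j in N(i).
    W = A / A.sum(axis=1, keepdims=True)
    x = np.asarray(values, dtype=float)
    for _ in range(num_iters):
        x = W @ x                                 # each node averages its neighborhood
    return x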
  2. This work considers two related learning problems in a federated, attack-prone setting: federated principal components analysis (PCA) and federated low rank column-wise sensing (LRCS). The node attacks are assumed to be Byzantine, which means that the attackers are omniscient and can collude. We introduce a novel, provably Byzantine-resilient, communication-efficient, and sample-efficient algorithm, called Subspace-Median, that solves the PCA problem and is a key part of the solution for the LRCS problem. We also study the most natural Byzantine-resilient solution for federated PCA, a geometric-median-based modification of the federated power method, and explain why it is not useful. Our second main contribution is a complete alternating gradient descent (GD) and minimization (altGDmin) algorithm for Byzantine-resilient horizontally federated LRCS, along with sample and communication complexity guarantees for it. Extensive simulation experiments corroborate our theoretical guarantees. The ideas that we develop for LRCS are easily extendable to other LR recovery problems as well.
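For intuition about median-based resilient subspace aggregation, here is a minimal sketch of one way to combine per-node subspace estimates: take the geometric median (via Weiszfeld iterations) of their projection matrices and return its top-r eigenvectors. This illustrates the general idea only, under our own assumptions; it is not claimed to be the paper's Subspace-Median algorithm.

# Hedged illustration: geometric median of per-node projection matrices,
# followed by an eigendecomposition to extract an r-dimensional subspace.
import numpy as np

def geometric_median(points, num_iters=100, eps=1e-9):
    """Weiszfeld iterations for the geometric median of the rows of `points`."""
    median = points.mean(axis=0)
    for _ in range(num_iters):
        dists = np.maximum(np.linalg.norm(points - median, axis=1), eps)
        weights = 1.0 / dists
        median = (weights[:, None] * points).sum(axis=0) / weights.sum()
    return median

def median_of_subspaces(U_list, r):
    """U_list: per-node n x r orthonormal subspace estimates; returns an n x r basis."""
    n = U_list[0].shape[0]
    projections = np.stack([(U @ U.T).ravel() for U in U_list])
    P_med = geometric_median(projections).reshape(n, n)
    P_med = 0.5 * (P_med + P_med.T)               # symmetrize against round-off
    eigvals, eigvecs = np.linalg.eigh(P_med)
    return eigvecs[:, -r:]                        # top-r eigenvectors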
  3. In this work, we develop and analyze a novel Gradient Descent (GD) based solution, called Alternating GD and Minimization (AltGDmin), for efficiently solving the low rank matrix completion (LRMC) problem in a federated setting. Here, "efficient" refers to communication-, computation-, and sample-efficiency. LRMC involves recovering an n × q rank-r matrix X⋆ from a subset of its entries when r ≪ min(n, q). Our theoretical bounds on the sample complexity and iteration complexity of AltGDmin imply that it is the most communication-efficient solution while also being one of the most computation- and sample-efficient ones. We also extend our guarantee to the noisy LRMC setting. In addition, we show how our lemmas can be used to provide an improved sample complexity guarantee for the Alternating Minimization (AltMin) algorithm for LRMC. AltMin is one of the fastest centralized solutions for LRMC, and AltGDmin has a comparable time cost even in the centralized setting.
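Since the item above states the LRMC setup precisely (recover an n × q rank-r matrix from a subset of its entries), a compact sketch of an AltGDmin-style iteration specialized to matrix completion may help. The mask convention, spectral-style initialization, step size, and iteration count below are illustrative assumptions, not the paper's exact algorithm.

# Hedged sketch of an AltGDmin-style iteration for LRMC. Y holds the observed
# entries (zeros elsewhere) and M is the 0/1 observation mask; eta and T are
# placeholder parameters chosen for illustration.
import numpy as np

def altgdmin_lrmc(Y, M, r, eta=1.0, T=50):
    """Y: n x q matrix of observed entries (zero where unobserved); M: 0/1 mask of the same shape."""
    n, q = Y.shape
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :r]   # simple spectral-style init
    for _ in range(T):
        # Minimization step: per-column least squares restricted to observed rows.
        B = np.zeros((r, q))
        for k in range(q):
            rows = M[:, k].astype(bool)
            B[:, k] = np.linalg.lstsq(U[rows], Y[rows, k], rcond=None)[0]
        # GD step on U using the residual on observed entries only.
        resid = M * (U @ B - Y)
        U = U - (eta / q) * (resid @ B.T)
        U, _ = np.linalg.qr(U)                             # re-orthonormalize U
    return U @ B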
  4. This work develops a novel set of algorithms, alternating Gradient Descent (GD) and minimization for MRI (altGDmin-MRI1 and altGDmin-MRI2), for accelerated dynamic MRI, by assuming an approximate low-rank (LR) model on the matrix formed by the vectorized images of the sequence. The LR model itself is well known in the MRI literature; our contribution is the novel GD-based algorithms, which are much faster, more memory-efficient, and more ‘general’ than existing work, together with careful use of a 3-level hierarchical LR model. By ‘general’, we mean that, with a single choice of parameters, our method provides accurate reconstructions for multiple accelerated dynamic MRI applications, multiple sampling rates, and multiple sampling schemes. We show that our methods outperform many of the popular existing approaches while also being faster than all of them, on average. This claim is based on comparisons on 8 different retrospectively undersampled multi-coil dynamic MRI applications, sampled using either 1D Cartesian or 2D pseudo-radial undersampling, at multiple sampling rates. Evaluations on some prospectively undersampled datasets are also provided. Our second contribution is a mini-batch subspace tracking extension that can process new measurements and return reconstructions within a short delay after they arrive. The recovery algorithm itself is also faster than its batch counterpart.
  5. This note provides a significantly simpler and shorter proof of our sample complexity guarantee for solving the low rank column-wise sensing problem using the Alternating Gradient Descent (GD) and Minimization (AltGDmin) algorithm. AltGDmin was developed and analyzed for solving this problem in our recent work. We also provide an improved guarantee. 