Phase retrieval deals with the recovery of complex- or real-valued signals from magnitude measurements. As shown recently, the method PhaseMax enables phase retrieval via convex optimization without lifting the problem to a higher dimension. To succeed, PhaseMax requires an initial guess of the solution, which can be calculated via spectral initializers. In this paper, we show that with the availability of an initial guess, phase retrieval can be carried out with an even simpler, linear procedure. Our algorithm, called PhaseLin, is the linear estimator that minimizes the mean squared error (MSE) when applied to the magnitude measurements. The linear nature of PhaseLin enables an exact and nonasymptotic MSE analysis for arbitrary measurement matrices. We furthermore demonstrate that by iteratively using PhaseLin, one arrives at an efficient phase retrieval algorithm that performs on par with existing convex and nonconvex methods on synthetic and real-world data.
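The exact PhaseLin estimator follows from the paper's MSE analysis and is not reproduced here. As a hedged illustration of the core idea, that once the missing signs (phases) are estimated from an initial guess the update becomes a purely linear step, the NumPy sketch below alternates a sign estimate with an ordinary least-squares solve. The dimensions and the noise level of the initial guess are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Measurement model: recover x from magnitude-only measurements b = |A x|.
n, m = 32, 256                      # signal dim, number of measurements (hypothetical)
A = rng.standard_normal((m, n))     # real Gaussian measurement matrix
x_true = rng.standard_normal(n)
b = np.abs(A @ x_true)

# Initial guess: here the true signal plus noise; in practice a spectral
# initializer would supply it.
x = x_true + 0.5 * rng.standard_normal(n)

# Generic linear refinement (NOT the exact PhaseLin estimator): estimate
# the missing signs from the current guess, then solve an ordinary
# least-squares problem.  Once the signs are fixed, the update is linear.
for _ in range(20):
    signs = np.sign(A @ x)                        # sign (phase) estimate
    x, *_ = np.linalg.lstsq(A, signs * b, rcond=None)
```

Iterating this linear step from a spectral initializer mirrors the high-level pattern the abstract describes for iterative PhaseLin, although the true PhaseLin coefficients differ.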
Convex Phase Retrieval without Lifting via PhaseMax
Semidefinite relaxation methods transform a variety of non-convex optimization problems into convex problems, but square the number of variables. We study a new type of convex relaxation for phase retrieval problems, called PhaseMax, that convexifies the underlying problem without lifting. The resulting problem formulation can be solved using standard convex optimization routines, while still working in the original, low-dimensional variable space. We prove, using a random spherical distribution measurement model, that PhaseMax succeeds with high probability for a sufficiently large number of measurements. We compare our approach to other phase retrieval methods and demonstrate that our theory accurately predicts the success of PhaseMax.
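For intuition, here is a small, hedged sketch of the real-valued PhaseMax program, maximize <x0, x> subject to |a_i^T x| <= b_i, solved crudely by interleaving ascent steps on the linear objective with cyclic projections onto the slab constraints. A production implementation would hand the same convex program to a standard solver; all problem sizes and the step size here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Measurements b = |A x_true| and an anchor (initial guess) x0.
n, m = 20, 160
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = np.abs(A @ x_true)
x0 = x_true + 0.3 * rng.standard_normal(n)
x0 /= np.linalg.norm(x0)

def project_slab(x, a, bi):
    """Euclidean projection of x onto the convex slab {x : |a^T x| <= bi}."""
    v = a @ x
    if abs(v) > bi:
        x = x - a * (v - np.sign(v) * bi) / (a @ a)
    return x

# Real-valued PhaseMax:  maximize <x0, x>  s.t.  |a_i^T x| <= b_i.
x = x0.copy()
for _ in range(200):
    x = x + 0.01 * x0                 # ascend the linear objective <x0, x>
    for i in range(m):                # cyclic projections toward feasibility
        x = project_slab(x, A[i], b[i])

for _ in range(100):                  # pure projection sweeps to tighten feasibility
    for i in range(m):
        x = project_slab(x, A[i], b[i])
```

Note that the feasible set is an intersection of convex slabs and contains the true signal (which meets every constraint with equality), which is why pushing along the anchor direction while staying feasible recovers it when the PhaseMax recovery conditions hold.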
- Award ID(s):
- 1652065
- PAR ID:
- 10049097
- Date Published:
- Journal Name:
- Proceedings of the 34th International Conference on Machine Learning (ICML)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Holography has demonstrated potential to achieve a wide field of view, focus-supporting, optical see-through augmented reality display in an eyeglasses form factor. Although phase-modulating spatial light modulators are becoming available, phase-only hologram generation algorithms are still imprecise, resulting in severe artifacts in the reconstructed imagery. Because the holographic phase retrieval problem is non-linear, non-convex, and computationally expensive, with non-unique solutions, existing methods make several assumptions to render the phase-only hologram computation tractable. In this work, we deviate from any such approximations and solve the holographic phase retrieval problem as a quadratic problem using complex Wirtinger gradients and standard first-order optimization methods. Our approach results in high-quality phase hologram generation, with at least an order of magnitude improvement over existing state-of-the-art approaches.
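As a hedged toy version of this idea (not the paper's propagation model or optimizer), the sketch below fits a phase-only pattern by descending the Wirtinger gradient of a quadratic amplitude loss, with a unitary 2-D FFT standing in for propagation and a random target amplitude; all sizes and step sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy phase-only hologram fit: choose phi so |FFT(exp(i*phi))| matches a
# target amplitude.  Loss L(phi) = sum (|U| - target)^2, U = F exp(i*phi).
N = 16
target = np.abs(np.fft.fft2(rng.standard_normal((N, N)), norm="ortho"))

def loss_and_grad(phi):
    field = np.exp(1j * phi)
    U = np.fft.fft2(field, norm="ortho")
    amp = np.abs(U)
    r = amp - target
    # Wirtinger chain rule: dL/d(conj U) = r * U/|U|; pull back through
    # the unitary FFT via its adjoint (the unitary inverse FFT).
    g_field = np.fft.ifft2(r * U / (amp + 1e-12), norm="ortho")
    # d field / d phi = i * exp(i*phi)  =>  real gradient w.r.t. phi:
    grad = 2.0 * np.real(np.conj(g_field) * 1j * field)
    return float(np.sum(r ** 2)), grad

phi = rng.uniform(0.0, 2.0 * np.pi, (N, N))
losses = []
for _ in range(300):
    L, g = loss_and_grad(phi)
    losses.append(L)
    phi -= 0.1 * g           # plain gradient descent (a first-order method)
```

The same gradient can drive any standard first-order optimizer; in practice the propagation operator, loss, and optimizer choice would follow the paper rather than this toy.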
-
Sub-hourly Unit Commitment (UC) problems have been suggested as a way to improve power system efficiency. Such problems, however, are much more difficult than hourly UC problems, not only because of the increased number of periods to consider, but also because of much-reduced unit ramping capabilities, which lead to more complicated convex hulls. As a result, state-of-the-art methods used in practice, such as branch-and-cut, suffer from poor performance. In this paper, our recent Surrogate Absolute-Value Lagrangian Relaxation (SAVLR) method, which overcame major difficulties of standard Lagrangian relaxation, is enhanced by synergistically incorporating the concept of Ordinal Optimization (OO). By using OO, solving subproblems becomes much faster. Testing on a Midcontinent ISO (MISO) problem with 15-minute intervals over 36 hours, involving about 1,100 units and 15,000 virtuals, demonstrates that the new method obtains near-optimal solutions efficiently and significantly outperforms branch-and-cut.
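To make the decomposition idea concrete, here is plain Lagrangian relaxation on a hypothetical single-period toy (not SAVLR or its OO enhancement): relaxing the demand constraint with a multiplier makes each unit's on/off subproblem independent, which is what makes subproblem solving fast and parallelizable.

```python
# Toy single-period unit commitment: pick on/off u_i to serve demand at
# minimum cost.  All unit data are hypothetical.
costs  = [12.0, 15.0, 20.0, 30.0]   # $/MW for each unit
powers = [100.0, 80.0, 60.0, 40.0]  # MW output if committed
demand = 200.0                      # MW to serve

lam = 0.0                           # multiplier on the relaxed demand constraint
for _ in range(100):
    # Each unit minimizes (c_i - lam) * p_i * u_i independently: commit
    # exactly when the multiplier exceeds the unit's cost.
    u = [1 if (c - lam) * p < 0 else 0 for c, p in zip(costs, powers)]
    served = sum(p * ui for p, ui in zip(powers, u))
    lam = max(0.0, lam + 0.01 * (demand - served))   # subgradient step

# Simple primal heuristic: commit cheapest units until demand is met,
# since the relaxed solution need not be feasible on its own.
order = sorted(range(len(costs)), key=lambda i: costs[i])
u, served = [0] * len(costs), 0.0
for i in order:
    if served < demand:
        u[i], served = 1, served + powers[i]
```

Real sub-hourly UC adds ramping, minimum up/down times, and thousands of units and periods; the point of the sketch is only that relaxation decouples the units, the structure SAVLR exploits.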
-
Abstract: Advances in compressive sensing (CS) provided reconstruction algorithms of sparse signals from linear measurements with optimal sample complexity, but natural extensions of this methodology to nonlinear inverse problems have been met with potentially fundamental sample complexity bottlenecks. In particular, tractable algorithms for compressive phase retrieval with sparsity priors have not been able to achieve optimal sample complexity. This has created an open problem in compressive phase retrieval: under generic, phaseless linear measurements, are there tractable reconstruction algorithms that succeed with optimal sample complexity? Meanwhile, progress in machine learning has led to the development of new data-driven signal priors in the form of generative models, which can outperform sparsity priors with significantly fewer measurements. In this work, we resolve the open problem in compressive phase retrieval and demonstrate that generative priors can lead to a fundamental advance by permitting optimal sample complexity by a tractable algorithm. We additionally provide empirical results showing that exploiting generative priors in phase retrieval can significantly outperform sparsity priors. These results provide support for generative priors as a new paradigm for signal recovery in a variety of contexts, both empirically and theoretically. The strengths of this paradigm are that (1) generative priors can represent some classes of natural signals more concisely than sparsity priors, (2) generative priors allow for direct optimization over the natural signal manifold, which is intractable under sparsity priors, and (3) the resulting non-convex optimization problems with generative priors can admit benign optimization landscapes at optimal sample complexity, perhaps surprisingly, even in cases of nonlinear measurements.
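A minimal, hedged illustration of optimizing over a generative prior's range, with a hypothetical linear generator G(z) = W z standing in for a trained network: recover the low-dimensional latent code by (sub)gradient descent on the amplitude loss, using fewer measurements than the ambient dimension. Sizes, the informative initialization, and the step-size rule are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Signal assumed to lie in the range of G(z) = W z with latent dim k << n.
n, k, m = 64, 8, 48                  # note: m < n measurements
W = rng.standard_normal((n, k))      # fixed "generator" weights (hypothetical)
A = rng.standard_normal((m, n))
z_true = rng.standard_normal(k)
b = np.abs(A @ (W @ z_true))         # phaseless linear measurements

AW = A @ W
z = z_true + 0.3 * rng.standard_normal(k)         # informative init for the sketch
step = 1.0 / (2.0 * np.linalg.norm(AW, 2) ** 2)   # 1/L for the smooth part

losses = []
for _ in range(500):
    v = AW @ z
    r = np.abs(v) - b
    losses.append(float(np.sum(r ** 2)))
    z = z - step * (2.0 * AW.T @ (r * np.sign(v)))  # subgradient of sum (|v|-b)^2
```

Because the search runs directly over the k-dimensional latent space, m < n measurements suffice here, the "direct optimization over the signal manifold" advantage the abstract describes; a real instance would use a trained deep generator and the paper's algorithm.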
-
We provide new gradient-based methods for efficiently solving a broad class of ill-conditioned optimization problems. We consider the problem of minimizing a function f : R^d -> R which is implicitly decomposable as the sum of m unknown non-interacting smooth, strongly convex functions, and provide a method which solves this problem with a number of gradient evaluations that scales (up to logarithmic factors) as the product of the square roots of the condition numbers of the components. This complexity bound (which we prove is nearly optimal) can improve almost exponentially on that of accelerated gradient methods, which grows as the square root of the condition number of f. Additionally, we provide efficient methods for solving stochastic, quadratic variants of this multiscale optimization problem. Rather than learn the decomposition of f (which would be prohibitively expensive), our methods apply a clean recursive "Big-Step-Little-Step" interleaving of standard methods. The resulting algorithms use Õ(dm) space, are numerically stable, and open the door to a more fine-grained understanding of the complexity of convex optimization beyond condition number.
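The following sketch illustrates the condition-number bottleneck this line of work attacks, not the Big-Step-Little-Step algorithm itself: plain gradient descent on a diagonal quadratic needs a number of iterations that grows with the condition number kappa, which is why interleaving steps tuned to each component's scale can pay off.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * sum(s_i * x_i^2) with step 1/L,
# where L = max(s) is the largest curvature and kappa = max(s)/min(s).
def gd_iters_to_tol(s, tol=1e-6):
    """Return the iteration count to reach max|x_i| < tol, or None."""
    x = np.ones_like(s)
    step = 1.0 / s.max()             # the classical 1/L step size
    for it in range(1, 200001):
        x = x - step * (s * x)       # gradient of the diagonal quadratic
        if np.max(np.abs(x)) < tol:
            return it
    return None

well = gd_iters_to_tol(np.array([1.0, 2.0, 4.0]))     # kappa = 4
ill  = gd_iters_to_tol(np.array([1.0, 2.0, 400.0]))   # kappa = 400
```

Here the iteration count scales roughly linearly in kappa (acceleration improves this to sqrt(kappa)); the paper's bound, a product of the components' sqrt(kappa) values, can be far smaller still when the components live at well-separated scales.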