We develop a projected Nesterov’s proximal-gradient (PNPG) scheme for reconstructing sparse signals from compressive Poisson-distributed measurements whose mean signal intensity follows an affine model with known intercept. The objective function to be minimized is a sum of convex data-fidelity (negative log-likelihood (NLL)) and regularization terms. We apply sparse signal regularization where the signal belongs to a nonempty closed convex set within the domain of the NLL and signal sparsity is imposed using a total-variation (TV) penalty. We present analytical upper bounds on the regularization tuning constant. The proposed PNPG method employs a projected Nesterov’s acceleration step, function restart, and an adaptive step-size selection scheme that aims at obtaining a good local majorizing function of the NLL and reducing the time spent backtracking. We establish O(k⁻²) convergence of the PNPG method with step-size backtracking only and no restart. Numerical examples demonstrate the performance of the PNPG method.
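The core iteration can be sketched in a few lines. The following is a minimal, hedged illustration of a projected Nesterov proximal-gradient loop with step-size backtracking; the function restart rule and the paper's specific adaptive step-size scheme are omitted, and all callables (`nll`, `nll_grad`, `prox`, `project`) are placeholders a user would supply.

```python
import numpy as np

def pnpg_sketch(nll, nll_grad, prox, project, x0, n_iter=100,
                step0=1.0, backtrack=0.5):
    """Minimal projected Nesterov proximal-gradient loop with backtracking.

    nll(x)      -- convex data-fidelity (negative log-likelihood) value
    nll_grad(x) -- its gradient
    prox(v, t)  -- proximal mapping of the regularizer with step t
    project(x)  -- projection onto the closed convex feasible set
    """
    x_prev = x = project(np.asarray(x0, dtype=float))
    theta_prev = 1.0
    t = step0
    for _ in range(n_iter):
        # Nesterov momentum (extrapolation) step, projected back onto the set
        theta = (1.0 + np.sqrt(1.0 + 4.0 * theta_prev**2)) / 2.0
        y = project(x + (theta_prev - 1.0) / theta * (x - x_prev))
        g = nll_grad(y)
        while True:
            # backtrack until the quadratic model at y majorizes the NLL
            x_new = prox(y - t * g, t)
            d = x_new - y
            if nll(x_new) <= nll(y) + g.ravel() @ d.ravel() \
                    + d.ravel() @ d.ravel() / (2.0 * t):
                break
            t *= backtrack
        x_prev, x, theta_prev = x, x_new, theta
    return x
```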
Upper-bounding the regularization constant for convex sparse signal reconstruction
Consider reconstructing a signal x by minimizing a weighted sum of a convex differentiable negative log-likelihood (NLL) (data-fidelity) term and a convex regularization term that imposes a convex-set constraint on x and enforces its sparsity using ℓ1-norm analysis regularization. We compute upper bounds on the regularization tuning constant beyond which the regularization term overwhelmingly dominates the NLL term, so that the set of minimum points of the objective function does not change. Necessary and sufficient conditions for irrelevance of sparse signal regularization and a condition for the existence of finite upper bounds are established. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized by a feasible x, and also develop an alternating direction method of multipliers (ADMM) type method for their computation. Simulation examples show that the derived and empirical bounds match.
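For intuition, here is a hedged numerical illustration of the simplest special case only (unconstrained ℓ1 synthesis regularization with a Gaussian-linear data-fidelity term, not the general constrained analysis setting treated above): the classical bound λ ≥ ‖∇f(0)‖∞ = ‖Aᵀy‖∞ guarantees that x = 0, the global minimizer of the regularizer, also minimizes the penalized objective, so larger λ no longer changes the set of minimizers. The matrix A, measurements y, and the ISTA check below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))   # illustrative sensing matrix
y = rng.standard_normal(50)          # illustrative measurements

# Gaussian-linear NLL f(x) = 0.5 * ||A x - y||^2; the l1 regularizer ||x||_1
# is globally minimized by x = 0, and the classical bound is
#   lambda_max = ||grad f(0)||_inf = ||A^T y||_inf:
# for lambda >= lambda_max, x = 0 also minimizes f(x) + lambda * ||x||_1,
# so increasing lambda further no longer changes the set of minimizers.
lambda_max = np.abs(A.T @ y).max()
print("lambda_max =", lambda_max)

def ista(lam, n_iter=2000):
    """Plain proximal-gradient (ISTA) solver, used only to verify the bound."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2                     # 1 / Lipschitz constant
    for _ in range(n_iter):
        v = x - t * (A.T @ (A @ x - y))
        x = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # soft threshold
    return x

print("||x|| at 1.01 * lambda_max:", np.linalg.norm(ista(1.01 * lambda_max)))  # ~0
print("||x|| at 0.50 * lambda_max:", np.linalg.norm(ista(0.50 * lambda_max)))  # > 0
```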
- Award ID(s): 1421480
- PAR ID: 10023710
- Date Published:
- Journal Name: arXiv.org
- ISSN: 2331-8422
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We develop a projected Nesterov’s proximal-gradient (PNPG) approach for sparse signal reconstruction that combines adaptive step size with Nesterov’s momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity (negative log-likelihood (NLL)) term and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ₁-norm penalty on the signal’s linear transform coefficients. The PNPG approach employs a projected Nesterov’s acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than the methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of O(k⁻²) objective function convergence rate and convergence of the iterates, which account for adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach.
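The duality-based inner iteration mentioned above can be illustrated with a hedged sketch: the proximal mapping of an analysis ℓ1 penalty t‖Wx‖₁ is computed by projected gradient on the dual variable, in the spirit of Chambolle-type TV solvers. The operator W, the iteration count, and the test signal below are illustrative choices, not the paper's exact inner solver.

```python
import numpy as np

def prox_analysis_l1(v, W, t, n_inner=100):
    """Proximal mapping of x -> t * ||W x||_1 via a duality-based inner iteration.

    Solves min_q 0.5 * ||v - W^T q||^2  subject to  ||q||_inf <= t
    by projected gradient, then returns the primal point x = v - W^T q.
    """
    q = np.zeros(W.shape[0])
    step = 1.0 / max(np.linalg.norm(W, 2) ** 2, 1e-12)      # 1 / ||W||^2
    for _ in range(n_inner):
        q = np.clip(q + step * (W @ (v - W.T @ q)), -t, t)  # gradient step + box projection
    return v - W.T @ q

# 1-D total variation as a special case: W is the first-difference operator
n = 6
W = np.diff(np.eye(n), axis=0)                 # (n-1) x n difference matrix
v = np.array([0.0, 0.0, 5.0, 5.0, 0.0, 0.0])
print(prox_analysis_l1(v, W, t=1.0))           # the jump gets shrunk toward flat
```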
-
We study a 1-dimensional discrete signal denoising problem that consists of minimizing a sum of separable convex fidelity terms and convex regularization terms, the latter penalizing the differences of adjacent signal values. This problem generalizes the total-variation regularization problem. We provide a unified approach to solving the problem for general convex fidelity and regularization functions that is based on the Karush–Kuhn–Tucker optimality conditions. This approach is shown to lead to a fast algorithm for the problem with general convex fidelity and regularization functions, and to a faster algorithm if, in addition, the fidelity functions are differentiable and the regularization functions are strictly convex. Both algorithms achieve the best theoretical worst-case complexity among existing algorithms for the classes of objective functions studied here. In practice, our C++ implementation of the method is also considerably faster than popular C++ nonlinear optimization solvers for this problem.
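As a hedged illustration of the problem class only (not of the KKT-based algorithm described above), the snippet below states a small instance with Huber fidelity terms and absolute-difference penalties and hands it to an off-the-shelf convex solver; the data, weights, and the use of CVXPY are assumptions made purely for the example.

```python
import numpy as np
import cvxpy as cp

# A tiny instance: Huber fidelity terms (separable, convex) plus absolute
# differences of adjacent samples (convex, TV-like), solved generically.
rng = np.random.default_rng(1)
y = np.repeat([0.0, 4.0, 1.0], 10) + 0.3 * rng.standard_normal(30)  # noisy piecewise-constant signal
lam = 2.0

x = cp.Variable(y.size)
fidelity = cp.sum(cp.huber(x - y, 1.0))          # separable convex fidelity terms
regularizer = lam * cp.sum(cp.abs(cp.diff(x)))   # penalties on adjacent differences
cp.Problem(cp.Minimize(fidelity + regularizer)).solve()
print(np.round(x.value, 2))                      # roughly piecewise constant
```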
-
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements for single-material objects and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. We establish conditions for biconvexity of the penalized NLL objective function, which, if satisfied, ensures monotonicity of the NPG-BFGS iteration. We also show that the penalized NLL objective satisfies the Kurdyka-Łojasiewicz property, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Simulation examples demonstrate the performance of the proposed scheme.
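A hedged structural sketch of this block-coordinate alternation follows. The NLL, its partial gradients, and the TV proximal operator are placeholders, and the single proximal-gradient image step stands in for the full NPG step with momentum and backtracking.

```python
import numpy as np
from scipy.optimize import minimize

def npg_bfgs_sketch(nll, grad_x, grad_s, prox_tv, x0, s0, n_outer=20, step=1e-3):
    """Structural sketch of the alternation only; all callables are placeholders.

    nll(x, s)     -- penalized Poisson NLL value
    grad_x(x, s)  -- its gradient with respect to the density-map image x
    grad_s(x, s)  -- its gradient with respect to the spectrum coefficients s
    prox_tv(v, t) -- proximal operator of the TV penalty with step t
    """
    x, s = np.asarray(x0, float).copy(), np.asarray(s0, float).copy()
    for _ in range(n_outer):
        # image block: one proximal-gradient step, then enforce nonnegativity
        x = np.maximum(prox_tv(x - step * grad_x(x, s), step), 0.0)
        # spectrum block: box-constrained quasi-Newton update of the coefficients
        res = minimize(lambda s_: nll(x, s_), s,
                       jac=lambda s_: grad_s(x, s_),
                       method="L-BFGS-B",
                       bounds=[(0.0, None)] * s.size,
                       options={"maxiter": 10})
        s = res.x
    return x, s
```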
-
We give nearly matching upper and lower bounds on the oracle complexity of finding ϵ-stationary points (∥∇F(x)∥ ≤ ϵ) in stochastic convex optimization. We jointly analyze the oracle complexity in both the local stochastic oracle model and the global oracle (or, statistical learning) model. This allows us to decompose the complexity of finding near-stationary points into optimization complexity and sample complexity, and reveals some surprising differences between the complexity of stochastic optimization versus learning. Notably, we show that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model. In other words, the separation in complexity between the two models can be exponential, and the folklore understanding that smoothness is required to find stationary points is only weakly true for statistical learning. Our upper bounds are based on extensions of a recent “recursive regularization” technique proposed by Allen-Zhu (2018). We show how to extend the technique to achieve near-optimal rates, and in particular show how to leverage the extra information available in the global oracle model. Our algorithm for the global model can be implemented efficiently through finite-sum methods, and suggests an interesting new computational-statistical tradeoff.
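As a rough, hedged sketch of the flavor of recursive regularization (not the precise algorithm of Allen-Zhu (2018) or of this paper), each outer round approximately minimizes a quadratically regularized surrogate centered at the current iterate and then recenters; the inner plain gradient descent and the fixed regularization weight are simplifications made for the example.

```python
import numpy as np

def recursive_regularization_sketch(grad_F, x0, lam=1.0, n_outer=10,
                                    inner_steps=200, inner_lr=1e-2):
    """Each outer round approximately minimizes F(x) + lam * ||x - center||^2
    (a better-conditioned surrogate), then recenters at the new iterate."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_outer):
        center = x.copy()
        for _ in range(inner_steps):
            # gradient of the regularized surrogate F(x) + lam * ||x - center||^2
            x = x - inner_lr * (grad_F(x) + 2.0 * lam * (x - center))
    return x
```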