We develop a projected Nesterov’s proximal-gradient (PNPG) scheme for reconstructing sparse signals from compressive Poisson-distributed measurements whose mean signal intensity follows an affine model with known intercept. The objective function to be minimized is a sum of convex data-fidelity (negative log-likelihood (NLL)) and regularization terms. We apply sparse signal regularization where the signal belongs to a nonempty closed convex set within the domain of the NLL, and signal sparsity is imposed using a total-variation (TV) penalty. We present analytical upper bounds on the regularization tuning constant. The proposed PNPG method employs a projected Nesterov’s acceleration step, function restart, and an adaptive step-size selection scheme that aims at obtaining a good local majorizing function of the NLL and reducing the time spent backtracking. We establish O(k⁻²) convergence of the PNPG method with step-size backtracking only and no restart. Numerical examples demonstrate the performance of the PNPG method.
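The PNPG ingredients above (projected momentum step, backtracked step size, function restart) can be sketched numerically. This is an illustrative stand-in, not the paper's implementation: it uses a Gaussian least-squares NLL in place of the Poisson NLL, the nonnegative orthant as the convex set, plain ℓ1 instead of TV, and all function and variable names are hypothetical.

```python
import numpy as np

def pnpg(f, grad, g, prox, project, x0, beta=1.0, xi=0.5, iters=300):
    """Sketch of a PNPG-style loop: projected Nesterov extrapolation,
    backtracked step size, and function restart. Illustrative only;
    the paper's adaptive scheme also grows beta, this one only shrinks it."""
    x_prev = x = project(x0)
    theta_prev = 1.0
    obj_prev = f(x) + g(x)
    for _ in range(iters):
        theta = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta_prev**2))
        # projected momentum step keeps the extrapolated point feasible
        x_bar = project(x + (theta_prev - 1.0) / theta * (x - x_prev))
        gb, fb = grad(x_bar), f(x_bar)
        while True:  # shrink beta until the quadratic model majorizes f
            x_new = prox(x_bar - beta * gb, beta)
            d = x_new - x_bar
            if f(x_new) <= fb + gb @ d + (d @ d) / (2.0 * beta):
                break
            beta *= xi
        obj = f(x_new) + g(x_new)
        if obj > obj_prev:        # function restart: drop the momentum
            theta, x_new, obj = 1.0, x, obj_prev
        x_prev, x = x, x_new
        theta_prev, obj_prev = theta, obj
    return x

# Toy problem: nonnegative, l1-regularized least squares (Gaussian NLL
# as a stand-in for the Poisson NLL treated above).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
y = A @ np.maximum(rng.standard_normal(10), 0.0)
lam = 0.1
f = lambda x: 0.5 * np.sum((A @ x - y) ** 2)
grad = lambda x: A.T @ (A @ x - y)
g = lambda x: lam * np.sum(np.abs(x))
# prox of lam*||.||_1 combined with the nonnegativity constraint
prox = lambda v, b: np.maximum(v - b * lam, 0.0)
project = lambda x: np.maximum(x, 0.0)
x_hat = pnpg(f, grad, g, prox, project, np.zeros(10))
```

The backtracking test is exactly the majorization condition: the step is accepted once the local quadratic model with curvature 1/β upper-bounds the NLL at the candidate point.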
Upper-bounding the regularization constant for convex sparse signal reconstruction
Consider reconstructing a signal x by minimizing a weighted sum of a convex differentiable negative log-likelihood (NLL) (data-fidelity) term and a convex regularization term that imposes a convex-set constraint on x and enforces its sparsity using ℓ1-norm analysis regularization. We compute upper bounds on the regularization tuning constant beyond which the regularization term overwhelmingly dominates the NLL term, so that the set of minimum points of the objective function does not change. Necessary and sufficient conditions for irrelevance of sparse signal regularization and a condition for the existence of finite upper bounds are established. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized by a feasible x, and also develop an alternating direction method of multipliers (ADMM) type method for their computation. Simulation examples show that the derived and empirical bounds match.
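In the classical special case f(x) = ½‖Ax − y‖² with plain ℓ1 regularization, the regularizer is globally minimized by the feasible point x = 0, and the bound reduces to the well-known lasso value ‖∇f(0)‖∞ = ‖Aᵀy‖∞. The sketch below checks this numerically with a plain ISTA solver (the solver and names are illustrative; the abstract's ADMM-type method is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
y = rng.standard_normal(20)

# Bound for f(x) = 0.5*||A@x - y||**2 with r(x) = ||x||_1:
# since x = 0 globally minimizes r, lam_max = ||grad f(0)||_inf.
lam_max = np.linalg.norm(A.T @ y, np.inf)

def lasso(lam, iters=3000):
    """Plain proximal-gradient (ISTA) lasso solver with fixed 1/L step."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - A.T @ (A @ x - y) / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return x
```

Above `lam_max`, the regularization dominates and the minimizer collapses to the regularizer's global minimizer x = 0, so increasing the constant further no longer changes the solution set.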
Award ID(s): 1421480
NSF-PAR ID: 10023710
Date Published:
Journal Name: arXiv.org
ISSN: 2331-8422
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this


We develop a projected Nesterov’s proximal-gradient (PNPG) approach for sparse signal reconstruction that combines adaptive step size with Nesterov’s momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity (negative log-likelihood (NLL)) term and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ₁-norm penalty on the signal’s linear transform coefficients. The PNPG approach employs a projected Nesterov’s acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of the O(k⁻²) objective-function convergence rate and convergence of the iterates, which account for adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach.
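The duality-based inner iteration mentioned above can be sketched for the analysis-ℓ1 penalty λ‖Ψᵀx‖₁: writing the ℓ1 term through its dual, the proximal point is x = v − Ψu with ‖u‖∞ ≤ λ, and u solves a box-constrained quadratic program by projected gradient ascent. This is a generic textbook construction under those assumptions, not the paper's exact inner solver; the function name and transform Ψ are illustrative.

```python
import numpy as np

def prox_analysis_l1(v, Psi, lam, iters=500):
    """Duality-based inner iteration (sketch): evaluate the proximal
    mapping of lam * ||Psi.T @ x||_1 at v. The dual variable u lives
    in the box ||u||_inf <= lam, and the primal point is v - Psi @ u."""
    t = 1.0 / max(np.linalg.norm(Psi, 2) ** 2, 1e-12)  # dual step size
    u = np.zeros(Psi.shape[1])
    for _ in range(iters):
        # projected gradient ascent on the concave dual objective
        u = np.clip(u + t * (Psi.T @ (v - Psi @ u)), -lam, lam)
    return v - Psi @ u
```

A quick sanity check: with Ψ = I the analysis prox must reduce to elementwise soft-thresholding, which the dual iteration reaches after a single step.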

We study a 1-dimensional discrete signal denoising problem that consists of minimizing a sum of separable convex fidelity terms and convex regularization terms; the latter penalize the differences of adjacent signal values. This problem generalizes the total-variation regularization problem. We provide a unified approach for solving the problem with general convex fidelity and regularization functions, based on the Karush–Kuhn–Tucker optimality conditions. This approach is shown to lead to a fast algorithm for the problem with general convex fidelity and regularization functions, and a faster algorithm if, in addition, the fidelity functions are differentiable and the regularization functions are strictly convex. Both algorithms achieve the best theoretical worst-case complexity over existing algorithms for the classes of objective functions studied here. In practice, our C++ implementation of the method is also considerably faster than popular C++ nonlinear optimization solvers for the problem.
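The total-variation special case of this problem is min over x of ½‖x − y‖² + λ·Σ|x[i+1] − x[i]|. A minimal illustrative solver (not the KKT-based algorithm described above) uses the classic dual form x = y − Dᵀu with the first-difference operator D and box constraint ‖u‖∞ ≤ λ:

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=2000):
    """Illustrative 1-D TV denoiser (dual projected gradient), solving
    min_x 0.5*||x - y||^2 + lam * sum(|x[i+1] - x[i]|).
    Not the paper's KKT-based algorithm."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference operator
    t = 0.25                          # safe step: ||D||^2 <= 4 for a path graph
    u = np.zeros(n - 1)
    for _ in range(iters):
        u = np.clip(u + t * (D @ (y - D.T @ u)), -lam, lam)
    return y - D.T @ u
```

Two properties follow directly from the model and make quick checks possible: the solution preserves the mean of y (the rows of D sum to zero), and its total variation never exceeds that of y (otherwise x = y would have lower objective).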

We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario, where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements for single-material objects and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden–Fletcher–Goldfarb–Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. We establish conditions for biconvexity of the penalized NLL objective function, which, if satisfied, ensures monotonicity of the NPG-BFGS iteration. We also show that the penalized NLL objective satisfies the Kurdyka–Łojasiewicz property, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Simulation examples demonstrate the performance of the proposed scheme.
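The alternation-with-monotonicity structure described above can be sketched on a toy biconvex stand-in. This is not the paper's NPG-BFGS method: the "image" block is a small vector updated by one projected gradient step, and the "spectrum" block is a single scalar with a closed-form least-squares update; all names and the model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 6))
y = 2.0 * (A @ np.abs(rng.standard_normal(6)))   # toy "measurements"

def objective(x, s):
    """Biconvex toy cost: convex in x for fixed s and vice versa."""
    r = y - s * (A @ x)
    return 0.5 * (r @ r)

# Block coordinate descent: projected gradient step on the "image"
# block x, then exact minimization over the scalar "spectrum" block s.
x, s = np.ones(6), 1.0
objs = [objective(x, s)]
spec_norm2 = np.linalg.norm(A, 2) ** 2           # ||A||_2^2
for _ in range(200):
    g = -s * (A.T @ (y - s * (A @ x)))           # partial gradient w.r.t. x
    step = 1.0 / max(s**2 * spec_norm2, 1e-12)   # 1/L for the x-block
    x = np.maximum(x - step * g, 0.0)            # nonnegativity projection
    Ax = A @ x
    s = (y @ Ax) / max(Ax @ Ax, 1e-12)           # exact minimizer over s
    objs.append(objective(x, s))
```

Each half-step is non-increasing (a 1/L projected gradient step by the descent lemma, and an exact block minimization by construction), so the recorded objective sequence is monotone, mirroring the monotonicity property the abstract establishes for NPG-BFGS under biconvexity.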

We give nearly matching upper and lower bounds on the oracle complexity of finding ϵ-stationary points (∥∇F(x)∥ ≤ ϵ) in stochastic convex optimization. We jointly analyze the oracle complexity in both the local stochastic oracle model and the global oracle (or, statistical learning) model. This allows us to decompose the complexity of finding near-stationary points into optimization complexity and sample complexity, and reveals some surprising differences between the complexity of stochastic optimization versus learning. Notably, we show that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model. In other words, the separation in complexity between the two models can be exponential, and the folklore understanding that smoothness is required to find stationary points is only weakly true for statistical learning. Our upper bounds are based on extensions of a recent “recursive regularization” technique proposed by Allen-Zhu (2018). We show how to extend the technique to achieve near-optimal rates, and in particular show how to leverage the extra information available in the global oracle model. Our algorithm for the global model can be implemented efficiently through finite-sum methods, and suggests an interesting new computational-statistical tradeoff.