We develop a projected Nesterov’s proximal-gradient (PNPG) scheme for reconstructing sparse signals from compressive Poisson-distributed measurements whose mean signal intensity follows an affine model with a known intercept. The objective function to be minimized is a sum of a convex data-fidelity (negative log-likelihood (NLL)) term and a regularization term. We apply sparse signal regularization where the signal belongs to a nonempty closed convex set within the domain of the NLL and signal sparsity is imposed using a total-variation (TV) penalty. We present analytical upper bounds on the regularization tuning constant. The proposed PNPG method employs a projected Nesterov’s acceleration step, function restart, and an adaptive step-size selection scheme that aims at obtaining a good local majorizing function of the NLL and reducing the time spent backtracking. We establish O(k⁻²) convergence of the PNPG method with step-size backtracking only and no restart. Numerical examples demonstrate the performance of the PNPG method.
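For concreteness, one PNPG-style iteration can be sketched as follows. This is a hedged illustration, not the authors’ implementation: the callables f, grad_f, r, prox, and project (the NLL, its gradient, the regularizer, its proximal map, and the projector onto the convex set) are assumed interfaces, and only backtracking is shown, whereas the paper’s adaptive scheme also lets the step size grow.

```python
import numpy as np

def pnpg(f, grad_f, r, prox, project, x0, u, L=1.0, eta=2.0, n_iter=200):
    """Sketch of PNPG: projected Nesterov acceleration, backtracked
    majorization step, and function restart (assumed interfaces)."""
    x_prev = x = project(x0)
    t_prev, obj_prev = 1.0, np.inf
    for _ in range(n_iter):
        # projected Nesterov acceleration step
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev**2))
        y = project(x + ((t_prev - 1.0) / t) * (x - x_prev))
        g, fy = grad_f(y), f(y)
        # backtracking: grow L until the quadratic model majorizes the NLL at z
        while True:
            z = prox(y - g / L, u / L)
            d = (z - y).ravel()
            if f(z) <= fy + g.ravel() @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        obj = f(z) + u * r(z)
        if obj > obj_prev:  # function restart: drop momentum if the objective rose
            t = 1.0
        x_prev, x, t_prev, obj_prev = x, z, t, obj
    return x
```

For a nonnegativity constraint, project can be `lambda v: np.maximum(v, 0.0)`; for an ℓ₁ penalty, prox is elementwise soft thresholding.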
Upper-bounding the regularization constant for convex sparse signal reconstruction
Consider reconstructing a signal x by minimizing a weighted sum of a convex differentiable negative log-likelihood (NLL) (data-fidelity) term and a convex regularization term that imposes a convex-set constraint on x and enforces its sparsity using ℓ1-norm analysis regularization. We compute upper bounds on the regularization tuning constant beyond which the regularization term overwhelmingly dominates the NLL term, so that the set of minimum points of the objective function does not change. Necessary and sufficient conditions for irrelevance of sparse signal regularization and a condition for the existence of finite upper bounds are established. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized by a feasible x, and also develop an alternating direction method of multipliers (ADMM)-type method for their computation. Simulation examples show that the derived and empirical bounds match.
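For intuition, consider the classical unconstrained special case where the regularizer ‖x‖₁ is globally minimized by the feasible point x = 0 and the NLL is the Gaussian linear model ½‖y − Ax‖₂². The zero-subgradient condition at x = 0 then gives the familiar finite upper bound u_max = ‖Aᵀy‖_∞: for every tuning constant u ≥ u_max, x = 0 is a minimum point. A minimal numerical check of this special case (not the paper’s general ADMM-type method):

```python
import numpy as np

def lasso_u_max(A, y):
    """Smallest u for which x = 0 minimizes 0.5*||y - A x||^2 + u*||x||_1."""
    return np.max(np.abs(A.T @ y))

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
y = rng.standard_normal(30)
print(lasso_u_max(A, y))  # any u at or above this value yields the all-zero solution
```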
- Award ID(s): 1421480
- PAR ID: 10023710
- Date Published:
- Journal Name: arXiv.org
- ISSN: 2331-8422
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We develop a projected Nesterov’s proximal-gradient (PNPG) approach for sparse signal reconstruction that combines an adaptive step size with Nesterov’s momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity (negative log-likelihood (NLL)) term and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ₁-norm penalty on the signal’s linear transform coefficients. The PNPG approach employs a projected Nesterov’s acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of the O(k⁻²) objective-function convergence rate and convergence of the iterates, which account for the adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach.
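The duality-based inner iteration for the proximal mapping can be illustrated for the ℓ₁-norm analysis penalty u‖Ψᵀx‖₁. The sketch below runs projected gradient ascent on the dual, with Psi a dense matrix for simplicity; it is an assumed, simplified stand-in for the paper’s inner solver.

```python
import numpy as np

def prox_l1_analysis(v, Psi, u, n_iter=50):
    """Approximate prox of u*||Psi^T x||_1 at v: maximize the dual
    w^T Psi^T v - 0.5*||Psi w||^2 over the box ||w||_inf <= u."""
    w = np.zeros(Psi.shape[1])
    L = np.linalg.norm(Psi, 2) ** 2  # Lipschitz constant of the dual gradient
    for _ in range(n_iter):
        w = np.clip(w + Psi.T @ (v - Psi @ w) / L, -u, u)
    return v - Psi @ w  # primal solution recovered from the dual variable
```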
- Linear representation learning is widely studied due to its conceptual simplicity and empirical utility in tasks such as compression, classification, and feature extraction. Given a set of points X = [x₁, x₂, …, xₙ] ∈ ℝ^(d×n) and a vector y ∈ ℝ^d, the goal is to find coefficients w ∈ ℝⁿ so that Xw ≈ y, subject to some desired structure on w. In this work we seek a w that forms a local reconstruction of y by solving a regularized least-squares regression problem. We obtain local solutions through a locality function that promotes the use of columns of X that are close to y when used as a regularization term. We prove that, for all levels of regularization and under the mild condition that the columns of X have a unique Delaunay triangulation, the number of nonzero entries of the optimal coefficients is upper bounded by d + 1, thereby providing local sparse solutions when d ≪ n. Under the same condition, we also show that for any y contained in the convex hull of X there exists a regime of the regularization parameter in which the optimal coefficients are supported on the vertices of the Delaunay simplex containing y. This interprets the sparsity as structure obtained implicitly from the Delaunay triangulation of X. We demonstrate that our locality-regularized problem can be solved in time comparable to other methods that identify the containing Delaunay simplex.
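One plausible instantiation (an assumption for illustration; the paper’s exact locality function may differ) weights each coefficient’s ℓ₁ penalty by the squared distance of its column from y, giving a weighted lasso that plain ISTA solves with a weighted soft threshold:

```python
import numpy as np

def local_reconstruction(X, y, lam=1e-2, n_iter=500):
    """Hypothetical locality-regularized least squares:
    minimize 0.5*||X w - y||^2 + lam * sum_i ||x_i - y||^2 * |w_i|."""
    d = np.sum((X - y[:, None]) ** 2, axis=0)  # locality weight per column
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = w - X.T @ (X @ w - y) / L          # gradient step
        w = np.sign(v) * np.maximum(np.abs(v) - lam * d / L, 0.0)  # weighted shrinkage
    return w
```

Columns far from y carry large weights d_i, so their coefficients are driven to zero first, which is what produces local sparse solutions.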
- We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements for single-material objects and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. We establish conditions for biconvexity of the penalized NLL objective function, which, if satisfied, ensures monotonicity of the NPG-BFGS iteration. We also show that the penalized NLL objective satisfies the Kurdyka-Łojasiewicz property, which is important for establishing local convergence of block coordinate-descent schemes in biconvex optimization problems. Simulation examples demonstrate the performance of the proposed scheme.
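The block alternation can be sketched as follows, with nll, grad_x, and prox_tv as hypothetical interfaces for the penalized Poisson NLL, its gradient with respect to the density-map image, and a TV proximal map; the NPG step is simplified here to a fixed-step proximal-gradient update without momentum.

```python
import numpy as np
from scipy.optimize import minimize

def npg_bfgs(nll, grad_x, prox_tv, x0, theta0, u, step=1e-3, n_outer=50):
    """Block coordinate-descent sketch (assumed interfaces): an NPG-like
    image step alternating with an L-BFGS-B spectrum-parameter step."""
    x, theta = x0.copy(), theta0.copy()
    for _ in range(n_outer):
        # image block: gradient step on the NLL, TV prox, nonnegativity projection
        x = np.maximum(prox_tv(x - step * grad_x(x, theta), u * step), 0.0)
        # spectrum block: box-constrained quasi-Newton refit of the spline coefficients
        theta = minimize(lambda th: nll(x, th), theta, method="L-BFGS-B",
                         bounds=[(0.0, None)] * theta.size).x
    return x, theta
```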
- We study a 1-dimensional discrete signal denoising problem that consists of minimizing a sum of separable convex fidelity terms and convex regularization terms, where the latter penalize the differences of adjacent signal values. This problem generalizes the total-variation regularization problem. We provide a unified approach for solving the problem for general convex fidelity and regularization functions that is based on the Karush-Kuhn-Tucker optimality conditions. This approach is shown to lead to a fast algorithm for the problem with general convex fidelity and regularization functions, and a faster algorithm if, in addition, the fidelity functions are differentiable and the regularization functions are strictly convex. Both algorithms achieve the best theoretical worst-case complexity over existing algorithms for the classes of objective functions studied here. In practice, our C++ implementation of the method is also considerably faster than popular C++ nonlinear optimization solvers for the problem.
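For the quadratic-fidelity TV special case of this problem class, minimize ½‖x − y‖² + λ Σᵢ |xᵢ₊₁ − xᵢ|, a generic ADMM iteration makes the objective concrete; the paper’s KKT-based algorithm is a faster, more general solver, so this is only an illustrative baseline.

```python
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=300):
    """Generic ADMM for min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)                # (n-1) x n first-difference operator
    z, u = np.zeros(n - 1), np.zeros(n - 1)       # split variable and scaled dual
    Q = np.linalg.inv(np.eye(n) + rho * D.T @ D)  # precomputed x-update inverse
    for _ in range(n_iter):
        x = Q @ (y + rho * D.T @ (z - u))         # x-update: ridge-type solve
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # shrinkage
        u += Dx - z                               # dual ascent
    return x

# example: recover a piecewise-constant signal from noisy samples
rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x_hat = tv_denoise_admm(y, lam=0.5)
```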