 Award ID(s):
 2208386
 NSFPAR ID:
 10420377
 Date Published:
 Journal Name:
 Inverse Problems
 Volume:
 39
 Issue:
 2
 ISSN:
0266-5611
 Page Range / eLocation ID:
 025004
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

We consider a minimization problem whose objective function is the sum of a fidelity term, not necessarily convex, and a regularization term defined by a positive regularization parameter λ multiplying the ℓ₁ norm composed with a linear transform. This problem has wide applications in compressed sensing, sparse machine learning and image reconstruction. The goal of this paper is to understand what choices of the regularization parameter can dictate the level of sparsity under the transform for a global minimizer of the resulting regularized objective function. This is a critical issue, but it has been left unaddressed. We address it from a geometric viewpoint by introducing a sparsity partition of the image space of the transform. Choices of the regularization parameter are specified to ensure that a global minimizer of the corresponding regularized objective function achieves a prescribed level of sparsity under the transform. Results are obtained for the spatial-sparsity case in which the transform is the identity map, a case that covers several applications of practical importance, including machine learning, image/signal processing and medical image reconstruction.
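In the spatial-sparsity case (identity transform) with a quadratic fidelity, the role of the regularization parameter is easy to see: the global minimizer is given by soft thresholding, and larger parameters force more entries to zero. This is an illustrative sketch under an assumed quadratic fidelity, not the paper's general (possibly nonconvex) setting:

```python
import numpy as np

def soft_threshold(b, lam):
    """Global minimizer of 0.5*||x - b||^2 + lam*||x||_1 (identity transform)."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

b = np.array([3.0, -1.5, 0.4, -0.2, 2.0])
# sparsity level (number of nonzeros) shrinks as the parameter grows
counts = {lam: int(np.count_nonzero(soft_threshold(b, lam))) for lam in (0.1, 0.5, 1.6)}
```

Here `counts` records, for each parameter value, how many entries of the minimizer survive the threshold, mirroring the paper's question of which parameter choices yield a prescribed sparsity level.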

The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming at improving the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. These approximations impose a bottleneck model error and result in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term and a constraint term, where the weighting matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem, and provide a convergence analysis for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
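For reference, a minimal pixel-grid SART sweep, built from the inverse row- and column-sum weights that the paper reuses in its weighted least-squares fidelity term. This is the classical update on a toy consistent system, not the proposed CAUG method:

```python
import numpy as np

def sart_step(A, b, x, relax=1.0):
    """One SART sweep: x + relax * C * (A.T @ (R * (b - A @ x))),
    where R and C hold the inverse row and column sums of A."""
    R = 1.0 / np.where(A.sum(axis=1) > 0, A.sum(axis=1), 1.0)
    C = 1.0 / np.where(A.sum(axis=0) > 0, A.sum(axis=0), 1.0)
    return x + relax * C * (A.T @ (R * (b - A @ x)))

# toy consistent system with nonnegative entries, as in CT projection matrices
A = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
x_true = np.array([0.5, 1.5])
b = A @ x_true
x = np.zeros(2)
for _ in range(200):
    x = sart_step(A, b, x)
```

For a consistent system with nonnegative entries, repeated sweeps drive `x` toward the true solution; the paper's algorithm uses these same weights as a preconditioner inside a fixed-point proximity iteration.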

We develop a projected Nesterov proximal-gradient (PNPG) approach for sparse signal reconstruction that combines adaptive step size with Nesterov's momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity term (the negative log-likelihood, NLL) and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ₁-norm penalty on the signal's linear transform coefficients. The PNPG approach employs a projected Nesterov acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme that obtains a good local majorizing function of the NLL and reduces the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of the O(k⁻²) objective-function convergence rate and of convergence of the iterates, which account for the adaptive step size, the inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach.
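The projection-plus-momentum-plus-prox structure can be sketched for a Gaussian linear model with a nonnegativity constraint. This is an illustrative FISTA-style sketch with a fixed 1/L step and a common adaptive-restart test; the quadratic fidelity, the nonnegativity set, and the name `pnpg_sketch` are assumptions for the example, not the authors' implementation (which adapts the step size and solves the prox inexactly):

```python
import numpy as np

def pnpg_sketch(A, b, lam, iters=200):
    """Projected Nesterov proximal-gradient sketch for
    min 0.5*||A x - b||^2 + lam*||x||_1  subject to  x >= 0
    (nonnegativity stands in for the paper's convex-set constraint)."""
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum extrapolation
        g = A.T @ (A @ y - b)                        # gradient of the fidelity term
        x_new = np.maximum(y - g / L - lam / L, 0.0) # prox of lam*||.||_1 over x >= 0
        if np.dot(y - x_new, x_new - x) > 0.0:       # adaptive restart of the momentum
            t_next = 1.0
        x_prev, x, t = x, x_new, t_next
    return x
```

With `A` the identity, the prox step recovers nonnegative soft thresholding in one iteration, which makes the sketch easy to sanity-check.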

We develop a framework for reconstructing images that are sparse in an appropriate transform domain from polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident-energy spectrum are unknown. Assuming that the object that we wish to reconstruct consists of a single material, we obtain a parsimonious measurement-model parameterization by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace-integral form. The mass-attenuation spectrum is then expanded into basis functions using B-splines of order one. We consider a Poisson noise model and establish conditions for biconvexity of the corresponding negative log-likelihood (NLL) function with respect to the density-map and mass-attenuation spectrum parameters. We derive a block-coordinate descent algorithm for constrained minimization of a penalized NLL objective function, where penalty terms ensure nonnegativity of the mass-attenuation spline coefficients and nonnegativity and gradient-map sparsity of the density-map image, the latter imposed using a convex total-variation (TV) norm; the resulting objective function is biconvex. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for updating the image and a limited-memory Broyden–Fletcher–Goldfarb–Shanno with box constraints (L-BFGS-B) iteration for updating the mass-attenuation spectrum parameters. We prove the Kurdyka–Łojasiewicz property of the objective function, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Our framework applies to other NLLs and signal-sparsity penalties, such as the log-normal NLL and the ℓ₁ norm of 2D discrete wavelet transform (DWT) image coefficients.
Numerical experiments with simulated and real X-ray CT data demonstrate the performance of the proposed scheme.
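The block-coordinate alternation can be illustrated on a toy biconvex least-squares objective that mimics the density-map/spectrum split. The objective, the closed-form d-block, and the projected-gradient s-block below are all simplified assumptions for the sketch; the paper's blocks use an NPG step and L-BFGS-B on a penalized Poisson NLL:

```python
import numpy as np

def f(d, s, B, b):
    """Toy biconvex objective: 0.5 * || d * (B @ s) - b ||^2 (elementwise d)."""
    return 0.5 * np.sum((d * (B @ s) - b) ** 2)

def block_coordinate_descent(B, b, outer=30, inner=20):
    """Alternate convex block minimizations: a closed-form nonnegative
    d-block, then projected gradient steps with a safe 1/L step on the
    s-block; each block update cannot increase the objective."""
    rng = np.random.default_rng(0)
    d, s = np.ones(B.shape[0]), rng.random(B.shape[1])
    history = []
    for _ in range(outer):
        u = B @ s
        d = np.maximum(b * u / np.maximum(u * u, 1e-12), 0.0)  # exact d-block
        L = (np.linalg.norm(B, 2) * max(np.abs(d).max(), 1e-6)) ** 2
        for _ in range(inner):                                  # s-block
            g = B.T @ (d * (d * (B @ s) - b))
            s = np.maximum(s - g / L, 0.0)
        history.append(f(d, s, B, b))
    return d, s, history

B = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
b = np.array([1.0, 2.0, 0.5])
d, s, history = block_coordinate_descent(B, b)
```

The recorded objective values are nonincreasing, which is the property that the Kurdyka–Łojasiewicz argument in the paper strengthens into local convergence of the iterates.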

This paper studies several solution paths of sparse quadratic minimization problems as a function of the weighting parameter of the bi-objective of estimation loss versus solution sparsity. Three such paths are considered: the "ℓ₀ path", where the discontinuous ℓ₀ function provides the exact sparsity count; the "ℓ₁ path", where the ℓ₁ function provides a convex surrogate of the sparsity count; and the "capped ℓ₁ path", where the nonconvex nondifferentiable capped ℓ₁ function aims to enhance the ℓ₁ approximation. Serving different purposes, these three formulations differ from one another both analytically and computationally. Our results deepen the understanding of (old and new) properties of the associated paths, highlight the pros, cons, and tradeoffs of these sparse optimization models, and provide numerical evidence to support the practical superiority of the capped ℓ₁ path. Our study of the capped ℓ₁ path is interesting in its own right, as the path pertains to computable directionally stationary (that is, strongly locally minimizing in this context, as opposed to globally optimal) solutions of a parametric nonconvex nondifferentiable optimization problem. Motivated by classical parametric quadratic programming theory and reinforced by modern statistical learning studies, both casting an exponential perspective in fully describing such solution paths, we also aim to address the question of whether some of them can be fully traced in strongly polynomial time in the problem dimensions. A major conclusion of this paper is that a path of directional stationary solutions of the capped ℓ₁ regularized problem offers interesting theoretical properties and a practical compromise between the ℓ₁ path and the ℓ₀ path.
Indeed, while the ℓ₀ path is computationally prohibitive and greatly handicapped by the repeated solution of mixed-integer nonlinear programs, the quality of the ℓ₁ path, in terms of the two criteria of loss and sparsity in the estimation objective, is inferior to the capped ℓ₁ path; the latter can be obtained efficiently by a combination of a parametric pivoting-like scheme supplemented by an algorithm that takes advantage of the Z-matrix structure of the loss function.
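The three sparsity surrogates can be compared on a small vector. This is a minimal sketch; the cap parameter `a` and the normalization by `1/a` are one common way of writing the capped ℓ₁ function, assumed here for illustration:

```python
import numpy as np

def ell0(x):
    """Exact sparsity count: number of nonzero entries (discontinuous)."""
    return float(np.count_nonzero(x))

def ell1(x):
    """Convex surrogate of the sparsity count."""
    return float(np.abs(x).sum())

def capped_ell1(x, a):
    """Capped l1 surrogate sum(min(|x_i|, a)) / a: bounded (unlike l1),
    and it matches the l0 count for entries at or above the cap a."""
    return float((np.minimum(np.abs(x), a) / a).sum())

x = np.array([5.0, -2.0, 0.05, 0.0])
```

For a small cap, `capped_ell1(x, a)` counts the entries above the cap just as ℓ₀ does, while ℓ₁ grows with the magnitudes themselves; this is the sense in which the capped function "enhances" the ℓ₁ approximation.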