Abstract This paper introduces a nonconvex approach for sparse signal recovery, proposing a novel model termed the $$\tau _2$$-model, which employs the squared $$\ell _1/\ell _2$$ ratio of norms for this purpose. Our model offers an advancement over the $$\ell _0$$ norm, which is often computationally intractable and less effective in practical scenarios. Grounded in the concept of effective sparsity, our approach robustly measures the number of significant coordinates in a signal, making it a powerful alternative for sparse signal estimation. The $$\tau _2$$-model is particularly advantageous due to its computational efficiency and practical applicability. We detail two accompanying algorithms based on Dinkelbach’s procedure and a difference-of-convex-functions strategy. The first algorithm treats the model as a linearly constrained quadratic programming problem in noiseless scenarios and as a quadratically constrained quadratic programming problem in noisy scenarios. The second algorithm, capable of handling both noiseless and noisy cases, is based on the alternating direction linearized proximal method of multipliers. We also explore the model’s properties, including the existence of solutions under certain conditions, and discuss the convergence properties of the algorithms. Numerical experiments with various sensing matrices validate the effectiveness of our proposed model.
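The ratio structure of the objective suggests how Dinkelbach’s procedure applies: each outer step freezes the current ratio value and solves a parametric subproblem whose optimal value vanishes at the solution. Below is a minimal sketch in this spirit, assuming (as the “squared $$\ell _1/\ell _2$$” wording suggests) the noiseless objective $$\min \Vert x\Vert _1^2/\Vert x\Vert _2^2$$ subject to $$Ax=b$$. It is an illustration, not the paper’s implementation: the generic SLSQP local solver stands in for the quadratic programming reformulation described in the abstract, and all names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def dinkelbach_l1l2_sq(A, b, x0, outer=20, tol=1e-8):
    """Dinkelbach loop for  min ||x||_1^2 / ||x||_2^2  s.t.  Ax = b.
    Each step solves the parametric subproblem
        min_x ||x||_1^2 - c_k ||x||_2^2   s.t.  Ax = b,
    with c_k = ||x_k||_1^2 / ||x_k||_2^2, and stops once the subproblem's
    optimal value is (numerically) zero -- the classical Dinkelbach rule.
    x0 should be a feasible, nonzero start, e.g. a least-squares solution."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        c = np.linalg.norm(x, 1) ** 2 / np.dot(x, x)
        obj = lambda z, c=c: np.linalg.norm(z, 1) ** 2 - c * np.dot(z, z)
        cons = [{"type": "eq", "fun": lambda z: A @ z - b}]
        # SLSQP is a generic local solver used purely for illustration; the
        # paper instead recasts this subproblem as a quadratic program.
        res = minimize(obj, x, method="SLSQP", constraints=cons)
        x = res.x
        if abs(obj(x)) < tol:
            break
    return x
```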
A lifted $$\ell _1$$ framework for sparse recovery
Abstract We introduce a lifted $$\ell _1$$ (LL1) regularization framework for the recovery of sparse signals. The proposed LL1 regularization is a generalization of several popular regularization methods in the field and is motivated by recent advancements in re-weighted $$\ell _1$$ approaches for sparse recovery. Through a comprehensive analysis of the relationships between existing methods, we identify two distinct types of lifting functions that guarantee equivalence to the $$\ell _0$$ minimization problem, which is a key objective in sparse signal recovery. To solve the LL1 regularization problem, we propose an algorithm based on the alternating direction method of multipliers and provide proof of convergence for the unconstrained formulation. Our experiments demonstrate the improved performance of the LL1 regularization compared with state-of-the-art methods, confirming the effectiveness of our proposed framework. In conclusion, the LL1 regularization presents a promising and flexible approach to sparse signal recovery and invites further research in this area.
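As context for the lifting idea, here is a hedged sketch of the iteratively re-weighted $$\ell _1$$ scheme that the abstract cites as motivation; it is not the LL1 algorithm itself. The weight update $$w_i = 1/(|x_i|+\epsilon )$$ follows the classic Candès–Wakin–Boyd reweighting, and the inner solver is a plain proximal-gradient (ISTA) loop; parameter values are illustrative.

```python
import numpy as np

def soft(v, t):
    """Entry-wise soft-thresholding, the prox of a weighted l1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1(A, b, lam=0.1, eps=1e-2, outer=10, inner=200):
    """Candes-Wakin-Boyd style re-weighting: solve a weighted-l1 problem by
    ISTA, then refresh the weights w_i = 1 / (|x_i| + eps) and repeat."""
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # ISTA step 1/L, L = ||A||_2^2
    for _ in range(outer):
        for _ in range(inner):               # weighted-l1 proximal gradient
            x = soft(x - step * A.T @ (A @ x - b), step * lam * w)
        w = 1.0 / (np.abs(x) + eps)          # reweight from the current x
    return x
```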
- Award ID(s): 1846690
- PAR ID: 10504601
- Publisher / Repository: Oxford Academic
- Date Published:
- Journal Name: Information and Inference: A Journal of the IMA
- Volume: 13
- Issue: 1
- ISSN: 2049-8772
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract We study inexact fixed-point proximity algorithms for solving a class of sparse regularization problems involving the $$\ell _0$$ norm. Specifically, the $$\ell _0$$ model has an objective function that is the sum of a convex fidelity term and a Moreau envelope of the $$\ell _0$$ norm regularization term. Such an $$\ell _0$$ model is non-convex. Existing exact algorithms for solving these problems require closed-form formulas for the proximity operators of the convex functions involved in the objective function. When such formulas are not available, numerical computation of the proximity operator becomes inevitable, leading to inexact iteration algorithms. We investigate in this paper how the numerical error at each step of the iteration should be controlled to ensure global convergence of the inexact algorithms. We establish a theoretical result guaranteeing that the sequence generated by the proposed inexact algorithm converges to a local minimizer of the optimization problem. We implement the proposed algorithms for three applications of practical importance in machine learning and image science: regression, classification, and image deblurring. The numerical results demonstrate the convergence of the proposed algorithm and confirm that local minimizers of the $$\ell _0$$ models found by the proposed inexact algorithm outperform global minimizers of the corresponding $$\ell _1$$ models in terms of approximation accuracy and sparsity of the solutions.
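To make the role of error control concrete, the following toy sketch (our illustration, not the paper’s algorithm) runs a proximal-gradient iteration on $$\min _x \tfrac {1}{2}\Vert Ax-b\Vert _2^2+\lambda \Vert x\Vert _0$$ and injects a per-step perturbation whose budget $$\varepsilon _k \propto 1/k^2$$ is summable, echoing the kind of decreasing-error condition under which convergence can be guaranteed.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(v, t):
    """Exact prox of t*||.||_0: zero out entries whose square is <= 2*t."""
    out = v.copy()
    out[v ** 2 <= 2.0 * t] = 0.0
    return out

def inexact_ista_l0(A, b, lam, iters=300):
    """Proximal-gradient loop where the gradient is deliberately perturbed
    by a summable error budget eps_k ~ 1/k^2 (an illustrative stand-in for
    the paper's inexact proximity computations)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for k in range(1, iters + 1):
        grad = A.T @ (A @ x - b)
        eps_k = 1e-3 / k ** 2                      # summable error budget
        noisy_grad = grad + eps_k * rng.standard_normal(grad.shape)
        x = hard_threshold(x - step * noisy_grad, step * lam)
    return x
```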
Abstract This paper studies several solution paths of sparse quadratic minimization problems as a function of the weighting parameter of the bi-objective of estimation loss versus solution sparsity. Three such paths are considered: the “$$\ell _0$$-path”, where the discontinuous $$\ell _0$$-function provides the exact sparsity count; the “$$\ell _1$$-path”, where the $$\ell _1$$-function provides a convex surrogate of the sparsity count; and the “capped $$\ell _1$$-path”, where the nonconvex nondifferentiable capped $$\ell _1$$-function aims to enhance the $$\ell _1$$-approximation. Serving different purposes, each of these three formulations differs from the others, both analytically and computationally. Our results deepen the understanding of (old and new) properties of the associated paths, highlight the pros, cons, and tradeoffs of these sparse optimization models, and provide numerical evidence to support the practical superiority of the capped $$\ell _1$$-path. Our study of the capped $$\ell _1$$-path is interesting in its own right, as the path pertains to computable directionally stationary (= strongly locally minimizing in this context, as opposed to globally optimal) solutions of a parametric nonconvex nondifferentiable optimization problem. Motivated by classical parametric quadratic programming theory and reinforced by modern statistical learning studies, both casting an exponential perspective in fully describing such solution paths, we also aim to address the question of whether some of them can be fully traced in strongly polynomial time in the problem dimensions. A major conclusion of this paper is that a path of directional stationary solutions of the capped $$\ell _1$$-regularized problem offers interesting theoretical properties and a practical compromise between the $$\ell _0$$-path and the $$\ell _1$$-path. Indeed, while the $$\ell _0$$-path is computationally prohibitive and greatly handicapped by the repeated solution of mixed-integer nonlinear programs, the quality of the $$\ell _1$$-path, in terms of the two criteria (loss and sparsity) in the estimation objective, is inferior to the capped $$\ell _1$$-path; the latter can be obtained efficiently by a parametric pivoting-like scheme supplemented by an algorithm that takes advantage of the Z-matrix structure of the loss function.
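For a concrete sense of the three sparsity surrogates, the snippet below evaluates them on a toy vector; the capped $$\ell _1$$ form $$\min (|t|,a)$$ used here is one common parameterization and is assumed for illustration. Note how capping flattens the penalty beyond the threshold $$a$$, so large entries are not over-penalized the way they are under plain $$\ell _1$$.

```python
import numpy as np

def l0(x):            return np.count_nonzero(x)        # exact sparsity count
def l1(x):            return np.abs(x).sum()            # convex surrogate
def capped_l1(x, a):  return np.minimum(np.abs(x), a).sum()  # caps at a

x = np.array([3.0, 0.5, 0.0, -2.0])
print(l0(x), l1(x), capped_l1(x, a=1.0))   # 3, 5.5, 2.5
```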
Summary We provide adaptive inference methods, based on $$\ell _1$$ regularization, for regular (semiparametric) and nonregular (nonparametric) linear functionals of the conditional expectation function. Examples of regular functionals include average treatment effects, policy effects, and derivatives. Examples of nonregular functionals include average treatment effects, policy effects, and derivatives conditional on a covariate subvector fixed at a point. We construct a Neyman orthogonal equation for the target parameter that is approximately invariant to small perturbations of the nuisance parameters. To achieve this property, we include the Riesz representer for the functional as an additional nuisance parameter. Our analysis yields weak ‘double sparsity robustness’: either the approximation to the regression or the approximation to the representer can be ‘completely dense’ as long as the other is sufficiently ‘sparse’. Our main results are nonasymptotic and imply asymptotic uniform validity over large classes of models, translating into honest confidence bands for both global and local parameters.
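In constructions of this kind, the estimator of a linear functional $$\theta _0 = \mathrm {E}[m(W,g_0)]$$ takes, up to cross-fitting details we omit, the familiar debiased form sketched below, in which the estimated Riesz representer $$\hat {\alpha }$$ enters as the correction term that delivers Neyman orthogonality:

$$\hat {\theta } = \frac {1}{n}\sum _{i=1}^{n}\Big [\, m(W_i,\hat {g}) + \hat {\alpha }(X_i)\big (Y_i - \hat {g}(X_i)\big ) \Big ].$$

The first-order bias of such an estimator involves the product of the errors in $$\hat {g}$$ and $$\hat {\alpha }$$, which is what permits one nuisance to be dense provided the other is sufficiently sparse, matching the ‘double sparsity robustness’ described above.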
This letter addresses the problem of estimating a block-sparse signal with unknown group partitions in a multiple measurement vector (MMV) setup. We propose a Bayesian framework by applying an adaptive total variation (TV) penalty on the hyper-parameter space of the sparse signal. The main contributions are two-fold. 1) We extend the TV penalty beyond the immediate neighbor, thus enabling better capture of the signal structure. 2) A dynamic framework is provided to learn the regularization weights for the TV penalty based on the statistical dependencies between the entries of tentative blocks, thus eliminating the need for fine-tuning. The superior performance of the proposed method is demonstrated empirically through extensive computer simulations against state-of-the-art benchmarks. The proposed solution exhibits both excellent performance and robustness against sparsity model mismatch.
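A hedged sketch of the ingredient being generalized: a weighted total-variation penalty on the hyper-parameters $$\gamma $$ of the sparse prior, extended beyond the immediate neighbor as the letter’s first contribution describes. The window length and the uniform weights here are placeholders; the letter instead learns the weights from the statistical dependencies between entries of tentative blocks.

```python
import numpy as np

def extended_tv(gamma, W):
    """Illustrative penalty (not the letter's exact form): W[i, j-1] weights
    the pair (i, i+j), so columns j = 1..width reach beyond the immediate
    neighbor -- the extension the letter proposes."""
    n, width = len(gamma), W.shape[1]
    total = 0.0
    for j in range(1, width + 1):
        total += np.sum(W[: n - j, j - 1] * np.abs(gamma[j:] - gamma[:-j]))
    return total

gamma = np.array([0.1, 2.0, 2.1, 1.9, 0.0, 0.1])   # block-like hyper-params
W = np.ones((len(gamma), 2))                       # uniform weights, window 2
print(extended_tv(gamma, W))
```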