Sparsity finds applications in diverse areas such as statistics, machine learning, and signal processing. Computations over sparse structures are less complex than over their dense counterparts and require less storage. This paper proposes a heuristic method for retrieving sparse approximate solutions of optimization problems via minimizing the $\ell_p$ quasi-norm, where $0<p<1$.
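For intuition about why a $\ell_p$ quasi-norm with $0<p<1$ promotes sparsity: it penalizes many small entries more heavily than a few large ones, unlike the $\ell_1$ norm. A minimal sketch (the vectors and $p$ value are illustrative, not from the paper):

```python
def lp_quasi_norm(x, p):
    """Sum of |x_i|^p for 0 < p < 1; approaches the l0 count as p -> 0."""
    assert 0 < p < 1
    return sum(abs(v) ** p for v in x)

sparse = [3.0, 0.0, 0.0, 0.0]
dense = [0.75, 0.75, 0.75, 0.75]  # same l1 norm (3.0) as `sparse`
# With p = 0.5 the sparse vector is cheaper, even though the l1 norms tie,
# so minimizing the quasi-norm favors solutions with few nonzero entries.
assert lp_quasi_norm(sparse, 0.5) < lp_quasi_norm(dense, 0.5)
```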
NSF-PAR ID: 10489902
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: EURASIP Journal on Advances in Signal Processing
Volume: 2024
Issue: 1
ISSN: 1687-6180
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

Abstract Ramanujan’s partition congruences modulo $\ell \in \{5, 7, 11\}$ assert that $$p(\ell n+\delta _{\ell })\equiv 0\pmod {\ell },$$ where $0<\delta _{\ell }<\ell $ satisfies $24\delta _{\ell }\equiv 1\pmod {\ell }$. By proving Subbarao’s conjecture, Radu showed that there are no such congruences when it comes to parity: there are infinitely many odd (resp. even) partition numbers in every arithmetic progression. For primes $\ell \ge 5$, we give a new proof of the conclusion that there are infinitely many $m$ for which $p(\ell m+\delta _{\ell })$ is odd. This proof uses a generalization, due to the second author and Ramsey, of a result of Mazur in his classic paper on the Eisenstein ideal. We also refine a classical criterion of Sturm for modular form congruences, which allows us to show that the smallest such $m$ satisfies $m<(\ell ^2-1)/24$, representing a significant improvement to the previous bound.
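The congruences themselves are easy to verify numerically. A small sketch using Euler's pentagonal-number recurrence for $p(n)$ (a standard tool, not part of this abstract); the values $\delta_5=4$, $\delta_7=5$, $\delta_{11}=6$ follow from $24\delta_\ell \equiv 1 \pmod{\ell}$:

```python
def partition_numbers(N):
    """p(0..N) via Euler's pentagonal-number recurrence:
    p(n) = sum_{k>=1} (-1)^(k+1) [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        k, total = 1, 0
        while k * (3 * k - 1) // 2 <= n:
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(200)
# Ramanujan's congruences: delta_5 = 4 since 24 * 4 = 96 = 1 (mod 5), etc.
assert all(p[5 * n + 4] % 5 == 0 for n in range(39))
assert all(p[7 * n + 5] % 7 == 0 for n in range(27))
assert all(p[11 * n + 6] % 11 == 0 for n in range(17))
```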
Abstract We study inexact fixed-point proximity algorithms for solving a class of sparse regularization problems involving the $\ell _0$ norm. Specifically, the $\ell _0$ model has an objective function that is the sum of a convex fidelity term and a Moreau envelope of the $\ell _0$ norm regularization term. Such an $\ell _0$ model is nonconvex. Existing exact algorithms for solving such problems require closed-form formulas for the proximity operators of the convex functions in the objective; when such formulas are not available, numerical computation of the proximity operator becomes inevitable, leading to inexact iteration algorithms. We investigate in this paper how the numerical error at each step of the iteration should be controlled to ensure global convergence of the inexact algorithms. We establish a theoretical result guaranteeing that the sequence generated by the proposed inexact algorithm converges to a local minimizer of the optimization problem. We implement the proposed algorithms for three applications of practical importance in machine learning and image science: regression, classification, and image deblurring. The numerical results demonstrate the convergence of the proposed algorithm and confirm that local minimizers of the $\ell _0$ models found by the proposed inexact algorithm outperform global minimizers of the corresponding $\ell _1$ models in terms of approximation accuracy and sparsity of the solutions.
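For context, the regularization piece of such models does admit a closed form: the proximity operator of $\lambda\|\cdot\|_0$ is componentwise hard thresholding. It is the fidelity term's proximity operator that may lack a formula, producing the inexactness discussed in the abstract. A minimal sketch of the closed-form piece:

```python
import math

def prox_l0(x, lam):
    """Proximity operator of lam * ||.||_0: hard thresholding.
    Entries with |x_i| <= sqrt(2 * lam) are zeroed (at exact equality
    both 0 and x_i minimize; we pick 0)."""
    t = math.sqrt(2 * lam)
    return [v if abs(v) > t else 0.0 for v in x]

# lam = 0.5 gives threshold sqrt(1) = 1: entries of magnitude <= 1 vanish.
assert prox_l0([2.0, 0.9, -1.5, 0.3], 0.5) == [2.0, 0.0, -1.5, 0.0]
```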
Abstract This paper studies several solution paths of sparse quadratic minimization problems as a function of the weighing parameter of the bi-objective of estimation loss versus solution sparsity. Three such paths are considered: the “$\ell _0$ path,” where the discontinuous $\ell _0$ function provides the exact sparsity count; the “$\ell _1$ path,” where the $\ell _1$ function provides a convex surrogate of the sparsity count; and the “capped $\ell _1$ path,” where the nonconvex nondifferentiable capped $\ell _1$ function aims to enhance the $\ell _1$ approximation. Serving different purposes, each of these three formulations differs from the others, both analytically and computationally. Our results deepen the understanding of (old and new) properties of the associated paths, highlight the pros, cons, and tradeoffs of these sparse optimization models, and provide numerical evidence to support the practical superiority of the capped $\ell _1$ path. Our study of the capped $\ell _1$ path is interesting in its own right, as the path pertains to computable directionally stationary (= strongly locally minimizing in this context, as opposed to globally optimal) solutions of a parametric nonconvex nondifferentiable optimization problem. Motivated by classical parametric quadratic programming theory and reinforced by modern statistical learning studies, both casting an exponential perspective in fully describing such solution paths, we also aim to address the question of whether some of them can be fully traced in strongly polynomial time in the problem dimensions. A major conclusion of this paper is that a path of directionally stationary solutions of the capped $\ell _1$ regularized problem offers interesting theoretical properties and a practical compromise between the $\ell _0$ path and the $\ell _1$ path. Indeed, while the $\ell _0$ path is computationally prohibitive and greatly handicapped by the repeated solution of mixed-integer nonlinear programs, the quality of the $\ell _1$ path, in terms of the two criteria (loss and sparsity) in the estimation objective, is inferior to the capped $\ell _1$ path; the latter can be obtained efficiently by a parametric pivoting-like scheme supplemented by an algorithm that takes advantage of the Z-matrix structure of the loss function.
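The relationship among the three penalties can be seen on a single vector. A sketch (the cap parameter and normalization below are one common convention, chosen here for illustration, not necessarily the paper's exact parametrization):

```python
def l0(x):
    """Exact sparsity count: number of nonzero entries."""
    return sum(1 for v in x if v != 0)

def l1(x):
    """Convex surrogate: sum of absolute values."""
    return sum(abs(v) for v in x)

def capped_l1(x, a):
    """Capped l1 penalty: each coordinate contributes min(|x_i|, a) / a,
    so it saturates at 1 per nonzero (like l0) but is l1-like near 0."""
    return sum(min(abs(v), a) / a for v in x)

x = [5.0, 0.2, 0.0]
assert l0(x) == 2
assert abs(l1(x) - 5.2) < 1e-9
assert abs(capped_l1(x, 1.0) - 1.2) < 1e-9  # 5.0 saturates to 1, 0.2 stays 0.2
# As the cap a shrinks, the capped l1 value approaches the exact l0 count.
assert abs(capped_l1(x, 0.1) - 2.0) < 1e-9
```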
Abstract The double differential cross sections of the Drell–Yan lepton pair ($\ell ^+\ell ^-$, dielectron or dimuon) production are measured as functions of the invariant mass $m_{\ell \ell }$, transverse momentum $p_{\textrm{T}} (\ell \ell )$, and $\varphi ^{*}_{\eta }$. The $\varphi ^{*}_{\eta }$ observable, derived from angular measurements of the leptons and highly correlated with $p_{\textrm{T}} (\ell \ell )$, is used to probe the low-$p_{\textrm{T}} (\ell \ell )$ region in a complementary way. Dilepton masses up to 1 TeV are investigated. Additionally, a measurement is performed requiring at least one jet in the final state. To benefit from partial cancellation of the systematic uncertainty, the ratios of the differential cross sections for various $m_{\ell \ell }$ ranges to those in the Z mass peak interval are presented. The collected data correspond to an integrated luminosity of 36.3 $\text {fb}^{-1}$ of proton–proton collisions recorded with the CMS detector at the LHC at a centre-of-mass energy of 13 TeV. Measurements are compared with predictions based on perturbative quantum chromodynamics, including soft-gluon resummation.
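The $\varphi^{*}_{\eta}$ observable is not defined in the abstract; the sketch below assumes the standard definition from the collider literature, $\varphi^{*}_{\eta} = \tan(\varphi_{\text{acop}}/2)\,\sin\theta^{*}_{\eta}$ with $\varphi_{\text{acop}} = \pi - |\Delta\varphi|$ and $\cos\theta^{*}_{\eta} = \tanh(\Delta\eta/2)$, which uses only lepton directions and is therefore better resolved than $p_{\textrm{T}}(\ell\ell)$ at low values:

```python
import math

def phi_star_eta(phi_minus, eta_minus, phi_plus, eta_plus):
    """phi*_eta of a lepton pair, assuming the standard definition:
    tan(phi_acop / 2) * sin(theta*_eta), with phi_acop = pi - |dphi|
    and cos(theta*_eta) = tanh((eta^- - eta^+) / 2)."""
    dphi = abs(phi_minus - phi_plus)
    if dphi > math.pi:  # wrap the azimuthal difference into [0, pi]
        dphi = 2 * math.pi - dphi
    phi_acop = math.pi - dphi
    cos_theta = math.tanh((eta_minus - eta_plus) / 2)
    sin_theta = math.sqrt(1 - cos_theta ** 2)
    return math.tan(phi_acop / 2) * sin_theta

# Perfectly back-to-back leptons at equal eta: phi_acop = 0, so phi*_eta = 0.
assert phi_star_eta(0.0, 1.0, math.pi, 1.0) == 0.0
```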
Abstract We study the sparsity of the solutions to systems of linear Diophantine equations with and without nonnegativity constraints. The sparsity of a solution vector is the number of its nonzero entries, which is referred to as the $\ell _0$ norm of the vector. Our main results are new improved bounds on the minimal $\ell _0$ norm of solutions to systems $A\varvec{x}=\varvec{b}$, where $A\in \mathbb {Z}^{m\times n}$, $\varvec{b}\in \mathbb {Z}^m$, and $\varvec{x}$ is either a general integer vector (lattice case) or a nonnegative integer vector (semigroup case). In certain cases, we give polynomial-time algorithms for computing solutions with $\ell _0$ norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over $\mathbb {R}$ to other subdomains such as $\mathbb {Z}$. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
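The quantity being bounded, the minimal $\ell_0$ norm over integer solutions, can be illustrated with a toy brute-force search over a bounded box (the matrix, right-hand side, and box bound below are made up for illustration; real instances are exactly what makes this NP-hard in general):

```python
from itertools import product

def min_l0_solution(A, b, lo=0, hi=3):
    """Brute-force the sparsest x with A @ x == b over the integer box
    [lo, hi]^n (nonnegative by default, i.e. the semigroup case)."""
    m, n = len(A), len(A[0])
    best = None
    for x in product(range(lo, hi + 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) == b[i] for i in range(m)):
            nnz = sum(1 for v in x if v != 0)
            if best is None or nnz < best[0]:
                best = (nnz, x)
    return best

A = [[1, 1, 2], [0, 1, 1]]
b = [4, 2]
# x = (0, 0, 2) solves the system with a single nonzero entry.
assert min_l0_solution(A, b) == (1, (0, 0, 2))
```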