Title: Optimizing Sparsity over Lattices and Semigroups
Motivated by problems in optimization, we study the sparsity of the solutions to systems of linear Diophantine equations and linear integer programs, i.e., the number of non-zero entries of a solution, which is often referred to as the ℓ_0-norm. Our main results are improved bounds on the ℓ_0-norm of sparse solutions to systems Ax = b, where A ∈ ℤ^{m×n}, b ∈ ℤ^m, and x is either a general integer vector (lattice case) or a non-negative integer vector (semigroup case). In the lattice case and certain scenarios of the semigroup case, we give polynomial-time algorithms for computing solutions with ℓ_0-norm satisfying the obtained bounds.
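To make the ℓ_0-norm objective concrete, here is a minimal brute-force sketch (not one of the paper's algorithms) that finds a sparsest non-negative integer solution of a toy system Ax = b; the matrix A, the right-hand side b, and the search bound BOUND are made-up illustrative values.

```python
# Minimal sketch (not the paper's algorithm): brute-force search for a
# sparsest non-negative integer solution x of A x = b, i.e. a solution
# minimizing the l0-norm = number of nonzero entries (semigroup case).
# A, b and BOUND are made-up toy values for illustration only.
import itertools
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 2]])     # A in Z^{2x3}
b = np.array([7, 4])          # b in Z^2
BOUND = 10                    # search x with entries in {0, ..., BOUND}

best = None                   # (solution, l0-norm) of sparsest solution found so far
for cand in itertools.product(range(BOUND + 1), repeat=A.shape[1]):
    x = np.array(cand)
    if np.array_equal(A @ x, b):              # feasibility: A x = b
        l0 = int(np.count_nonzero(x))         # l0-norm of x
        if best is None or l0 < best[1]:
            best = (x, l0)

print(best)   # for this toy instance: (array([0, 2, 1]), 2)
```

In the lattice case the entries of x may also be negative, so the search set changes; the paper's contribution is bounds on the minimal ℓ_0-norm and, in some cases, polynomial-time algorithms, rather than enumeration of this kind.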
Award ID(s):
1934568
PAR ID:
10348840
Author(s) / Creator(s):
Editor(s):
Bienstock, D.
Date Published:
Journal Name:
Integer Programming and Combinatorial Optimization. IPCO 2020. Lecture Notes in Computer Science
Volume:
12125
Page Range / eLocation ID:
40 - 51
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    We study the sparsity of the solutions to systems of linear Diophantine equations with and without non-negativity constraints. The sparsity of a solution vector is the number of its nonzero entries, which is referred to as the ℓ_0-norm of the vector. Our main results are new improved bounds on the minimal ℓ_0-norm of solutions to systems Ax = b, where A ∈ ℤ^{m×n}, b ∈ ℤ^m, and x is either a general integer vector (lattice case) or a non-negative integer vector (semigroup case). In certain cases, we give polynomial-time algorithms for computing solutions with ℓ_0-norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over ℝ to other subdomains such as ℤ. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.

     
  2. Abstract

    We study families of analytic semigroups acting on a Banach space and depending on a parameter, and we give sufficient conditions for the existence of norm bounds that are uniform with respect to the parameter, using spectral properties of the respective semigroup generators. In particular, we use estimates of the resolvent operators of the generators along vertical segments to estimate the growth/decay rate of the norm for the family of analytic semigroups. These results are applied to prove the Lyapunov linear stability of planar traveling waves of systems of reaction–diffusion equations, and of the bidomain equation, which is important in electrophysiology.

     
  3. Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this work, we observe that these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short. We formalize lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. We apply our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and demonstrate that our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches. We carried out extensive experiments using our estimation and solving framework, which we also make available with this work. 
  4. We consider the problem of finding a two-layer neural network with sigmoid, rectified linear unit (ReLU), or binary step activation functions that "fits" a training data set as accurately as possible as quantified by the training error, and study the following question: does a low training error guarantee that the norm of the output layer (outer norm) itself is small? We answer this question affirmatively for the case of non-negative output weights. Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data necessarily has a well-controlled outer norm. Notably, our results (a) have a polynomial (in d) sample complexity, (b) are independent of the number of hidden units (which can potentially be very high), (c) are oblivious to the training algorithm, and (d) require quite mild assumptions on the data (in particular, the input vector X ∈ ℝ^d need not have independent coordinates). We then leverage our bounds to establish generalization guarantees for such networks through the fat-shattering dimension, a scale-sensitive measure of the complexity class that the network architectures we investigate belong to. Notably, our generalization bounds also have good sample complexity (polynomials in d with a low degree), and are in fact near-linear for some important cases of interest.
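    The question above relates the training error to the size of the output layer; the toy sketch below (not taken from the paper) simply makes the two quantities concrete for a two-layer ReLU network with non-negative output weights. Reading the "outer norm" as the ℓ_1-norm of the output-layer weights, as well as all dimensions and data, are illustrative assumptions.

```python
# Minimal sketch (not the paper's construction): a two-layer ReLU network with
# non-negative output weights, its empirical training error, and its "outer
# norm", taken here as the l1-norm of the output-layer weights (an assumption
# made for illustration).  All dimensions and data are made-up toy values.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 20, 100                    # input dim, hidden units, samples

W = rng.normal(size=(m, d))             # hidden-layer weights
a = rng.uniform(0.0, 1.0, size=m)       # non-negative output weights

def net(X):
    """Two-layer network f(x) = sum_j a_j * relu(w_j . x)."""
    return np.maximum(X @ W.T, 0.0) @ a

X = rng.normal(size=(n, d))             # toy inputs
y = net(X) + 0.1 * rng.normal(size=n)   # toy labels near the network's output

train_error = np.mean((net(X) - y) ** 2)   # empirical squared-loss training error
outer_norm = np.sum(np.abs(a))             # l1-norm of the output layer

print(f"training error = {train_error:.4f}, outer norm = {outer_norm:.2f}")
```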