Title: Tight decomposition functions for mixed monotonicity
Mixed monotonicity is a property of a system's vector field stating that the vector field admits a decomposition function, in a lifted space, with certain order-preserving properties. It has recently been shown that this property allows one to efficiently over-approximate the system's one-step reachable set with a hyperinterval, which is obtained by evaluating the vector field's decomposition function at two points. Such decomposition functions are usually not unique, and some decompositions may not be tight in the sense that the resulting hyperintervals are not the smallest ones containing the exact one-step reachable set, which leads to conservative over-approximation. In this paper, we show that for a general class of functions there exists a tight decomposition, which can be implicitly constructed as the solution of certain optimization problems. This implies that any function from ℝⁿ to ℝᡐ (hence any forward complete system) is mixed monotone. However, the usefulness of the constructed tight decompositions is limited by the fact that it might not be possible to evaluate them efficiently. We show that under certain conditions, the tight decompositions reduce to functions with explicit expressions that can be directly evaluated. This result suggests that it is not mixed monotonicity itself, but additional properties, which lead to explicitly evaluatable decomposition functions, that enable efficient and tight hyperinterval over-approximation of reachable sets.
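To make the two-point evaluation concrete, here is a minimal Python sketch for a hypothetical two-dimensional map (the example system and its decomposition function are illustrative choices, not constructions from the paper): on the domain x₁ ≥ 0, the function d below agrees with f on the diagonal, is nondecreasing in its first argument and nonincreasing in its second, and the hyperinterval over-approximation is obtained by evaluating d at just two points.

```python
import numpy as np

# Hypothetical 2-D map (illustrative only): f is not monotone because of
# the -0.1*x1**2 term, but it admits a decomposition function d on x1 >= 0.
def f(x):
    x1, x2 = x
    return np.array([x1 + 0.1 * x2, x2 - 0.1 * x1 ** 2])

def d(x, xhat):
    # d(x, x) = f(x); nondecreasing in x and nonincreasing in xhat
    # (componentwise) as long as x1, xhat1 >= 0.
    x1, x2 = x
    xhat1, _ = xhat
    return np.array([x1 + 0.1 * x2, x2 - 0.1 * xhat1 ** 2])

# One-step over-approximation of the image of the interval [lo, hi]:
# two evaluations of d give the corners of the hyperinterval.
lo, hi = np.array([0.0, -1.0]), np.array([1.0, 1.0])
print(d(lo, hi), d(hi, lo))  # lower and upper corners; tight for this example
```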
Award ID(s):
1553873
NSF-PAR ID:
10132223
Author(s) / Creator(s):
Date Published:
Journal Name:
58th IEEE Conference on Decision and Control (CDC)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Given a set of facilities and clients, and costs to open facilities, the classic facility location problem seeks to open a set of facilities and assign each client to one open facility so as to minimize the cost of opening the chosen facilities plus the total distance of the clients to their assigned open facilities. Such an objective may induce an unequal cost over certain socioeconomic groups of clients (i.e., in the total distance traveled by clients in such a group). This matters when planning the location of socially relevant facilities such as emergency rooms. In this work, we consider a fair version of the problem in which we are given π‘Ÿ client groups that partition the set of clients, and the distance of a group is defined as the average distance of clients in the group to their respective open facilities. The objective is to minimize the Minkowski 𝑝-norm of the vector of group distances, penalizing high access costs to open facilities across the π‘Ÿ groups of clients. This generalizes classic facility location (𝑝 = 1) and the minimization of the maximum group distance (𝑝 = ∞). However, in practice, fairness criteria may not be explicit or even known to a decision maker, and it is often unclear how to select a specific 𝑝 to model the cost of unfairness. To get around this, we study the notion of solution portfolios, where for a fixed problem instance we seek a small portfolio of solutions such that for any Minkowski norm 𝑝, one of these solutions is an 𝑂(1)-approximation. Using the geometric relationship between various 𝑝-norms, we show the existence of a portfolio of cardinality 𝑂(log π‘Ÿ), and a lower bound of Ξ©(√(log π‘Ÿ)). There may be no common structure across the different solutions in such a portfolio, which can make planning difficult if the notion of fairness changes over time or if the budget to open facilities is disbursed over time. For example, small changes in 𝑝 could lead to a completely different set of open facilities in the portfolio. Inspired by this, we introduce the notion of refinement, which is a family of solutions, one for each 𝑝-norm, satisfying a combinatorial property. This property requires that (1) the set of facilities open for a higher 𝑝-norm must be a subset of the facilities open for a lower 𝑝-norm, and (2) all clients assigned to an open facility for a lower 𝑝-norm must be assigned to the same open facility for any higher 𝑝-norm. A refinement is 𝛼-approximate if the solution for each 𝑝-norm problem is an 𝛼-approximation for it. We show that it suffices to consider only 𝑂(log π‘Ÿ) norms, instead of all 𝑝-norms 𝑝 ∈ [1, ∞], to construct refinements. A natural greedy algorithm for the problem gives a poly(π‘Ÿ)-approximate refinement, which we improve to poly(π‘Ÿ^(1/√(log π‘Ÿ)))-approximate using a recursive algorithm. We improve this ratio to 𝑂(log π‘Ÿ) for the special case of tree metrics with uniform facility opening costs. Our recursive algorithm extends to other settings, including a hierarchical facility location problem that models facility location at several levels, such as public works departments and schools. A full version of this paper can be found at https://arxiv.org/abs/2211.14873.
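As a concrete reading of the objective above, the following Python sketch (toy distances and groups; facility opening costs omitted) forms the vector of per-group average distances and evaluates its Minkowski 𝑝-norm for a few values of 𝑝.

```python
import numpy as np

# Toy instance: each client's distance to its assigned open facility,
# and a partition of the clients into r = 2 groups (made-up numbers).
dist = np.array([1.0, 3.0, 2.0, 8.0, 4.0])
groups = [np.array([0, 1]), np.array([2, 3, 4])]

# Vector of group distances: average distance within each group.
group_dist = np.array([dist[g].mean() for g in groups])

def objective(p):
    # Minkowski p-norm of the group-distance vector; p = inf penalizes
    # only the worst-off group, p = 1 sums the group averages.
    if np.isinf(p):
        return group_dist.max()
    return np.linalg.norm(group_dist, p)

for p in [1.0, 2.0, np.inf]:
    print(p, objective(p))
```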
  2. Abstract

    This paper is motivated by studying differential brain activities in response to multiple experimental condition presentations in intracranial electroencephalography (iEEG) experiments. Contrasting effects of experimental conditions are often zero in most regions and nonzero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, where interest focuses not only on estimating nonparametric functions but also on recovering their supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform sparsity of functions into sparsity of a coefficient vector of increasing dimension, and propose a fast nonconvex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the L2 norm and exhibit a phase transition phenomenon. For support estimation, we derive a convergence rate under the L∞ norm that leads to a selection consistency property under Ξ΄-sparsity, and obtain a result under strict sparsity using a simple sufficient regularity condition. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulations and an application to a novel iEEG data set to study multisensory integration.
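To illustrate the two ingredients named above, here is a small Python sketch (made-up sizes and groupings; this is not the paper's estimator or its nested-ADMM solver): a B-spline expansion turns local sparsity of a coefficient function into block sparsity of its basis coefficients, and a weighted group bridge penalty is evaluated on overlapping coefficient groups.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis with 10 functions on [0, 1] (made-up sizes).
k, n_basis = 3, 10
knots = np.concatenate([[0.0] * k, np.linspace(0, 1, n_basis - k + 1), [1.0] * k])
tgrid = np.linspace(0, 1, 200)
B = BSpline.design_matrix(tgrid, knots, k).toarray()  # 200 x 10 basis matrix

# Zeroing the first coefficient block makes beta vanish on a subinterval
# near t = 0: sparsity of the function becomes sparsity of the vector b.
b = np.zeros(n_basis)
b[6:] = [1.0, -0.5, 0.8, 0.3]
beta = B @ b

def group_bridge(b, groups, weights, gamma=0.5):
    # Weighted group bridge penalty: sum_g w_g * (sum_{j in g} |b_j|)^gamma,
    # with 0 < gamma < 1 to induce group-level sparsity.
    return sum(w * np.abs(b[g]).sum() ** gamma for g, w in zip(groups, weights))

# Overlapping consecutive groups: basis functions active on each knot span
# (one plausible grouping for local support recovery).
groups = [np.arange(j, j + k + 1) for j in range(n_basis - k)]
print(group_bridge(b, groups, weights=np.ones(len(groups))))
```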

     
  3. Hazay, Carmit ; Stam, Martijn (Ed.)
    We study the computational problem of finding a shortest non-zero vector in a rotation of ℀ⁿ, which we call β„€SVP. It has been a long-standing open problem to determine whether a polynomial-time algorithm for β„€SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve β„€SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in 2^(𝑛+π‘œ(𝑛)) time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for β„€SVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if β„€SVP actually is hard, then what consequences would follow? Our results are as follows. (1) We show that β„€SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce β„€SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of β„€SVP from SVP. As a consequence of this reduction, we obtain a 2^(𝑛/2+π‘œ(𝑛))-time algorithm for β„€SVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices: semi-stable lattices with not-too-large πœ†β‚.) (2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) β„€SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of ℀ⁿ from either a lattice with all non-zero vectors longer than √(𝑛/log 𝑛) or a lattice with smoothing parameter significantly smaller than the smoothing parameter of ℀ⁿ. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that β€œβ„€βΏ has the largest smoothing parameter.” (3) We show a distribution of bases 𝐁 for rotations of ℀ⁿ such that, if β„€SVP is hard for any input basis, then β„€SVP is hard on input 𝐁. This gives a satisfying theoretical resolution to the problem of sampling hard bases for ℀ⁿ, which was studied by Blanks and Miller [9]. This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices [15], and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in [5, 11, 20] in different contexts.) (4) We perform experiments to determine how practical basis reduction performs on bases of ℀ⁿ that are generated in different ways, and how heuristic sieving algorithms perform on ℀ⁿ. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the β€œprovably hard” distribution of bases described above. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on ℀ⁿ.
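As a minimal illustration of the setup (not the β€œprovably hard” basis distribution from the paper), the Python sketch below samples a basis of a rotation of ℀ⁿ as B = QU, with Q (approximately) Haar-orthogonal and U unimodular. The Gram matrix Bα΅€B equals Uα΅€U, so the rotation is invisible to basis-reduction algorithms, and every shortest non-zero lattice vector has length exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Orthogonal Q via QR of a Gaussian matrix (Haar up to sign conventions).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Unimodular U: start from the identity and apply random elementary row
# operations, which keep det(U) = +-1 (coefficients kept small on purpose).
U = np.eye(n, dtype=np.int64)
for _ in range(25):
    i, j = rng.choice(n, size=2, replace=False)
    U[i] += int(rng.choice([-1, 1])) * U[j]

B = Q @ U                              # basis of a rotation of Z^n
G = B.T @ B
print(np.abs(G - U.T @ U).max())       # ~0: the Gram matrix hides Q

# A shortest non-zero lattice vector has length 1: B @ x = Q @ e_0.
x = np.rint(np.linalg.solve(U, np.eye(n)[0])).astype(np.int64)
print(np.linalg.norm(B @ x))           # 1.0
```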
  4. We present microscopic, multiple Landau level, (frustration-free and positive semi-definite) parent Hamiltonians whose ground states, realizing different quantum Hall fluids, are parton-like and whose excitations display either Abelian or non-Abelian braiding statistics. We prove ground state energy monotonicity theorems for systems with different particle numbers in multiple Landau levels, demonstrate S-duality in the case of toroidal geometry, and establish complete sets of zero modes of special Hamiltonians stabilizing parton-like states, specifically at filling factor Ξ½ = 2/3. The emergent Entangled Pauli Principle (EPP), introduced in [Phys. Rev. B 98, 161118(R) (2018)] and which defines the β€œDNA” of the quantum Hall fluid, is behind the exact determination of the topological characteristics of the fluid, including charge and braiding statistics of excitations, and effective edge theory descriptions. When the closed-shell condition is satisfied, the densest (i.e., highest density and lowest total angular momentum) zero-energy mode is a unique parton state. We conjecture that parton-like states generally span the subspace of many-body wave functions with the two-body 𝑀-clustering property within any given number of Landau levels, that is, wave functions with 𝑀th-order coincidence plane zeroes and both holomorphic and anti-holomorphic dependence on variables. General arguments are supplemented by rigorous considerations for the 𝑀 = 3 case of fermions in four Landau levels. For this case, we establish that the zero mode counting can be done by enumerating certain patterns consistent with an underlying EPP. We apply the coherent state approach of [Phys. Rev. X 1, 021015 (2011)] to show that the elementary (localized) bulk excitations are Fibonacci anyons. This demonstrates that the DNA associated with fractional quantum Hall states encodes all universal properties. Specifically, for parton-like states, we establish a link with tensor network structures of finite bond dimension that emerge via root level entanglement.

     
  5.
    A graph G is called self-ordered (a.k.a. asymmetric) if the identity permutation is its only automorphism. Equivalently, there is a unique isomorphism from G to any graph that is isomorphic to G. We say that G = (V, E) is robustly self-ordered if the size of the symmetric difference between E and the edge-set of the graph obtained by permuting V using any permutation Ο€ : V β†’ V is proportional to the number of non-fixed points of Ο€. In this work, we initiate the study of the structure, construction, and utility of robustly self-ordered graphs. We show that robustly self-ordered bounded-degree graphs exist (in abundance), and that they can be constructed efficiently, in a strong sense. Specifically, given the index of a vertex in such a graph, it is possible to find all its neighbors in polynomial time (i.e., in time that is poly-logarithmic in the size of the graph). We provide two very different constructions, in tools and structure. The first, a direct construction, is based on proving a sufficient condition for robust self-ordering, which requires that an auxiliary graph, on pairs of vertices of the original graph, is expanding. In this case the original graph is not only robustly self-ordered but also expanding. The second construction proceeds in three steps: it boosts the mere existence of robustly self-ordered graphs, which provides explicit graphs of sub-logarithmic size, to an efficient construction of polynomial-size graphs, and then, repeating it again, to exponential-size (robustly self-ordered) graphs that are locally constructible. This construction can yield robustly self-ordered graphs that are either expanders or highly disconnected, having logarithmic-size connected components. We also consider graphs of unbounded degree, seeking correspondingly unbounded robustness parameters. We again demonstrate that such graphs (of linear degree) exist (in abundance), and that they can be constructed efficiently, in a strong sense. This turns out to require very different tools. Specifically, we show that the construction of such graphs reduces to the construction of non-malleable two-source extractors with very weak parameters but with some additional natural features. We actually show two reductions, one simpler than the other but yielding a less efficient construction when combined with the known constructions of extractors. We demonstrate that robustly self-ordered bounded-degree graphs are useful towards obtaining lower bounds on the query complexity of testing graph properties, both in the bounded-degree and the dense graph models. Indeed, their robustness offers efficient, local, and distance-preserving reductions from testing problems on ordered structures (like sequences) to the unordered (effectively unlabeled) graphs. One of the results that we obtain, via such a reduction, is a sub-exponential separation between the query complexities of testing and tolerant testing of graph properties in the bounded-degree graph model. Changes to previous version: We retract the claims made in our initial posting regarding the construction of non-malleable two-source extractors (which are quasi-orthogonal) as well as the claims about the construction of relocation-detecting codes (see Theorems 1.5 and 1.6 in the original version). The source of trouble is a fundamental flaw in the proof of Lemma 9.7 (in the original version), which may as well be wrong. Hence, the original Section 9 was omitted, except that the original Section 9.3 was retained as a new Section 8.3. The original Section 8 appears as Sections 8.0 and 8.1, and Section 8.2 is new.
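To make the robustness condition concrete, here is a brute-force Python sketch, feasible only for tiny graphs: for every permutation Ο€ of V it compares the symmetric difference |E Ξ” Ο€(E)| to the number of non-fixed points of Ο€. The 6-vertex toy graph below (a triangle with two pendant paths) is asymmetric, so the minimum ratio over non-trivial Ο€ is positive; robust self-ordering asks for this minimum to remain bounded away from 0 as the graphs grow.

```python
import itertools

# Toy 6-vertex asymmetric graph: triangle {0,1,2} with a path 2-3-4 and a
# pendant edge 1-5. Edges stored as sorted pairs.
V = range(6)
E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 5)}

def permuted(E, pi):
    # Image of the edge-set under the vertex permutation pi.
    return {tuple(sorted((pi[u], pi[v]))) for (u, v) in E}

ratios = []
for pi in itertools.permutations(V):
    moved = sum(pi[v] != v for v in V)   # number of non-fixed points
    if moved:
        ratios.append(len(E ^ permuted(E, pi)) / moved)

# Positive for every non-trivial pi, since the graph is asymmetric;
# robustness asks this minimum to be bounded below by a constant.
print(min(ratios))
```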