

Title: Tight decomposition functions for mixed monotonicity
Mixed monotonicity is a property of a system’s vector field that says the vector field admits a decomposition function, in a lifted space, with certain order-preserving properties. It was recently shown that this property allows one to efficiently over-approximate the system’s one-step reachable set with a hyperinterval, obtained by evaluating the vector field’s decomposition function at two points. Such decomposition functions are usually not unique, and some decompositions may not be tight in the sense that the resulting hyperintervals are not the smallest ones containing the exact one-step reachable set, which leads to conservative over-approximation. In this paper, we show that for a general class of functions there exists a tight decomposition, which can be implicitly constructed as the solution of certain optimization problems. This implies that any function from R^n to R^m (hence any forward complete system) is mixed monotone. However, the usefulness of the constructed tight decomposition functions is limited by the fact that it might not be possible to evaluate them efficiently. We show that, under certain conditions, the tight decompositions reduce to functions with explicit expressions that can be directly evaluated. This result suggests that it is not mixed monotonicity itself, but rather the extra properties that lead to explicitly evaluatable decomposition functions, that enable efficient and tight hyperinterval over-approximation of reachable sets.
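To make the two-point evaluation concrete, the following is a minimal illustrative sketch (not code from the paper) for a linear update, for which a decomposition function is known in closed form; the names A_plus, A_minus, and d are only for illustration.

```python
# Illustrative sketch (not code from the paper): hyperinterval over-approximation
# of the one-step reachable set of a linear update x+ = A x.
# For f(x) = A x, one valid decomposition function is
#   d(x, xhat) = A_plus @ x + A_minus @ xhat,
# where A_plus keeps the nonnegative entries of A and A_minus the negative ones.
import numpy as np

A = np.array([[0.9, -0.4],
              [0.3,  0.8]])
A_plus = np.maximum(A, 0.0)    # makes d nondecreasing in its first argument
A_minus = np.minimum(A, 0.0)   # makes d nonincreasing in its second argument

def d(x, xhat):
    # Decomposition function: d(x, x) = A @ x recovers the vector field.
    return A_plus @ x + A_minus @ xhat

# Hyperinterval [x_lo, x_hi] of initial states.
x_lo = np.array([-1.0, 0.0])
x_hi = np.array([ 1.0, 0.5])

# Two evaluations of d give a hyperinterval containing the one-step reachable set;
# for a linear map this particular decomposition happens to be tight.
reach_lo = d(x_lo, x_hi)
reach_hi = d(x_hi, x_lo)
print(reach_lo, reach_hi)
```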
Award ID(s):
1553873
NSF-PAR ID:
10132223
Author(s) / Creator(s):
;
Date Published:
Journal Name:
58th IEEE Conference on Decision and Control (CDC)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. N. Matni, M. Morari (Ed.)
    This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers. Given a neural network, we use an existing verification algorithm to construct inclusion functions for its input-output behavior. Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system. This embedding provides a scalable approach for safety analysis of the neural control loop while preserving the nonlinear structure of the system. We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system. We design an algorithm that leverages this computational advantage through partitioning strategies, improving our reachable set estimates while balancing runtime with tunable parameters. We demonstrate the performance of this algorithm through two case studies. First, we demonstrate the method’s strength in complex nonlinear environments. Then, we show that our approach matches the performance of the state-of-the-art verification algorithm for linear discretized systems.
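    As a rough illustration of the embedding idea (a hedged sketch under stated assumptions, not the authors' implementation): for a discrete-time closed loop x+ = A x + B u with u = pi(x), suppose pi_incl stands in for a neural-network inclusion function that maps a state interval to a control interval; a single trajectory of the 2n-dimensional embedding system below then produces hyper-rectangular over-approximations of the reachable sets. The names pi_incl and embed_step are hypothetical.

```python
# Hedged sketch (assumptions, not the authors' implementation): a discrete-time
# closed loop x+ = A x + B u with u = pi(x), where pi_incl is a stand-in for a
# neural-network inclusion function mapping a state interval to a control interval.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Ap, Am = np.maximum(A, 0.0), np.minimum(A, 0.0)
Bp, Bm = np.maximum(B, 0.0), np.minimum(B, 0.0)

def pi_incl(x_lo, x_hi):
    # Placeholder inclusion function for the controller (here a saturated linear
    # feedback u = clip(-k * x1, -1, 1)); an NN verifier would supply this bound.
    k = 0.5
    lo, hi = sorted((-k * x_hi[0], -k * x_lo[0]))
    return np.clip(np.array([lo]), -1, 1), np.clip(np.array([hi]), -1, 1)

def embed_step(x_lo, x_hi):
    # One step of the 2n-dimensional embedding system; a single trajectory of
    # this system yields hyper-rectangles containing the closed-loop reach sets.
    u_lo, u_hi = pi_incl(x_lo, x_hi)
    nxt_lo = Ap @ x_lo + Am @ x_hi + Bp @ u_lo + Bm @ u_hi
    nxt_hi = Ap @ x_hi + Am @ x_lo + Bp @ u_hi + Bm @ u_lo
    return nxt_lo, nxt_hi

x_lo, x_hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])
for _ in range(10):
    x_lo, x_hi = embed_step(x_lo, x_hi)
print(x_lo, x_hi)  # componentwise bounds on states reachable after 10 steps
```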
  2. Abstract

    This paper will study almost everywhere behaviors of functions on partition spaces of cardinals possessing suitable partition properties. Almost everywhere continuity and monotonicity properties for functions on partition spaces will be established. These results will be applied to distinguish the cardinality of certain subsets of the power set of partition cardinals.

    The following summarizes the main results proved under suitable partition hypotheses.

    If $\kappa$ is a cardinal, $\epsilon < \kappa$, $\mathrm{cof}(\epsilon) = \omega$, $\kappa \rightarrow_* (\kappa)^{\epsilon \cdot \epsilon}_2$ and $\Phi : [\kappa]^\epsilon_* \rightarrow \mathrm{ON}$, then $\Phi$ satisfies the almost everywhere short length continuity property: There is a club $C \subseteq \kappa$ and a $\delta < \epsilon$ so that for all $f,g \in [C]^\epsilon_*$, if $f \upharpoonright \delta = g \upharpoonright \delta$ and $\sup(f) = \sup(g)$, then $\Phi(f) = \Phi(g)$.

    If $\kappa$ is a cardinal, $\epsilon$ is countable, $\kappa \rightarrow_* (\kappa)^{\epsilon \cdot \epsilon}_2$ holds and $\Phi : [\kappa]^\epsilon_* \rightarrow \mathrm{ON}$, then $\Phi$ satisfies the strong almost everywhere short length continuity property: There is a club $C \subseteq \kappa$ and finitely many ordinals $\delta_0, ..., \delta_k \leq \epsilon$ so that for all $f,g \in [C]^\epsilon_*$, if for all $0 \leq i \leq k$, $\sup(f \upharpoonright \delta_i) = \sup(g \upharpoonright \delta_i)$, then $\Phi(f) = \Phi(g)$.

    If $\kappa$ satisfies $\kappa \rightarrow_* (\kappa)^\kappa_2$, $\epsilon \leq \kappa$ and $\Phi : [\kappa]^\epsilon_* \rightarrow \mathrm{ON}$, then $\Phi$ satisfies the almost everywhere monotonicity property: There is a club $C \subseteq \kappa$ so that for all $f,g \in [C]^\epsilon_*$, if for all $\alpha < \epsilon$, $f(\alpha) \leq g(\alpha)$, then $\Phi(f) \leq \Phi(g)$.

    Suppose dependent choice ($\mathsf{DC}$), ${\omega_1} \rightarrow_* ({\omega_1})^{\omega_1}_2$ and the almost everywhere short length club uniformization principle for ${\omega_1}$ hold. Then every function $\Phi : [{\omega_1}]^{\omega_1}_* \rightarrow {\omega_1}$ satisfies a finite continuity property with respect to closure points: Let $\mathfrak{C}_f$ be the club of $\alpha < {\omega_1}$ so that $\sup(f \upharpoonright \alpha) = \alpha$. There is a club $C \subseteq {\omega_1}$ and finitely many functions $\Upsilon_0, ..., \Upsilon_{n-1} : [C]^{\omega_1}_* \rightarrow {\omega_1}$ so that for all $f \in [C]^{\omega_1}_*$, for all $g \in [C]^{\omega_1}_*$, if $\mathfrak{C}_g = \mathfrak{C}_f$ and for all $i < n$, $\sup(g \upharpoonright \Upsilon_i(f)) = \sup(f \upharpoonright \Upsilon_i(f))$, then $\Phi(g) = \Phi(f)$.

    Suppose $\kappa$ satisfies $\kappa \rightarrow_* (\kappa)^\epsilon_2$ for all $\epsilon < \kappa$. For all $\chi < \kappa$, $[\kappa]^{<\kappa}$ does not inject into ${}^\chi \mathrm{ON}$, the class of $\chi$-length sequences of ordinals, and therefore, $|[\kappa]^\chi| < |[\kappa]^{<\kappa}|$. As a consequence, under the axiom of determinacy ($\mathsf{AD}$), these two cardinality results hold when $\kappa$ is one of the following weak or strong partition cardinals of determinacy: ${\omega_1}$, $\omega_2$, $\boldsymbol{\delta}_n^1$ (for all $1 \leq n < \omega$) and $\boldsymbol{\delta}^2_1$ (assuming in addition $\mathsf{DC}_{\mathbb{R}}$).

     
  3. Given a set of facilities and clients, and costs to open facilities, the classic facility location problem seeks to open a set of facilities and assign each client to one open facility so as to minimize the cost of opening the chosen facilities and the total distance of the clients to their assigned open facilities. Such an objective may induce an unequal cost over certain socioeconomic groups of clients (i.e., the total distance traveled by clients in such a group). This is important when planning the location of socially relevant facilities such as emergency rooms. In this work, we consider a fair version of the problem where we are given 𝑟 client groups that partition the set of clients, and the distance of a given group is defined as the average distance of clients in the group to their respective open facilities. The objective is to minimize the Minkowski 𝑝-norm of the vector of group distances, to penalize high access costs to open facilities across the 𝑟 groups of clients. This generalizes classic facility location (𝑝 = 1) and the minimization of the maximum group distance (𝑝 = ∞). However, in practice, fairness criteria may not be explicit or even known to a decision maker, and it is often unclear how to select a specific "𝑝" to model the cost of unfairness. To get around this, we study the notion of solution portfolios where, for a fixed problem instance, we seek a small portfolio of solutions such that for any Minkowski norm 𝑝, one of these solutions is an 𝑂(1)-approximation. Using the geometric relationship between various 𝑝-norms, we show the existence of a portfolio of cardinality 𝑂(log 𝑟), and a lower bound of Ω(√log 𝑟). There may not be common structure across different solutions in this portfolio, which can make planning difficult if the notion of fairness changes over time or if the budget to open facilities is disbursed over time. For example, small changes in 𝑝 could lead to a completely different set of open facilities in the portfolio. Inspired by this, we introduce the notion of refinement, which is a family of solutions, one for each 𝑝-norm, satisfying a combinatorial property. This property requires that (1) the set of facilities opened for a higher 𝑝-norm must be a subset of the facilities opened for a lower 𝑝-norm, and (2) all clients assigned to an open facility for a lower 𝑝-norm must be assigned to the same open facility for any higher 𝑝-norm. A refinement is 𝛼-approximate if the solution for each 𝑝-norm problem is an 𝛼-approximation for it. We show that it is sufficient to consider only 𝑂(log 𝑟) norms, instead of all 𝑝-norms 𝑝 ∈ [1, ∞], to construct refinements. A natural greedy algorithm for the problem gives a poly(𝑟)-approximate refinement, which we improve to poly(𝑟^{1/√log 𝑟})-approximate using a recursive algorithm. We improve this ratio to 𝑂(log 𝑟) for the special case of tree metrics with uniform facility opening costs. Our recursive algorithm extends to other settings, including a hierarchical facility location problem that models facility location at several levels, such as public works departments and schools. A full version of this paper can be found at https://arxiv.org/abs/2211.14873.
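    For concreteness, one hedged reading of the objective (the exact handling of facility opening costs follows the paper; the notation $G_j$, $\sigma$, $c_f$ here is only illustrative): with client groups $G_1, \dots, G_r$, open facilities $F$ with opening costs $c_f$, and an assignment $\sigma$ of clients to open facilities, the distance of group $j$ is $D_j = \frac{1}{|G_j|}\sum_{i \in G_j} d(i, \sigma(i))$, and the fair objective is to minimize $\sum_{f \in F} c_f + \big(\sum_{j=1}^{r} D_j^{\,p}\big)^{1/p}$, i.e., the opening cost plus the Minkowski $p$-norm of the group-distance vector $(D_1, \dots, D_r)$.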
  4. Abstract

    This paper is motivated by studying differential brain activities to multiple experimental condition presentations in intracranial electroencephalography (iEEG) experiments. Contrasting effects of experimental conditions are often zero in most regions and nonzero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest focused not only on estimating nonparametric functions but also on recovering their supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform the sparsity of functions into its sparse vector counterpart of increasing dimension, and propose a fast nonconvex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the L2 norm and resemble a phase transition phenomenon. For support estimation, we derive a convergence rate under the norm that leads to a selection consistency property under δ-sparsity, and obtain a result under strict sparsity using a simple sufficient regularity condition. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulations and an application to a novel iEEG data set to study multisensory integration.
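    To spell out the key reduction from function sparsity to vector sparsity (a hedged sketch; the symbols $b_{jk}$, $B_k$, $w_k$, $\gamma$ are illustrative rather than the paper's exact notation): each coefficient function is expanded as $\beta_j(t) = \sum_{k=1}^{K} b_{jk} B_k(t)$ in a B-spline basis, and because each $B_k$ has compact support, $\beta_j$ vanishes on a subinterval exactly when the block of coefficients whose basis functions cover that subinterval is zero. Local sparsity of $\beta_j$ therefore becomes group sparsity of the coefficient vector, which a weighted group bridge penalty of the form $\lambda \sum_{g} \big(\sum_{k \in g} w_k |b_{jk}|\big)^{\gamma}$ with $0 < \gamma < 1$ is designed to recover.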

     
  5. Hazay, Carmit ; Stam, Martijn (Ed.)
    We study the computational problem of finding a shortest non-zero vector in a rotation of ℤ^n, which we call ℤSVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for ℤSVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve ℤSVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in 2^(n+o(n)) time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for ℤSVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if ℤSVP actually is hard, then what consequences would follow? Our results are as follows. We show that ℤSVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce ℤSVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of ℤSVP from SVP. As a consequence of this reduction, we obtain a 2^(n/2+o(n))-time algorithm for ℤSVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices: semi-stable lattices with not-too-large λ_1.) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) ℤSVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of ℤ^n from either a lattice with all non-zero vectors longer than √(n/log n) or a lattice with smoothing parameter significantly smaller than the smoothing parameter of ℤ^n. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that “ℤ^n has the largest smoothing parameter.” We show a distribution of bases B for rotations of ℤ^n such that, if ℤSVP is hard for any input basis, then ℤSVP is hard on input B. This gives a satisfying theoretical resolution to the problem of sampling hard bases for ℤ^n, which was studied by Blanks and Miller [9]. This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices [15], and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in [5, 11, 20] in different contexts.) We perform experiments to determine how practical basis reduction performs on bases of ℤ^n that are generated in different ways, and how heuristic sieving algorithms perform on ℤ^n. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the “provably hard” distribution of bases described above. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on ℤ^n.