Title: Distributed Black-Box Optimization via Error Correcting Codes
We introduce a novel distributed derivative-free optimization framework that is resilient to stragglers. The proposed method employs coded search directions at which the objective function is evaluated, and a decoding step to find the next iterate. Our framework can be seen as an extension of evolution strategies and structured exploration methods, where structured search directions were utilized. As an application, we consider black-box adversarial attacks on deep convolutional neural networks. Our numerical experiments demonstrate a significant improvement in computation times.
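To make the idea concrete, here is a minimal Python sketch of straggler-resilient derivative-free optimization in the spirit described above: search directions are encoded by a redundant matrix, each worker evaluates the objective along one coded direction, and a least-squares decode recovers a gradient estimate from whichever k responses arrive first. The Gaussian encoding matrix, finite-difference decoder, and straggler model here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def coded_gradient_estimate(f, x, n_workers, k, h=1e-4, seed=None):
    """Straggler-resilient finite-difference gradient estimate."""
    rng = np.random.default_rng(seed)
    d = x.size
    # Structured (orthonormal) exploration directions, as in ES variants.
    U, _ = np.linalg.qr(rng.standard_normal((d, d)))
    # Encoding matrix: any d of its n_workers rows are full rank w.h.p.,
    # so the decode below tolerates up to n_workers - k stragglers.
    C = rng.standard_normal((n_workers, d)) / np.sqrt(d)
    D = C @ U                        # one coded search direction per worker
    f0 = f(x)
    # Simulate stragglers: only the k "fastest" responses arrive.
    arrived = rng.choice(n_workers, size=k, replace=False)
    y = np.array([(f(x + h * D[i]) - f0) / h for i in arrived])
    # Each y_i is approximately D_i . grad f = (C_S (U grad f))_i,
    # so decode by least squares and undo the rotation by U.
    z, *_ = np.linalg.lstsq(C[arrived], y, rcond=None)
    return U.T @ z

# Toy run: gradient descent on a quadratic using only coded evaluations.
f = lambda v: float(np.sum(v ** 2))
x = np.ones(10)
for _ in range(200):
    x -= 0.1 * coded_gradient_estimate(f, x, n_workers=20, k=12)
```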
Award ID(s):
1838179
PAR ID:
10128379
Author(s) / Creator(s):
;
Date Published:
Journal Name:
2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Page Range / eLocation ID:
246 to 252
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. An influence diagram is a graphical representation of sequential decision-making under uncertainty, defining a structured decision problem by conditional probability functions and additive utility functions over discrete state and action variables. The task of finding the maximum expected utility of influence diagrams is closely related to cost-optimal probabilistic planning, stochastic programming, and model-based reinforcement learning. In this position paper, we address heuristic search for solving influence diagrams, where we generate admissible heuristic functions from graph decomposition schemes. We then demonstrate how such heuristics can guide an AND/OR branch-and-bound search. Finally, we briefly discuss future directions for improving the quality of heuristic functions and search strategies.
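As a concrete illustration of the search strategy in the abstract above, here is a minimal sketch of depth-first branch-and-bound guided by an admissible (optimistic) heuristic. The problem encoding and bound are illustrative assumptions, not the paper's AND/OR algorithm.

```python
def branch_and_bound(assign, variables, domains, utility, heuristic, best):
    """Depth-first search that prunes with an optimistic bound."""
    if len(assign) == len(variables):
        return max(best, utility(assign))
    # An admissible heuristic never underestimates the best completion,
    # so heuristic(assign) <= best means this subtree cannot improve.
    if heuristic(assign) <= best:
        return best
    var = variables[len(assign)]
    for val in domains[var]:
        best = branch_and_bound({**assign, var: val}, variables,
                                domains, utility, heuristic, best)
    return best

# Toy problem: maximize a + 2b over binary variables.
variables = ["a", "b"]
domains = {"a": [0, 1], "b": [0, 1]}
utility = lambda s: s["a"] + 2 * s["b"]
heuristic = lambda s: sum(s.values()) + 3   # optimistic for this toy
print(branch_and_bound({}, variables, domains, utility, heuristic,
                       float("-inf")))      # -> 3
```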
  2. Multicopy search structures such as log-structured merge (LSM) trees are optimized for high insert/update/delete (collectively known as upsert) performance. In such data structures, an upsert on key k, which adds (k, v) where v can be a value or a tombstone, is added to the root node even if k is already present in other nodes. Thus there may be multiple copies of k in the search structure. A search on k aims to return the value associated with the most recent upsert. We present a general framework for verifying linearizability of concurrent multicopy search structures that abstracts from the underlying representation of the data structure in memory, enabling proof reuse across diverse implementations. Based on our framework, we propose template algorithms for (a) LSM structures forming arbitrary directed acyclic graphs and (b) differential file structures, and formally verify these templates in the concurrent separation logic Iris. We also instantiate the LSM template to obtain the first verified concurrent in-memory LSM tree implementation.
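The upsert/search semantics above are easy to see in a toy Python model of a multicopy structure: upserts always land in the root level, stale copies may linger in older levels, and a search returns the value of the most recent upsert. The names (MulticopyMap, TOMBSTONE, flush) are illustrative, and no compaction or concurrency is modeled.

```python
TOMBSTONE = object()   # marks a deletion, as described in the abstract

class MulticopyMap:
    def __init__(self):
        self.levels = [{}]            # levels[0] is the root (newest)

    def upsert(self, k, v=TOMBSTONE):
        self.levels[0][k] = v         # added even if k exists below

    def flush(self):
        self.levels.insert(0, {})     # freeze the root, start a new one

    def search(self, k):
        for level in self.levels:     # scan newest to oldest
            if k in level:
                v = level[k]
                return None if v is TOMBSTONE else v
        return None

m = MulticopyMap()
m.upsert("x", 1); m.flush(); m.upsert("x", 2)
assert m.search("x") == 2             # most recent upsert wins
m.upsert("x")                         # tombstone: delete "x"
assert m.search("x") is None
```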
  3. Templated graphical models (TGMs) encode model structure using rules that capture recurring relationships between multiple random variables. While the rules in TGMs are interpretable, it is not clear how they can be used to generate explanations for the individual predictions of the model. Further, learning these rules from data comes with high computational costs: it typically requires an expensive combinatorial search over the space of rules and repeated optimization over rule weights. In this work, we propose a new structure learning algorithm, Explainable Structured Model Search (ESMS), that learns a templated graphical model and an explanation framework for its predictions. ESMS uses a novel search procedure to efficiently search the space of models and discover models that trade off predictive accuracy and explainability. We introduce the notion of relational stability and prove that our proposed explanation framework is stable. Further, our proposed piecewise pseudolikelihood (PPLL) objective does not require re-optimizing the rule weights across models during each iteration of the search. In our empirical evaluation on three real-world datasets, we show that our proposed approach not only discovers models that are explainable, but also significantly outperforms existing state-of-the-art structure learning approaches.
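For intuition about the accuracy/explainability trade-off driving the search above, here is a minimal greedy sketch: candidate rules are added one at a time whenever they improve a score that penalizes model size. The scoring function, penalty weight, and greedy strategy are illustrative assumptions; ESMS's actual search procedure and PPLL objective are more sophisticated.

```python
def greedy_structure_search(candidate_rules, score, alpha=0.1):
    """Greedily grow a rule set, charging alpha per rule for complexity."""
    model, best = [], float("-inf")
    improved = True
    while improved:
        improved = False
        for rule in candidate_rules:
            if rule in model:
                continue
            trial = model + [rule]
            val = score(trial) - alpha * len(trial)  # accuracy - complexity
            if val > best:
                best, model, improved = val, trial, True
    return model
```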
  4. Persistent homology is a tool that can be employed to summarize the shape of data by quantifying homological features. When the data is an object in R^d, the (augmented) persistent homology transform ((A)PHT) is a family of persistence diagrams, parameterized by directions in the ambient space. A recent advance in understanding the PHT used the framework of reconstruction to find a finite set of directions that faithfully represent the shape, a result of both theoretical and practical interest. In this paper, we build on that result and present an improved algorithm for graph reconstruction and, more generally, one-skeleton reconstruction. The improvement comes in reconstructing the edges, where we use a radial binary (multi-)search. The binary search takes advantage of the fact that the edges can be ordered radially with respect to a reference plane, a feature unique to graphs.
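The radial ordering that enables the binary search above can be illustrated in a few lines of Python: directions from a reference point sort by angle, so a query direction is located in logarithmic time. This is a simplification under assumed 2D coordinates; the paper's radial binary (multi-)search over edges is more involved.

```python
import bisect
import math

def radial_order(points, ref):
    """Angles of each point as seen from ref, sorted for binary search."""
    return sorted(math.atan2(y - ref[1], x - ref[0]) for x, y in points)

def locate(angles, query_angle):
    """Index where query_angle falls in the sorted radial order."""
    return bisect.bisect_left(angles, query_angle)

angles = radial_order([(1, 0), (0, 1), (-1, 0)], ref=(0, 0))
print(locate(angles, math.pi / 4))    # between angles 0 and pi/2 -> 1
```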
  5. We present ARQ, a systematic framework for creating cryptographic schemes that handle range aggregate queries (sum, minimum, median, and mode) over encrypted datasets. Our framework does not rely on trusted hardware or specialized cryptographic primitives such as property-preserving or homomorphic encryption. Instead, ARQ unifies structures from the plaintext data management community with existing structured encryption primitives. We prove that such combinations yield efficient (and secure) constructions in the encrypted setting. We also propose a series of domain reduction techniques that can improve the space efficiency of our schemes on sparse datasets at the cost of small leakage. As part of this work, we designed and implemented a new, open-source encrypted search library called Arca, and implemented the ARQ framework using this library in order to evaluate ARQ's practicality. Our experiments on real-world datasets demonstrate the efficiency of the schemes derived from ARQ in comparison to prior work.
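On the data-management side, the kind of plaintext structure a framework like ARQ builds on can be sketched simply: a prefix-sum array answers range-sum aggregates in constant time, and the framework would store such a structure under structured encryption rather than in the clear. The encryption layer is omitted entirely here, so this shows only the plaintext aggregate structure, not ARQ's constructions.

```python
from itertools import accumulate

class RangeSum:
    """Prefix sums: range-sum aggregates in O(1) after O(n) setup."""
    def __init__(self, values):
        self.prefix = [0, *accumulate(values)]

    def query(self, lo, hi):
        return self.prefix[hi] - self.prefix[lo]   # sum of values[lo:hi]

rs = RangeSum([3, 1, 4, 1, 5])
assert rs.query(1, 4) == 6            # 1 + 4 + 1
```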