Title: First-Order Logic for Flow-Limited Authorization
We present the Flow-Limited Authorization First-Order Logic (FLAFOL), a logic for reasoning about authorization decisions in the presence of information-flow policies. We formalize the FLAFOL proof system, characterize its proof-theoretic properties, and develop its security guarantees. In particular, FLAFOL is the first logic to provide a non-interference guarantee while supporting all connectives of first-order logic. Furthermore, this guarantee is the first to combine the notions of non-interference from both authorization logic and information-flow systems. All the theorems in this paper are proven in Coq.
Award ID(s):
1750060
NSF-PAR ID:
10249861
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 IEEE 33rd Computer Security Foundations Symposium (CSF)
Page Range / eLocation ID:
123 to 138
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Faceted execution is a linguistic paradigm for dynamic information-flow control with the distinguishing feature that program values may be faceted. Such values represent multiple versions, or facets, at once, for different security labels. This enables policy-agnostic programming: a paradigm permitting expressive privacy policies to be declared independently of program logic. Although faceted execution prevents information leakage at runtime, it does not guarantee the absence of failure due to policy violations. By contrast with static mechanisms (such as security type systems), dynamic information-flow control permits arbitrarily expressive and dynamic privacy policies but imposes significant runtime overhead and delays discovery of any possible violations. In this paper, we present two different abstract interpretations for faceted execution in the presence of first-class policies. We first present an abstraction which allows one to reason statically about the shape of facets at each program point. This abstraction is useful for statically proving the absence of runtime errors and eliminating runtime checks related to facets. Reasoning statically about the contents of faceted values, however, is complicated by the presence of first-class security labels, especially because abstract labels may conflate more than one runtime label. To address these issues, we also develop a more precise abstraction that relies on an analysis tracking singleton heap abstractions. We present an implementation of our coarse abstraction in Racket and demonstrate its performance on several sample programs. We conclude by showing how our precise domain can be used to verify information-flow properties. (A minimal sketch of faceted values appears after this list.)
  2. The densest subgraph problem (DSG), in its simplest form, is the following: given an undirected graph $G=(V,E)$, find a subset $S \subseteq V$ of vertices that maximizes the ratio $|E(S)|/|S|$, where $E(S)$ is the set of edges with both endpoints in $S$. DSG and several of its variants are well studied in theory and practice and have many applications in data mining and network analysis. In this paper we study fast algorithms and structural aspects of DSG through the lens of supermodularity. For this we consider the densest supermodular subset problem (DSSP): given a non-negative supermodular function $f: 2^V \rightarrow \mathbb{R}_+$, maximize $f(S)/|S|$. For DSG we describe a simple flow-based algorithm that outputs a $(1-\epsilon)$-approximation in deterministic $\tilde{O}(m/\epsilon)$ time, where $m$ is the number of edges. Our algorithm is the first to have a near-linear dependence on $m$ and $1/\epsilon$, and it improves on previous methods based on an LP relaxation. It generalizes to hypergraphs and also yields a faster algorithm for directed DSG. Greedy peeling algorithms have been very popular for DSG and several variants due to their efficiency, empirical performance, and worst-case approximation guarantees. We describe a simple peeling algorithm for DSSP and analyze its approximation guarantee in a fashion that unifies several existing results. Boob et al. [bgpstww-20] developed an iterative peeling algorithm for DSG which appears to work very well in practice, and made a conjecture about its convergence to optimality. We affirmatively answer their conjecture, and in fact prove that a natural generalization of their algorithm converges to a $(1-\epsilon)$-approximation for any supermodular function $f$; the key to our proof is an LP formulation derived via the Lovász extension of a supermodular function. For DSG the bound on the number of iterations we prove is $O(\frac{\Delta \ln |V|}{\lambda^*}\cdot \frac{1}{\epsilon^2})$, where $\Delta$ is the maximum degree and $\lambda^*$ is the optimum value. Our work suggests that iterative peeling can be an effective heuristic for several objectives considered in the literature. Finally, we show that the $2$-approximation for densest-at-least-$k$ subgraph [ks-09] extends to the supermodular setting. We also give a unified analysis of the peeling algorithm for this problem, and via this analysis derive an approximation guarantee for a generalization of DSSP to maximize $f(S)/g(|S|)$ for a concave function $g$. (A short sketch of the basic greedy peeling step appears after this list.)
  3. Mining pools decrease the variance in the income of cryptocurrency miners (compared to solo mining) by distributing rewards to participating miners according to the shares submitted over a period of time. The most common definition of a “share” is a proof-of-work for a difficulty level lower than that required for block authorization—for example, a hash with at least 65 leading zeroes (in binary) rather than at least 75. The first contribution of this paper is to investigate more sophisticated approaches to pool reward distribution that use multiple classes of shares—for example, corresponding to differing numbers of leading zeroes—and assign different rewards to shares from different classes. What’s the best way to use such finer-grained information, and how much can it help? We prove that the answer is not at all: using the additional information can only increase the variance in rewards experienced by every miner. Our second contribution is to identify variance-optimal reward-sharing schemes. Here, we first prove that pay-per-share rewards simultaneously minimize the variance of all miners over all reward-sharing schemes with long-run rewards proportional to miners’ hash rates. We then show that, if we impose natural restrictions, including a no-deficit condition, on reward-sharing schemes, then the pay-per-last-N-shares method is optimal. (A toy simulation contrasting pay-per-share and proportional rewards appears after this list.)
  4. One of the standard correctness criteria for gradual typing is the dynamic gradual guarantee, which ensures that loosening type annotations in a program does not affect its behavior in arbitrary ways. As prior work has pointed out, however, this natural guarantee does not hold for any gradual type system for information-flow control. Toro et al.'s GSLRef language, for example, had to abandon it to validate noninterference. We show that we can resolve this conflict by avoiding a feature of prior proposals: type-guided classification, or the use of type ascription to classify data. Gradual languages require run-time secrecy labels to enforce security dynamically; if type ascription merely checks these labels without modifying them (that is, without classifying data), it cannot violate the dynamic gradual guarantee. We demonstrate this idea with GLIO, a gradual type system based on the LIO library that enforces both the gradual guarantee and noninterference, featuring higher-order functions, general references, coarse-grained information-flow control, security subtyping, and first-class labels. We give the language a domain-theoretic semantics, using Pitts' framework of relational structures to prove noninterference and the dynamic gradual guarantee. (A small sketch of checking vs. classifying ascription appears after this list.)
  5. In this paper, we study the problem of formal verification for Answer Set Programming (ASP), namely, obtaining a formal proof showing that the answer sets of a given (non-ground) logic program P correctly correspond to the solutions to the problem encoded by P, regardless of the problem instance. To this end, we use a formal specification language based on ASP modules, so that each module can be proved to capture some informal aspect of the problem in isolation. This specification language relies on a novel definition of (possibly nested, first-order) program modules that may incorporate local hidden atoms at different levels. Verifying the logic program P then amounts to proving a certain kind of equivalence between P and its modular specification.
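The faceted-values idea in item 1 can be pictured with a toy data structure. The sketch below is my own minimal illustration in Python, not the paper's Racket implementation; the `Facet` class, the `project` function, and the label name "alice" are all hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Set


@dataclass
class Facet:
    """One value carrying two versions: what an authorized observer sees, and what others see."""
    label: str        # hypothetical security label guarding the private version
    private: Any      # version visible when the observer holds `label`
    public: Any       # version visible otherwise


def project(value: Any, authorized: Set[str]) -> Any:
    """Collapse a (possibly nested) faceted value to the view an observer is allowed to see."""
    if isinstance(value, Facet):
        chosen = value.private if value.label in authorized else value.public
        return project(chosen, authorized)
    return value


# A salary Alice may see exactly, while everyone else sees a rounded figure.
salary = Facet(label="alice", private=83250, public=80000)
print(project(salary, {"alice"}))   # 83250
print(project(salary, set()))       # 80000
```

Because both versions are kept at once, execution can proceed without deciding the observer up front, which is what makes the policy-agnostic style described in item 1 possible.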
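For item 2, the classic greedy peeling heuristic (repeatedly delete a minimum-degree vertex and remember the densest intermediate subgraph) is the simplest member of the peeling family the abstract discusses. The sketch below is my own illustration of that baseline 2-approximation, not the paper's iterative or supermodular algorithms; the function name is invented.

```python
import heapq
from typing import Dict, List, Set, Tuple


def densest_subgraph_peel(edges: List[Tuple[int, int]]) -> Set[int]:
    """Greedy peeling: track density |E(S)|/|S| as min-degree vertices are removed."""
    adj: Dict[int, Set[int]] = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)

    alive = set(adj)
    m = len(edges)
    best_density, best_size = m / len(alive), len(alive)
    removal_order = []

    while len(alive) > 1:
        d, v = heapq.heappop(heap)
        if v not in alive or d != degree[v]:
            continue                          # stale heap entry, skip it
        alive.remove(v)
        removal_order.append(v)
        m -= degree[v]                        # edges incident to v disappear
        for u in adj[v]:
            if u in alive:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
        density = m / len(alive)
        if density >= best_density:           # ties prefer the smaller, later subgraph
            best_density, best_size = density, len(alive)

    # The best subgraph is everything not yet removed at the densest step.
    removed_at_best = len(adj) - best_size
    return set(adj) - set(removal_order[:removed_at_best])


print(densest_subgraph_peel([(1, 2), (2, 3), (1, 3), (3, 4)]))  # {1, 2, 3}
```

The paper's iterative peeling repeats passes like this one with reweighted degrees; the single pass above is only the starting point.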
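Item 3's contrast between reward-sharing rules can be made concrete with a toy Monte Carlo simulation. This is my own illustration under simplifying assumptions (a fixed block probability per share, no fees, and plain proportional sharing rather than pay-per-last-N-shares); the constants and function names are invented, and the paper's actual results are analytic, not simulated.

```python
import random
import statistics

BLOCK_REWARD = 100.0
P_BLOCK_PER_SHARE = 0.01                      # chance that a share is also a full block
PPS_RATE = BLOCK_REWARD * P_BLOCK_PER_SHARE   # actuarially fair fixed payment per share


def one_run(hash_rates, total_shares, rng):
    """Simulate a fixed amount of pool work; return miner 0's income
    under pay-per-share and under proportional (per-round) sharing."""
    n = len(hash_rates)
    pps_income, prop_income = 0.0, 0.0
    round_shares = [0] * n
    for _ in range(total_shares):
        miner = rng.choices(range(n), weights=hash_rates)[0]
        round_shares[miner] += 1
        if miner == 0:
            pps_income += PPS_RATE            # paid immediately, block or not
        if rng.random() < P_BLOCK_PER_SHARE:  # a block ends the current round
            prop_income += BLOCK_REWARD * round_shares[0] / sum(round_shares)
            round_shares = [0] * n
    return pps_income, prop_income


rng = random.Random(0)
runs = [one_run([1.0, 1.0, 2.0], 5_000, rng) for _ in range(200)]
pps_var = statistics.variance([pps for pps, _ in runs])
prop_var = statistics.variance([prop for _, prop in runs])
print(f"income variance  pay-per-share: {pps_var:.0f}   proportional: {prop_var:.0f}")
```

In this toy model, pay-per-share income varies only with how many shares miner 0 happens to submit, while proportional income also inherits the randomness of when blocks are found; that is the intuition behind the variance ordering the abstract proves.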
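The distinction item 4 turns on, ascription that merely checks a runtime label versus ascription that classifies (raises) it, can be sketched in a few lines. This is a hypothetical Python illustration of the general idea, not GLIO's actual semantics or the LIO API; the two-point lattice and function names are invented.

```python
from dataclasses import dataclass
from typing import Any

LATTICE = {"Public": 0, "Secret": 1}      # a two-point secrecy lattice


@dataclass(frozen=True)
class Labeled:
    value: Any
    label: str                            # runtime secrecy label


def ascribe_checking(v: Labeled, label: str) -> Labeled:
    """Checking ascription: verify the label, never modify it."""
    if LATTICE[v.label] > LATTICE[label]:
        raise RuntimeError(f"label {v.label} is not below ascribed {label}")
    return v                              # same value, same runtime label


def ascribe_classifying(v: Labeled, label: str) -> Labeled:
    """Type-guided classification: ascription raises the runtime label."""
    if LATTICE[v.label] > LATTICE[label]:
        raise RuntimeError(f"label {v.label} is not below ascribed {label}")
    return Labeled(v.value, label)        # label may go up, so behavior can change


x = Labeled(42, "Public")
print(ascribe_checking(x, "Secret").label)      # still "Public"
print(ascribe_classifying(x, "Secret").label)   # "Secret": ascription changed the label
```

Because the checking variant never changes the label, loosening an ascription can only remove failures rather than alter which values flow where, which is the intuition behind preserving the dynamic gradual guarantee.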