Title: Type-Based Declassification for Free
This work demonstrates the potential of using off-the-shelf programming languages and their theories to build sound language-based security tools. Our study focuses on information flow security with declassification policies, which express the flexible security policies needed in practice. We translate a security policy, with declassification, into an interface against which an unmodified standard typechecker can check a source program: if the program typechecks, it provably satisfies the policy. Our proof reduces security soundness, with declassification, to the mathematical foundation of data abstraction, Reynolds' abstraction theorem.
Award ID(s):
1718713
NSF-PAR ID:
10292871
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
22nd International Conference on Formal Engineering Methods
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
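As a rough illustration of the idea in the abstract above, the Haskell module below encodes a hypothetical policy, "a client may learn only whether the secret salary exceeds a threshold", as an abstract type whose only exported observer is the permitted declassifier. This is my own sketch under assumed names (Secret, classify, declassifyOver), not the paper's translation; the point is that any client that typechecks against this interface can, by data abstraction, depend on the secret only through the declassifier.

-- A hypothetical policy interface (illustrative names, not the paper's
-- encoding): the constructor of Secret is not exported, so clients can
-- observe a secret only through the one declassifier the policy permits.
module SalaryPolicy
  ( Secret          -- abstract: the constructor stays private
  , classify        -- trusted code wraps the raw value
  , declassifyOver  -- the only observation the policy allows
  ) where

newtype Secret = Secret Int

classify :: Int -> Secret
classify = Secret

declassifyOver :: Int -> Secret -> Bool
declassifyOver threshold (Secret n) = n > threshold

A client such as audit s = if declassifyOver 100000 s then "flag" else "ok" typechecks, while any attempt to inspect the salary itself does not; Reynolds' abstraction theorem is what turns that typechecking fact into a semantic guarantee.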
More Like this
  1. Scripts on webpages can steal sensitive user data. Much work has been done, in both modeling and implementation, to enforce information flow control (IFC) on webpages to mitigate such attacks. It is common to model scripts running under an IFC mechanism as a reactive program. However, this model does not account for dynamic script behavior such as user action simulation, new DOM element generation, or new event handler registration, all of which can leak information. In this paper, we investigate how to secure sensitive user information, while maintaining the flexibility of declassification, even in the presence of active attackers: those who can perform the aforementioned actions. Our approach extends prior work on secure multi-execution with stateful declassification by treating script-generated content specially, ensuring that it cannot manipulate declassification policies. We use a knowledge-based, progress-insensitive definition of security and prove that our enforcement mechanism is sound. We further prove that our enforcement mechanism is precise and provides robust declassification (i.e., active attackers cannot learn more than their passive counterparts). A minimal sketch of secure multi-execution with stateful release appears after this list.
  2. Faceted execution is a linguistic paradigm for dynamic information-flow control with the distinguishing feature that program values may be faceted. Such values represent multiple versions, or facets, at once, for different security labels. This enables policy-agnostic programming: a paradigm permitting expressive privacy policies to be declared independently of program logic. Although faceted execution prevents information leakage at runtime, it does not guarantee the absence of failure due to policy violations. In contrast with static mechanisms (such as security type systems), dynamic information-flow control permits arbitrarily expressive and dynamic privacy policies but imposes significant runtime overhead and delays discovery of any possible violations. In this paper, we present two different abstract interpretations for faceted execution in the presence of first-class policies. We first present an abstraction that allows one to reason statically about the shape of facets at each program point. This abstraction is useful for statically proving the absence of runtime errors and eliminating runtime checks related to facets. Reasoning statically about the contents of faceted values, however, is complicated by the presence of first-class security labels, especially because abstract labels may conflate more than one runtime label. To address these issues, we also develop a more precise abstraction that relies on an analysis tracking singleton heap abstractions. We present an implementation of our coarse abstraction in Racket and demonstrate its performance on several sample programs. We conclude by showing how our precise domain can be used to verify information-flow properties. (A sketch of faceted values appears after this list.)
  3. Privilege separation is an effective technique for improving software security. However, past partitioning systems do not allow programmers to make quantitative tradeoffs between security and performance. In this paper, we describe our toolchain, called PM, which automatically finds the optimal boundary in program partitioning. It does so by solving an integer-programming model that optimizes a user-chosen metric while satisfying security and performance constraints on the remaining metrics. We choose security metrics that reason about how well the computed partitions enforce information flow control, either to (1) protect the program from low-integrity inputs or (2) prevent leakage of program secrets. As a result, the functions in the sensitive module that fall on the optimal partition boundary automatically identify where declassification is necessary. We used PM on a set of real-world programs to protect confidentiality and integrity; the results show that, with moderate user guidance, PM finds partitions with a better balance between security and performance than those found by a previous tool that requires manual declassification. A brute-force sketch of the partitioning objective appears after this list.
  4. Noninterference is a popular semantic security condition because it offers strong end-to-end guarantees, it is inherently compositional, and it can be enforced using a simple security type system. Unfortunately, it is too restrictive for real systems. Mechanisms for downgrading information are needed to capture real-world security requirements, but downgrading eliminates the strong compositional security guarantees of noninterference. We introduce _nonmalleable information flow_, a new formal security condition that generalizes noninterference to permit controlled downgrading of both confidentiality and integrity. While previous work on robust declassification prevents adversaries from exploiting the downgrading of confidentiality, our key insight is _transparent endorsement_, a mechanism for downgrading integrity while defending against adversarial exploitation. Robust declassification appeared to break the duality of confidentiality and integrity by making confidentiality depend on integrity; transparent endorsement makes integrity depend on confidentiality, restoring this duality. We show how to extend a security-typed programming language with transparent endorsement and prove that the resulting static type system enforces nonmalleable information flow, which subsumes robust declassification and transparent endorsement. Finally, we describe an implementation of this type system in the context of Flame, a flow-limited authorization plugin for the Glasgow Haskell Compiler. A toy model of these two downgrading rules appears after this list.
  5. Non-interference is a popular way to enforce confidentiality of sensitive data. However, declassification of sensitive information is often needed in realistic applications, yet it breaks non-interference. We present ANOSY, an approximate knowledge synthesizer for quantitative declassification policies. ANOSY uses refinement types to automatically construct machine-checked over- and under-approximations of attacker knowledge for Boolean queries on multi-integer secrets. It also provides an AnosyT monad to track attacker knowledge across multiple declassification queries and to check for violations against user-specified policies in information flow control applications. We implement a prototype of ANOSY and show that it is precise and permissive: up to 14 declassification queries are permitted before a policy violation occurs when using the powerset-of-intervals domain. An interval-based sketch of this kind of knowledge tracking appears after this list.
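The sketch below, for item 1, shows secure multi-execution with a stateful release function in Haskell. The names (Level, Program, release, runSME) and the reduction of a reactive script to a pure function on a list of inputs are my own simplifications, not the paper's mechanism: the program is run once per level, the high copy sees every input, and the low copy sees only what the policy state chooses to release for each high input.

-- Minimal secure-multi-execution sketch (assumed names, not the paper's
-- mechanism).  A "reactive script" is simplified to a pure function on a
-- list of input values.
data Level = Low | High deriving (Eq, Show)

type Program = [Int] -> [Int]

-- Stateful declassification policy: here the low copy may learn only a
-- running count of high events, never their contents.
release :: Int -> Int -> (Int, Int)
release count _secret = (count + 1, count + 1)

-- Run the program once per level.  Low observers are shown only the
-- first component, so they can learn nothing beyond what `release` emits.
runSME :: Program -> [(Level, Int)] -> ([Int], [Int])
runSME prog events = (prog lowView, prog highView)
  where
    highView = map snd events
    lowView  = go 0 events
    go _  []                 = []
    go st ((Low,  v) : rest) = v : go st rest
    go st ((High, v) : rest) = let (st', d) = release st v in d : go st' rest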
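For item 2, the following is a coarse Haskell sketch of faceted values in the style of Austin and Flanagan; the type and function names (Faceted, project, mapFaceted) are my assumptions, not the paper's abstract domains. A faceted value keeps a private facet for observers who hold its label and a public facet for everyone else, and computation proceeds on both facets at once.

-- Coarse sketch of faceted values (assumed encoding, not the paper's
-- abstract interpretation).
data Faceted l a
  = Raw a                                 -- an ordinary, unlabelled value
  | Facet l (Faceted l a) (Faceted l a)   -- label, private facet, public facet

-- Projection: an observer holding a set of labels sees the private facet
-- for those labels and the public facet for all others.
project :: Eq l => [l] -> Faceted l a -> a
project _    (Raw a)            = a
project keys (Facet l priv pub)
  | l `elem` keys = project keys priv
  | otherwise     = project keys pub

-- Computation is applied to every facet, so no branch on a secret is
-- ever visible to an observer who may not read it.
mapFaceted :: (a -> b) -> Faceted l a -> Faceted l b
mapFaceted f (Raw a)            = Raw (f a)
mapFaceted f (Facet l priv pub) = Facet l (mapFaceted f priv) (mapFaceted f pub)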
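For item 3, the sketch below replaces PM's integer program with a brute-force search, purely for illustration; the data shapes (function names, per-call costs, a must-protect set) are assumptions, not the tool's input format. It picks the cheapest partition that keeps all secret-holding functions in the sensitive module and reports the boundary-crossing calls, which are the candidate declassification points.

import Data.List (minimumBy, subsequences)
import Data.Ord  (comparing)

-- Brute-force stand-in for PM's integer-programming model (illustration
-- only; the real tool solves an ILP rather than enumerating partitions).
type Fn = String

-- Cost of a partition: the total weight of calls that cross the boundary.
partitionCost :: [(Fn, Fn, Int)] -> [Fn] -> Int
partitionCost calls sensitive =
  sum [ c | (f, g, c) <- calls, (f `elem` sensitive) /= (g `elem` sensitive) ]

-- Choose the cheapest sensitive partition containing every function that
-- must be protected; calls that cross the chosen boundary mark where
-- declassification would be needed.
bestPartition :: [Fn] -> [Fn] -> [(Fn, Fn, Int)] -> ([Fn], [(Fn, Fn)])
bestPartition fns mustProtect calls = (chosen, boundary)
  where
    candidates = [ s | s <- subsequences fns, all (`elem` s) mustProtect ]
    chosen     = minimumBy (comparing (partitionCost calls)) candidates
    boundary   = [ (f, g) | (f, g, _) <- calls
                          , (f `elem` chosen) /= (g `elem` chosen) ]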
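For item 4, here is a deliberately tiny two-point model of the two downgrading rules. The paper enforces them statically in a security type system; this runtime caricature with Maybe, and the reduction of labels to one confidentiality bit and one integrity bit, are my simplifications intended only to show the duality.

-- Toy two-point model of robust declassification and transparent
-- endorsement (my reconstruction, not the paper's calculus).
data Conf  = Public  | Secret    deriving (Eq, Show)
data Integ = Trusted | Untrusted deriving (Eq, Show)

data Labeled a = Labeled { conf :: Conf, integ :: Integ, payload :: a }

-- Robust declassification: only trusted data may have its confidentiality
-- lowered, so attacker-influenced data cannot drive a release.
declassify :: Labeled a -> Maybe (Labeled a)
declassify v
  | integ v == Trusted = Just v { conf = Public }
  | otherwise          = Nothing

-- Transparent endorsement (the dual): only data its provider could read
-- may have its integrity raised, so no one is made to vouch for a secret
-- they cannot see.
endorse :: Labeled a -> Maybe (Labeled a)
endorse v
  | conf v == Public = Just v { integ = Trusted }
  | otherwise        = Nothing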
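Finally, for item 5, the sketch below tracks attacker knowledge about a single integer secret as one interval and refines it after each threshold query, refusing to answer once a policy bound would be violated. ANOSY itself works with refinement types, multi-integer secrets, and a powerset-of-intervals domain; the names and the single-interval simplification here are mine.

-- Single-interval sketch of knowledge tracking (assumed API, not ANOSY's).
data Interval = Interval { lower :: Int, upper :: Int } deriving Show

width :: Interval -> Int
width (Interval l u) = u - l + 1

-- Refine the attacker's knowledge after learning the answer to "secret <= t?".
refine :: Int -> Bool -> Interval -> Interval
refine t True  (Interval l u) = Interval l (min u t)
refine t False (Interval l u) = Interval (max l (t + 1)) u

-- Answer a threshold query only if the resulting knowledge still leaves
-- the secret within at least `minWidth` possible values.
answerIfPermitted :: Int -> Int -> Interval -> Int -> Maybe (Bool, Interval)
answerIfPermitted minWidth secret knowledge t
  | width knowledge' >= minWidth = Just (answer, knowledge')
  | otherwise                    = Nothing
  where
    answer     = secret <= t
    knowledge' = refine t answer knowledge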