To prevent applications from leaking users' private data to attackers, researchers have developed runtime information flow control (IFC) mechanisms. Most existing approaches are based on either taint tracking or multi-execution, and the same technique is used to protect the entire application. However, today's applications are typically composed of multiple components from heterogeneous and unequally trusted sources. The goal of this paper is to develop a framework that enables the flexible composition of IFC enforcement mechanisms. More concretely, we focus on reactive programs, an abstract model of event-driven programs that includes web and mobile applications. We formalize the semantics of existing IFC enforcement mechanisms with well-defined interfaces for composition, define knowledge-based security guarantees that precisely quantify the effect of implicit leaks under taint tracking, and prove the soundness of all the composed systems with which we instantiate the framework. We identify requirements that future enforcement mechanisms must satisfy to be securely composed in our framework. Finally, we implement a prototype in OCaml and compare the effects of different compositions.
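To make the composition idea concrete, here is a minimal OCaml sketch, not taken from the paper: all names (`MECHANISM`, `Taint`, `tainted`) are illustrative. It shows one way an enforcement mechanism such as taint tracking can sit behind a common interface so that differently protected components can be wired together:

```ocaml
(* Two-point security lattice. *)
type label = Public | Secret

(* A value carries the join of the labels that influenced it. *)
type 'a tainted = { value : 'a; lbl : label }

let join a b = if a = Secret || b = Secret then Secret else Public

(* Common interface: an enforcement mechanism maps an input event to a
   new state plus the outputs the component is allowed to emit. *)
module type MECHANISM = sig
  type state
  val init : state
  val step : state -> string tainted -> state * string tainted list
end

(* Taint tracking: propagate labels and suppress secret-tainted outputs
   on public channels. Implicit flows are only approximated, which is
   why the paper's knowledge-based guarantees quantify that leak. *)
module Taint : MECHANISM = struct
  type state = unit
  let init = ()
  let step () input =
    let out = { value = "echo: " ^ input.value; lbl = join input.lbl Public } in
    ((), if out.lbl = Secret then [] else [ out ])
end
```

A multi-execution mechanism could implement the same `MECHANISM` signature, so each component of an application can be protected by whichever enforcement fits its trust level.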
Knowledge-Based Security of Dynamic Secrets for Reactive Programs
Scripts on webpages can steal sensitive user data. Much work has been done, both in modeling and implementation, to enforce information flow control (IFC) on webpages to mitigate such attacks. It is common to model scripts running under an IFC mechanism as a reactive program. However, this model does not account for dynamic script behavior such as simulating user actions, generating new DOM elements, or registering new event handlers, any of which could leak information. In this paper, we investigate how to secure sensitive user information, while maintaining the flexibility of declassification, even in the presence of active attackers: those who can perform the aforementioned actions. Our approach extends prior work on secure multi-execution with stateful declassification by treating script-generated content specially, ensuring that declassification policies cannot be manipulated by it. We use a knowledge-based, progress-insensitive definition of security and prove that our enforcement mechanism is sound. We further prove that our enforcement mechanism is precise and provides robust declassification (i.e., an active attacker cannot learn more than a passive one).
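As an illustration of the underlying enforcement idea, the following OCaml sketch shows one step of secure multi-execution with a stateful declassifier. It is a simplification under assumed names (`handler`, `declassify`, `sme_step`), not the paper's mechanism: two security levels, and a low copy of the handler that only ever sees the declassified view of secret inputs:

```ocaml
type level = Low | High

(* The same event handler runs once per level; its state is a counter. *)
let handler (count : int) (input : string) : int * string =
  (count + 1, Printf.sprintf "seen %d events, last = %s" (count + 1) input)

(* Stateful declassifier: release only the length of a secret input,
   and only while a release budget (its state) remains. *)
let declassify budget secret =
  if budget > 0 then (budget - 1, string_of_int (String.length secret))
  else (budget, "<default>")

(* One SME step. The High copy sees the real input; the Low copy sees
   only the declassified view, and public outputs come from the Low
   copy, so secrets cannot reach public channels directly. *)
let sme_step (hi, lo, budget) (lvl, input) =
  match lvl with
  | High ->
      let budget', released = declassify budget input in
      let hi', _secret_out = handler hi input in
      let lo', low_out = handler lo released in
      ((hi', lo', budget'), low_out)
  | Low ->
      let hi', _ = handler hi input in
      let lo', low_out = handler lo input in
      ((hi', lo', budget), low_out)
```

Because the low copy never observes raw secret inputs, script-generated events can at most replay what the declassifier has already chosen to release.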
- Award ID(s):
- 1704542
- NSF-PAR ID:
- 10073112
- Date Published:
- 2018
- Journal Name:
- 2018 IEEE 31st Computer Security Foundations Symposium (CSF)
- Page Range / eLocation ID:
- 175 to 188
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Attackers rely upon a vast array of tools for automating attacks against vulnerable servers and services. It is often the case that when vulnerabilities are disclosed, scripts for detecting and exploiting them in tools such as Nmap and Metasploit are released soon after, leading to the immediate identification and compromise of vulnerable systems. Honeypots, honeynets, tarpits, and other deceptive techniques can be used to slow attackers down; however, such approaches have difficulty keeping up with the sheer number of vulnerabilities being discovered and attacking scripts that are being released. To address this issue, this paper describes an approach for applying concolic execution to attacking scripts in Nmap in order to automatically generate lightweight fake versions of the vulnerable services that can fool the scripts. By doing so in an automated and scalable manner, the approach enables rapid deployment of custom honeyfarms that leverage the results of concolic execution to trick an attacker's script into returning a result chosen by the honeyfarm, making the script unreliable for use by the attacker.
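For intuition, the OCaml sketch below shows only the serving side of such a honeyfarm, with invented probe and banner strings. It assumes concolic execution of a detection script has already produced, offline, responses that drive the script down its "vulnerable" branch; the fake service never implements the real protocol and simply replays them:

```ocaml
(* Hypothetical solver-derived responses: probe prefix -> canned reply
   that satisfies the attacking script's checks. *)
let crafted_responses = [
  ("GET / HTTP/1.1", "HTTP/1.1 200 OK\r\nServer: Apache/2.2.8\r\n\r\n");
  ("SSH-2.0-",       "SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1\r\n");
]

let starts_with prefix s =
  String.length s >= String.length prefix
  && String.sub s 0 (String.length prefix) = prefix

(* Replay whichever canned answer the attacker's script expects. *)
let respond probe =
  match List.find_opt (fun (p, _) -> starts_with p probe) crafted_responses with
  | Some (_, reply) -> reply
  | None -> ""
```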
-
Formally verified correctness is one of the most desirable properties of software systems. But despite great progress made via interactive theorem provers, such as Coq, writing proof scripts for verification remains one of the most effort-intensive (and often prohibitively difficult) software development activities. Recent work has created tools that automatically synthesize proofs or proof scripts. For example, CoqHammer can prove 26.6% of theorems completely automatically by reasoning using precomputed facts, while TacTok and ASTactic, which use machine learning to model proof scripts and then perform biased search through the proof-script space, can prove 12.9% and 12.3% of the theorems, respectively. Further, these three tools are highly complementary; together, they can prove 30.4% of the theorems fully automatically. Our key insight is that control over the learning process can produce a diverse set of models, and that, due to the unique nature of proof synthesis (the existence of the theorem prover, an oracle that infallibly judges a proof's correctness), this diversity can significantly improve these tools' proving power. Accordingly, we develop Diva, which uses a diverse set of models with TacTok's and ASTactic's search mechanism to prove 21.7% of the theorems. That is, Diva proves 68% more theorems than TacTok and 77% more than ASTactic. Complementary to CoqHammer, Diva proves 781 theorems (27% added value) that CoqHammer does not, and 364 theorems no existing tool has proved automatically. Together with CoqHammer, Diva proves 33.8% of the theorems, the largest fraction to date. We explore nine dimensions for learning diverse models, and identify which dimensions lead to the most useful diversity. Further, we develop an optimization to speed up Diva's execution by 40X. Our study introduces a completely new idea for using diversity in machine learning to improve the power of state-of-the-art proof-script synthesis techniques, and empirically demonstrates that the improvement is significant on a dataset of 68K theorems from 122 open-source software projects.
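The core insight is small enough to sketch. The OCaml fragment below uses stand-in names (`model`, `prover_accepts`, the latter representing a check by the theorem prover) and is not Diva's implementation: because the prover is an infallible oracle, candidates from any number of diverse models can be pooled and the first proof script that checks is kept, so extra models can only help:

```ocaml
(* A learned model maps a theorem statement to candidate proof scripts. *)
type model = string -> string list

let try_models (models : model list) (prover_accepts : string -> bool)
    (theorem : string) : string option =
  models
  |> List.concat_map (fun m -> m theorem)   (* pool candidates from all models *)
  |> List.find_opt prover_accepts           (* the oracle discards wrong proofs *)
```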
-
Noninterference is a popular semantic security condition because it offers strong end-to-end guarantees, it is inherently compositional, and it can be enforced using a simple security type system. Unfortunately, it is too restrictive for real systems. Mechanisms for downgrading information are needed to capture real-world security requirements, but downgrading eliminates the strong compositional security guarantees of noninterference. We introduce nonmalleable information flow, a new formal security condition that generalizes noninterference to permit controlled downgrading of both confidentiality and integrity. While previous work on robust declassification prevents adversaries from exploiting the downgrading of confidentiality, our key insight is transparent endorsement, a mechanism for downgrading integrity while defending against adversarial exploitation. Robust declassification appeared to break the duality of confidentiality and integrity by making confidentiality depend on integrity, but transparent endorsement makes integrity depend on confidentiality, restoring this duality. We show how to extend a security-typed programming language with transparent endorsement and prove that this static type system enforces nonmalleable information flow, a new security property that subsumes robust declassification and transparent endorsement. Finally, we describe an implementation of this type system in the context of Flame, a flow-limited authorization plugin for the Glasgow Haskell Compiler.
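The duality can be illustrated with a two-point lattice in each dimension. The OCaml sketch below is illustrative only, not the paper's type system: each downgrading operation guards on the label from the other dimension, which is exactly what makes the two conditions dual:

```ocaml
type conf = Public | Secret
type integ = Trusted | Untrusted

(* A value labeled with both a confidentiality and an integrity label. *)
type 'a labeled = { v : 'a; c : conf; i : integ }

(* Robust declassification: only high-integrity data may be
   declassified, so an attacker cannot influence what is released. *)
let declassify x =
  if x.i = Trusted then Some { x with c = Public } else None

(* Transparent endorsement (the dual): only data its provider could
   already read may be endorsed, so an attacker cannot launder a
   secret into a trusted decision. *)
let endorse x =
  if x.c = Public then Some { x with i = Trusted } else None
```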
-
Non-interference is a popular way to enforce confidentiality of sensitive data. However, declassification of sensitive information is often needed in realistic applications but breaks non-interference. We present ANOSY, an approximate knowledge synthesizer for quantitative declassification policies. ANOSY uses refinement types to automatically construct machine-checked over- and under-approximations of attacker knowledge for boolean queries on multi-integer secrets. It also provides an AnosyT monad to track attacker knowledge over multiple declassification queries and to check for violations of user-specified policies in information flow control applications. We implement a prototype of ANOSY and show that it is precise and permissive: using the powerset-of-intervals domain, up to 14 declassification queries are permitted before a policy violation occurs.
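The interval idea can be sketched briefly. The OCaml fragment below uses invented names (`refine`, `allowed`) to show how attacker knowledge about a single integer secret shrinks as threshold queries are answered, and how a policy bound could gate further queries; ANOSY's actual machinery (refinement types, the powerset-of-intervals domain, the AnosyT monad) is much richer:

```ocaml
(* The attacker knows only that the secret lies in [lo, hi]. *)
type interval = { lo : int; hi : int }

(* Refine knowledge after observing the answer to "is the secret < t ?". *)
let refine k t answer =
  if answer then { k with hi = min k.hi (t - 1) }
  else { k with lo = max k.lo t }

(* A policy check in the spirit of AnosyT: reject further queries once
   the interval would pin the secret down too precisely. *)
let allowed k min_width = k.hi - k.lo + 1 >= min_width
```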