Title: Efficient Dynamic Proof of Retrievability for Cold Storage
Storage-as-a-service (STaaS) permits the client to outsource her data to the cloud, thereby reducing data management and maintenance costs. However, STaaS also brings significant data integrity and soundness concerns since the storage provider might not keep the client data intact and retrievable all the time (e.g., cost saving via deletions). Proof of Retrievability (PoR) can validate the integrity and retrievability of remote data effectively. This technique can be useful for regular audits to monitor data compromises, as well as to comply with standard data regulations. In particular, cold storage applications (e.g., MS Azure, Amazon Glacier) require regular and frequent audits with less frequent data modification. Yet, despite their merits, existing PoR techniques generally focus on other metrics (e.g., low storage, fast update, metadata privacy) but not audit efficiency (e.g., low audit time, small proof size). Hence, there is a need to develop new PoR techniques that achieve efficient data audit while preserving update and retrieval performance. In this paper, we propose Porla, a new PoR framework that permits efficient data audit, update, and retrieval functionalities simultaneously. Porla permits data audit in both private and public settings, each of which features asymptotically (and concretely) smaller audit-proof size and lower audit time than all the prior works while retaining the same asymptotic data update overhead. Porla achieves all these properties by composing erasure codes with verifiable computation techniques which, to our knowledge, is a new approach to PoR design. We address several challenges that arise in such a composition by creating a new homomorphic authenticated commitment scheme, which can be of independent interest. We fully implemented Porla and evaluated its performance on commodity cloud (i.e., Amazon EC2) under various settings. Experimental results demonstrated that Porla achieves two to four orders of magnitude smaller audit proof size with 4x–18000x lower audit time than all prior schemes in both private and public audit settings at the cost of only 2x–3x slower update.
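For readers unfamiliar with the audit flow that Porla optimizes, below is a minimal sketch of a classic private-audit PoR based on homomorphic linear authenticators (in the style of Shacham–Waters). It is not Porla's construction: Porla instead composes erasure codes with verifiable computation over a homomorphic authenticated commitment. The modulus, challenge size, and all names below are illustrative.

```python
# Minimal private-audit PoR sketch with homomorphic linear authenticators
# (Shacham-Waters style). NOT Porla's construction; for illustration only.
import hashlib, hmac, secrets

P = (1 << 127) - 1  # public prime modulus (illustrative choice)

def keygen():
    # secret scalar alpha plus a PRF key (PRF modeled here as HMAC-SHA256)
    return {"alpha": secrets.randbelow(P), "prf": secrets.token_bytes(16)}

def prf(key, i):
    return int.from_bytes(hmac.new(key, i.to_bytes(8, "big"),
                                   hashlib.sha256).digest(), "big") % P

def tag_blocks(sk, blocks):
    # sigma_i = PRF_k(i) + alpha * m_i (mod P), outsourced alongside the data
    return [(prf(sk["prf"], i) + sk["alpha"] * m) % P for i, m in enumerate(blocks)]

def challenge(num_blocks, ell=3):
    # verifier samples a few random block indices with random coefficients
    return [(secrets.randbelow(num_blocks), secrets.randbelow(P)) for _ in range(ell)]

def prove(blocks, tags, chal):
    # prover aggregates challenged blocks and tags into a constant-size proof
    mu = sum(nu * blocks[i] for i, nu in chal) % P
    sigma = sum(nu * tags[i] for i, nu in chal) % P
    return mu, sigma

def verify(sk, chal, proof):
    mu, sigma = proof
    expected = (sum(nu * prf(sk["prf"], i) for i, nu in chal) + sk["alpha"] * mu) % P
    return sigma == expected

blocks = [secrets.randbelow(P) for _ in range(16)]  # toy "outsourced" file blocks
sk = keygen()
tags = tag_blocks(sk, blocks)
chal = challenge(len(blocks))
assert verify(sk, chal, prove(blocks, tags, chal))
```

The point of the sketch is the shape of the protocol: a short random challenge, a constant-size aggregated response, and a cheap check. Porla targets this same audit flow, but obtains the smaller proof size and lower audit time reported above by proving the audit relation with verifiable computation over erasure-coded, homomorphically committed data.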
Award ID(s):
1917627
NSF-PAR ID:
10486511
Author(s) / Creator(s):
Publisher / Repository:
Internet Society
Date Published:
ISBN:
1-891562-83-5
Subject(s) / Keyword(s):
Authentication Cloud security Proof of Retrievability Cold storage
Format(s):
Medium: X
Location:
San Diego, CA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Dunkelman, D.; Dziembowski, S. (Eds.)
    We construct new private-information-retrieval protocols in the single-server setting. Our schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small “hint” about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time—yielding sublinear amortized cost. Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage. 
  2. System logs are invaluable to forensic audits, but grow so large that in practice fine-grained logs are quickly discarded – if captured at all – preventing the real-world use of the provenance-based investigation techniques that have gained popularity in the literature. Encouragingly, forensically-informed methods for reducing the size of system logs are a subject of frequent study. Unfortunately, many of these techniques are designed for offline reduction in a central server, meaning that the up-front cost of log capture, storage, and transmission must still be paid at the endpoints. Moreover, to date these techniques exist as isolated (and, often, closed-source) implementations; there does not exist a comprehensive framework through which the combined benefits of multiple log reduction techniques can be enjoyed. In this work, we present FAuST, an audit daemon for performing streaming audit log reduction at system endpoints. After registering with a log source (e.g., via Linux Audit’s audisp utility), FAuST incrementally builds an in-memory provenance graph of recent system activity. During graph construction, log reduction techniques that can be applied to local subgraphs are invoked immediately using event callback handlers, while techniques meant for application on the global graph are invoked in periodic epochs. We evaluate FAuST, loaded with eight different log reduction modules from the literature, against the DARPA Transparent Computing datasets. Our experiments demonstrate the efficient performance of FAuST and identify certain subsets of reduction techniques that are synergistic with one another. Thus, FAuST dramatically simplifies the evaluation and deployment of log reduction techniques. 
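    (An illustrative code skeleton of this per-event and per-epoch reduction pattern appears after this list.)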
  3. Private Information Retrieval (PIR) allows several clients to query a database held by one or more servers, such that the contents of their queries remain private. Prior PIR schemes have achieved sublinear communication and computation by leveraging computational assumptions, federating trust among many servers, relaxing security to permit differentially private leakage, refactoring effort into an offline stage to reduce online costs, or amortizing costs over a large batch of queries. In this work, we present an efficient PIR protocol that combines all of the above techniques to achieve constant amortized communication and computation complexity in the size of the database and constant client work. We leverage differentially private leakage in order to provide better trade-offs between privacy and efficiency. Our protocol achieves speedups up to and exceeding 10x in practical settings compared to state-of-the-art PIR protocols, and can scale to batches with hundreds of millions of queries on cheap commodity AWS machines. Our protocol builds upon a new secret sharing scheme that is both incremental and non-malleable, which may be of interest to a wider audience. Our protocol provides security up to abort against malicious adversaries that can corrupt all but one party.
  4. This paper introduces protocols for authenticated private information retrieval. These schemes enable a client to fetch a record from a remote database server such that (a) the server does not learn which record the client reads, and (b) the client either obtains the "authentic" record or detects server misbehavior and safely aborts. Both properties are crucial for many applications. Standard private-information-retrieval schemes either do not ensure this form of output authenticity, or they require multiple database replicas with an honest majority. In contrast, we offer multi-server schemes that protect security as long as at least one server is honest. Moreover, if the client can obtain a short digest of the database out of band, then our schemes require only a single server. Performing an authenticated private PGP-public-key lookup on an OpenPGP key server's database of 3.5 million keys (3 GiB), using two non-colluding servers, takes under 1.2 core-seconds of computation, essentially matching the time taken by unauthenticated private information retrieval. Our authenticated single-server schemes are 30–100× more costly than state-of-the-art unauthenticated single-server schemes, though they achieve incomparably stronger integrity properties.
  5. Searchable encryption has received significant attention from the research community with various constructions being proposed, each achieving asymptotically optimal complexity for specific metrics (e.g., search, update). Despite their elegance, the recent attacks and deployment efforts have shown that the optimal asymptotic complexity might not always imply practical performance, especially if the application demands high privacy. In this article, we introduce a novel Dynamic Searchable Symmetric Encryption (DSSE) framework called Incidence Matrix (IM)-DSSE, which achieves a high level of privacy, efficient search/update, and low client storage with actual deployments on real cloud settings. We harness an incidence matrix along with two hash tables to create an encrypted index, on which both search and update operations can be performed effectively with minimal information leakage. This simple set of data structures surprisingly offers a high level of DSSE security while achieving practical performance. Specifically, IM-DSSE achieves forward-privacy, backward-privacy, and size-obliviousness simultaneously. We also create several DSSE variants, each offering different trade-offs that are suitable for different cloud applications and infrastructures. We fully implemented our framework and evaluated its performance on a real cloud system (Amazon EC2). We have released IM-DSSE as an open-source library for wide development and adaptation.
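    (An illustrative sketch of the incidence-matrix index layout appears after this list.)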
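For item 2 (FAuST), here is a hedged skeleton of the streaming-reduction pattern the abstract describes: an in-memory provenance graph built from an audit event stream, local reduction callbacks fired on every event, and global reductions run in periodic epochs. The event format, class names, and reduction rules below are invented for illustration and are not FAuST's actual interfaces.

```python
# Illustrative skeleton of streaming audit-log reduction (not FAuST's real API).
# Local reducers run per event via callbacks; global reducers run once per epoch.
import networkx as nx  # assumed third-party dependency for the provenance graph

class StreamingReducer:
    def __init__(self, epoch_size=10_000):
        self.graph = nx.MultiDiGraph()
        self.local_reducers = []   # applied to the affected subgraph on every event
        self.global_reducers = []  # applied to the whole graph once per epoch
        self.epoch_size = epoch_size
        self._events = 0

    def register_local(self, fn):
        self.local_reducers.append(fn)

    def register_global(self, fn):
        self.global_reducers.append(fn)

    def ingest(self, event):
        # event: (subject, action, object), e.g. ("pid:42", "read", "file:/etc/passwd")
        subj, action, obj = event
        self.graph.add_edge(subj, obj, action=action)
        for fn in self.local_reducers:
            fn(self.graph, subj, obj)
        self._events += 1
        if self._events % self.epoch_size == 0:
            for fn in self.global_reducers:
                fn(self.graph)

def dedup_edges(graph, u, v):
    # toy local reducer: keep at most one edge per (u, v, action) triple
    seen, extra = set(), []
    for key, attrs in (graph.get_edge_data(u, v) or {}).items():
        if attrs["action"] in seen:
            extra.append(key)
        else:
            seen.add(attrs["action"])
    for key in extra:
        graph.remove_edge(u, v, key=key)

def drop_isolates(graph):
    # toy global reducer: discard nodes that no longer carry any edges
    graph.remove_nodes_from(list(nx.isolates(graph)))

daemon = StreamingReducer(epoch_size=2)
daemon.register_local(dedup_edges)
daemon.register_global(drop_isolates)
for ev in [("pid:42", "read", "file:/tmp/a"), ("pid:42", "read", "file:/tmp/a")]:
    daemon.ingest(ev)
```

The real daemon registers with a log source such as Linux Audit's audisp and loads the reduction modules evaluated in the paper; the skeleton only shows where the two kinds of hooks fire.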
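For item 5 (IM-DSSE), here is a hedged sketch of the incidence-matrix layout the abstract describes: a keyword-by-file bit matrix addressed through two hash tables, where a search reads one row and an update rewrites one column. The real scheme keeps the matrix encrypted and manages keys and counters to bound leakage; all of that is omitted, and the names below are illustrative.

```python
# Toy, PLAINTEXT incidence-matrix index in the spirit of IM-DSSE.
# The real scheme stores the matrix encrypted (with keys and counters);
# only the data layout and the row/column access pattern are shown here.

class IncidenceMatrixIndex:
    def __init__(self, max_keywords, max_files):
        # row i <-> keyword, column j <-> file; bit = "keyword occurs in file"
        self.matrix = [[0] * max_files for _ in range(max_keywords)]
        self.kw_table = {}    # hash table: keyword -> row index
        self.file_table = {}  # hash table: file id -> column index

    def _row(self, keyword):
        return self.kw_table.setdefault(keyword, len(self.kw_table))

    def _col(self, file_id):
        return self.file_table.setdefault(file_id, len(self.file_table))

    def update(self, file_id, keywords):
        # (re)write one column: set exactly the bits for this file's keywords
        j = self._col(file_id)
        for row in self.matrix:
            row[j] = 0
        for kw in keywords:
            self.matrix[self._row(kw)][j] = 1

    def search(self, keyword):
        # read one row and map its set bits back to file ids
        if keyword not in self.kw_table:
            return []
        row = self.matrix[self.kw_table[keyword]]
        by_col = {j: f for f, j in self.file_table.items()}
        return [by_col[j] for j, bit in enumerate(row) if bit]

idx = IncidenceMatrixIndex(max_keywords=8, max_files=4)
idx.update("doc1", ["cloud", "audit"])
idx.update("doc2", ["audit"])
assert set(idx.search("audit")) == {"doc1", "doc2"}
```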