This paper introduces protocols for authenticated private information retrieval. These schemes enable a client to fetch a record from a remote database server such that (a) the server does not learn which record the client reads, and (b) the client either obtains the "authentic" record or detects server misbehavior and safely aborts. Both properties are crucial for many applications. Standard private-information-retrieval schemes either do not ensure this form of output authenticity, or they require multiple database replicas with an honest majority. In contrast, we offer multi-server schemes that protect security as long as at least one server is honest. Moreover, if the client can obtain a short digest of the database out of band, then our schemes require only a single server. Performing an authenticated private PGP-public-key lookup on an OpenPGP key server's database of 3.5 million keys (3 GiB), using two non-colluding servers, takes under 1.2 core-seconds of computation, essentially matching the time taken by unauthenticated private information retrieval. Our authenticated single-server schemes are 30-100× more costly than state-of-the-art unauthenticated single-server schemes, though they achieve incomparably stronger integrity properties.
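A minimal toy sketch of the two-server setting described above, assuming the simplest possible integrity check: the client splits its query across two non-colluding servers and verifies the recovered record against per-record SHA-256 hashes that stand in for the out-of-band digest. This illustrates the security goals only, not the paper's construction; the record contents and parameters are made up.

```python
# Toy two-server PIR with an integrity check. NOT the paper's scheme:
# it only shows the client's two goals, hiding the queried index and
# verifying the returned record before accepting it.
import secrets
import hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_queries(n, index):
    """Split the query for `index` into two random-looking bit vectors."""
    q0 = [secrets.randbits(1) for _ in range(n)]
    q1 = list(q0)
    q1[index] ^= 1                      # the vectors differ only at `index`
    return q0, q1

def server_answer(db, query):
    """Each server XORs together the records selected by its query vector."""
    ans = bytes(len(db[0]))
    for rec, bit in zip(db, query):
        if bit:
            ans = xor_bytes(ans, rec)
    return ans

def client_recover(ans0, ans1, index, record_hashes):
    rec = xor_bytes(ans0, ans1)         # XOR of the two answers is the record
    if hashlib.sha256(rec).digest() != record_hashes[index]:
        raise RuntimeError("server misbehavior detected; aborting")
    return rec

# Usage: four fixed-size records; hashes play the role of the short digest.
db = [b"rec0####", b"rec1####", b"rec2####", b"rec3####"]
hashes = [hashlib.sha256(r).digest() for r in db]
q0, q1 = make_queries(len(db), 2)
print(client_recover(server_answer(db, q0), server_answer(db, q1), 2, hashes))
```

If either server returns a malformed answer, the hash check fails and the client aborts rather than accept a tampered record.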
Aegean: replication beyond the client-server model
This paper presents Aegean, a new approach that allows fault-tolerant replication to be implemented beyond the confines of the client-server model. In today’s computing, where services are rarely standalone, traditional replication protocols such as Primary-Backup, Paxos, and PBFT are not directly applicable, as they were designed for the client-server model. When services interact, these protocols run into a number of problems, affecting both correctness and performance. In this paper, we rethink the design of replication protocols in the presence of interactions between services and introduce new techniques that accommodate such interactions safely and efficiently. Our evaluation shows that a prototype implementation of Aegean not only ensures correctness in the presence of service interactions, but can further improve throughput by an order of magnitude.
- Award ID(s):
- 1814507
- PAR ID:
- 10123582
- Date Published:
- Journal Name:
- Proceedings of the 27th ACM Symposium on Operating Systems Principles
- Page Range / eLocation ID:
- 385 to 398
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning, in which many consecutive summations (averages) of model weights are performed to derive a good model. Previous protocols, such as Bell et al. (CCS ’20), have been designed for a single round and are adapted to the federated learning setting by repeating the protocol multiple times. Flamingo eliminates the need for the per-round setup of previous protocols, and has a new lightweight dropout-resilience protocol to ensure that if clients leave in the middle of a sum, the server can still obtain a meaningful result. Furthermore, Flamingo introduces a new way to locally choose the so-called client neighborhood introduced by Bell et al. These techniques help Flamingo reduce the number of interactions between clients and the server, resulting in a significant reduction in the end-to-end runtime for a full training session over prior work. We implement and evaluate Flamingo and show that it can securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and the model converges without a loss in accuracy compared to a non-private federated learning system.
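A simplified illustration of the pairwise-masking idea underlying this line of secure-aggregation work (Bell et al. and Flamingo build far more elaborate protocols on top of it, with key agreement, client neighborhoods, and dropout handling): each pair of clients cancels a shared random mask in the sum, so the server learns only the total. The modulus, seed, and inputs below are arbitrary.

```python
# Pairwise-masked aggregation sketch: each pair of clients adds a shared
# random mask to one input and subtracts it from the other, so all masks
# cancel in the server's sum while individual inputs stay hidden.
import random

MOD = 2**32  # work modulo a fixed modulus so masks are uniform

def masked_inputs(inputs, seed=0):
    rng = random.Random(seed)            # stands in for pairwise key agreement
    n = len(inputs)
    masked = list(inputs)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(MOD)        # mask shared by clients i and j
            masked[i] = (masked[i] + m) % MOD
            masked[j] = (masked[j] - m) % MOD
    return masked

def server_aggregate(masked):
    # The server only ever sees masked values; their sum equals the true sum.
    return sum(masked) % MOD

inputs = [3, 5, 7, 11]
print(server_aggregate(masked_inputs(inputs)))   # 26, without revealing any input
```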
-
Dunkelman, D.; Dziembowski, S. (Eds.) We construct new private-information-retrieval protocols in the single-server setting. Our schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small “hint” about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time, yielding sublinear amortized cost. Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.
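A toy sketch of the offline/online structure this abstract describes, using an illustrative set-parity hint: the client holds parities of random index sets and answers each query by sending one set with the target index removed. Real constructions generate the hint on the server side and add set resampling and other steps needed for privacy and for genuinely sublinear online time; the database values and parameters here are invented.

```python
# Offline/online PIR sketch. For simplicity the hint is computed here from
# the full database; in the actual protocols the server produces it without
# learning which indices the client will later query.
import random

def offline_hint(db, num_sets, set_size, seed=1):
    rng = random.Random(seed)
    sets, parities = [], []
    for _ in range(num_sets):
        s = set(rng.sample(range(len(db)), set_size))
        p = 0
        for i in s:
            p ^= db[i]                      # parity (XOR) of the set's records
        sets.append(s)
        parities.append(p)
    return sets, parities

def online_query(sets, parities, index):
    for s, p in zip(sets, parities):
        if index in s:
            return s - {index}, p           # punctured set goes to the server
    raise LookupError("no hint set covers this index; re-run offline phase")

def server_answer(db, punctured_set):
    p = 0
    for i in punctured_set:
        p ^= db[i]
    return p

db = [17, 4, 42, 9, 23, 8, 15, 16]
sets, parities = offline_hint(db, num_sets=16, set_size=4)
punctured, stored_parity = online_query(sets, parities, index=2)
print(server_answer(db, punctured) ^ stored_parity)   # recovers db[2] == 42
```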
-
We present the first formal-methods analysis of the Session Binding Proxy (SBP) protocol, which protects a vulnerable system by wrapping it and introducing a reverse proxy between the system and its clients. SBP mitigates theft of authentication cookies by cryptographically binding the authentication cookie, issued by the server to the client, to an underlying Transport Layer Security (TLS) channel using the channel's master secret and a secret key known only by the proxy. An adversary who steals a bound cookie cannot reuse it to create malicious requests on a separate connection, because the cookie's channel binding will not match the adversary's channel. SBP seeks to achieve this goal without modifications to the client or the server software, rendering the client and server "oblivious protocol participants" that are not aware of the SBP session. Our analysis verifies that the original SBP design mitigates cookie stealing under the client's cryptographic assumptions but fails to authenticate the client to the proxy. Because of two issues, the proxy has no assurance that it shares a session context with a legitimate client: SBP assumes an older, flawed version of TLS (1.2), and it relies on legacy server usernames and passwords to authenticate clients. Due to these issues, there is no guarantee of cookie-stealing resistance from the proxy's cryptographic perspective. Using the Cryptographic Protocol Shapes Analyzer (CPSA), we model and analyze the original SBP and three variations in the Dolev-Yao network intruder model. Our models differ in the version of TLS they use: 1.2 (original SBP), 1.2 with mutual authentication, 1.3, and 1.3 with mutual authentication (mTLS-1.3). For comparison, we also analyze a model of the baseline scenario without SBP. We separately analyze each of our SBP models from two perspectives: client and proxy. In each SBP model, the client has assurance that the cookie is valid only for the client's legitimate session. Only in mTLS-1.3 does the proxy have assurance that it communicates with a legitimate client and that the client's cookie is valid. We formalize these results by stating and proving, or disproving, security goals for each model. SBP is useful because it provides a practical solution to the important challenge of protecting flawed legacy systems that cannot be patched. Our analysis of this obscure protocol offers insight into the properties necessary for wrapper protocols to resist a Dolev-Yao adversary. When engineering wrapper protocols, designers must carefully consider authentication, freshness, and the requirements of cryptographic bindings such as channel bindings. Our work exposes strengths and limitations of wrapper protocols and TLS channel bindings.
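An illustrative channel-bound cookie in the spirit of SBP, not its exact construction: the proxy MACs the cookie together with a value derived from the TLS channel under a key only it knows, so a stolen cookie fails verification when replayed over a different channel. The key, cookie, and channel-binding values below are placeholders.

```python
# Channel-binding sketch: the proxy ties a cookie to one TLS channel with an
# HMAC, so replaying the cookie over another channel fails verification.
import hmac
import hashlib

PROXY_KEY = b"proxy-secret-key"          # known only to the reverse proxy

def bind_cookie(cookie: bytes, channel_binding: bytes) -> bytes:
    tag = hmac.new(PROXY_KEY, cookie + channel_binding, hashlib.sha256).digest()
    return cookie + b"." + tag.hex().encode()

def verify_cookie(bound: bytes, channel_binding: bytes) -> bool:
    cookie, _, tag_hex = bound.partition(b".")
    expected = hmac.new(PROXY_KEY, cookie + channel_binding, hashlib.sha256).digest()
    return hmac.compare_digest(bytes.fromhex(tag_hex.decode()), expected)

bound = bind_cookie(b"session=abc123", channel_binding=b"tls-channel-A")
print(verify_cookie(bound, b"tls-channel-A"))   # True: same channel
print(verify_cookie(bound, b"tls-channel-B"))   # False: replay on another channel
```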
-
Reverse proxy servers play a critical role in optimizing Internet services, offering benefits ranging from load balancing to Denial of Service (DoS) protection. A known shortcoming of such proxies is that the backend server becomes oblivious to the IP address of the client who initiated the connection since all requests are forwarded by the proxy server. For HTTP, this issue is trivially solved by the X-Forwarded-For header, which allows the proxy server to pass to the backend server the IP address of the client that originated the request. Unfortunately, no such equivalent exists for many other protocols. To solve this issue, HAProxy created the PROXY protocol, which communicates client information from a proxy server to a backend server at a lower level in the network stack (Layer 4), making it protocol agnostic. In this work, we are the first to study the use of the PROXY protocol at Internet scale and investigate the security impact of its misconfigurations. We launched a measurement study on the full IPv4 address range and found that, over HTTP, more than 170,000 hosts accept PROXY protocol data from arbitrary sources. We demonstrate how to abuse this protocol to bypass on-path proxies (and their protections) and leak sensitive information from backend infrastructures. We discovered over 10,000 servers that are vulnerable to an access bypass, triggered by injecting a (spoofed) PROXY protocol header. Using this technique, we obtained access to over 500 internal servers providing control over IoT monitoring platforms and smart home automation devices, allowing us to, for example, regulate remote-controlled window blinds or control security cameras and alarm systems. Beyond HTTP, we demonstrate how the PROXY protocol can be used to turn over 350 SMTP servers into open relays, enabling an attacker to send arbitrary emails from any email address. In sum, our study exposes how PROXY protocol misconfigurations lead to severe security issues that affect multiple protocols prominently used in the wild.
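A sketch of the human-readable PROXY protocol v1 header that such misconfigurations expose: a backend that accepts this line from arbitrary sources treats the attacker-chosen source address as the real client IP. The host name and addresses below are placeholders from documentation ranges, and the network call is left commented out.

```python
# PROXY protocol v1 header prepended to a plain HTTP request. Only relevant
# against a backend you control or are authorized to test.
import socket

def proxy_v1_header(src_ip, dst_ip, src_port, dst_port):
    # Format defined by HAProxy: "PROXY TCP4 <src> <dst> <sport> <dport>\r\n"
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode()

def send_spoofed_request(host, port, spoofed_ip):
    request = proxy_v1_header(spoofed_ip, "203.0.113.10", 4242, port)
    request += b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request)
        return sock.recv(4096)

# Example (against a test server you control):
# print(send_spoofed_request("backend.example", 80, spoofed_ip="198.51.100.1"))
```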