Search for: All records

Creators/Authors contains: "Scheffler, Sarah"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. There are two strategic and longstanding questions about cyber risk that organizations largely have been unable to answer: What is an organization's estimated risk exposure, and how does its security compare with peers? Answering both requires industry-wide data on security posture, incidents, and losses that, until recently, have been too sensitive for organizations to share. Now, privacy-enhancing technologies (PETs) such as cryptographic computing can enable the secure computation of aggregate cyber risk metrics from a peer group of organizations while leaving sensitive input data undisclosed. As these new aggregate data become available, analysts need ways to integrate them into cyber risk models that can produce more reliable risk assessments and allow comparison to a peer group. This paper proposes a new framework for benchmarking cyber posture against peers and estimating cyber risk within specific economic sectors using the new variables emerging from secure computations. We introduce a new top-line variable, the Defense Gap Index, representing the weighted security gap between an organization and its peers, which can be used to forecast an organization's own security risk based on historical industry data. We apply this approach in a specific sector using data collected from 25 large firms, in partnership with an industry ISAO, to build an industry risk model and provide tools back to participants to estimate their own risk exposure and privately compare their security posture with their peers.
    Free, publicly-accessible full text available August 22, 2025
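    The abstract does not spell out the Defense Gap Index formula, so the following Python sketch only illustrates the general idea of a weighted gap against a peer benchmark; the control names, scores, and weights are hypothetical.

```python
# Illustrative sketch only: a weighted "defense gap" between one organization's
# control scores and a peer-group benchmark. The control names, scores, weights,
# and aggregation rule below are hypothetical, not the paper's actual
# Defense Gap Index definition.

def defense_gap_index(own_scores, peer_medians, weights):
    """Weighted sum of shortfalls relative to the peer benchmark (0 = at or above peers)."""
    gap = 0.0
    for control, weight in weights.items():
        shortfall = max(0.0, peer_medians[control] - own_scores[control])
        gap += weight * shortfall
    return gap

own = {"mfa_coverage": 0.60, "patch_latency": 0.40, "backup_testing": 0.90}
peers = {"mfa_coverage": 0.80, "patch_latency": 0.70, "backup_testing": 0.80}
weights = {"mfa_coverage": 0.5, "patch_latency": 0.3, "backup_testing": 0.2}

print(f"Defense gap vs. peers: {defense_gap_index(own, peers, weights):.2f}")
```

    In the setting described in the abstract, the peer benchmark values would be produced by the secure aggregate computation, so no individual firm's raw inputs are disclosed.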
  2. If a court knows that a respondent knows the password to a device, can the court compel the respondent to enter that password into the device? In this work, we propose a new approach to the foregone conclusion doctrine from Fisher v. U.S. that governs the answer to this question. The Holy Grail of this line of work would be a framework for reasoning about whether the testimony implicit in any action is already known to the government. In this paper we attempt something narrower. We introduce a framework for specifying actions for which all implicit testimony is, constructively, a foregone conclusion. Our approach is centered on placing the burden of proof on the government to demonstrate that it is not “rely[ing] on the truthtelling” of the respondent. Building on original legal analysis and using precise computer science formalisms, we propose demonstrability as a new central concept for describing compelled acts. We additionally provide a language for expressing whether a compelled action meaningfully requires the respondent to perform in a manner that is “as good as” achieving the government’s desired goal. Then, we apply our definitions to analyze the compellability of several cryptographic primitives, including decryption, multifactor authentication, commitment schemes, and hash functions. In particular, our framework reaches a novel conclusion about compelled decryption in the setting where the encryption scheme is deniable: the government can compel, but the respondent is free to use any password of her choice.
  3. A central notion in U.S. copyright law is judging the substantial similarity between an original and an (allegedly) derived work. Capturing this notion has proven elusive, and the many approaches offered by case law and legal scholarship are often ill-defined, contradictory, or internally inconsistent. This work suggests that key parts of the substantial-similarity puzzle are amenable to modeling inspired by theoretical computer science. Our proposed framework quantitatively evaluates how much "novelty" is needed to produce the derived work with access to the original work, versus reproducing it without access to the copyrighted elements of the original work. "Novelty" is captured by a computational notion of description length, in the spirit of Kolmogorov-Levin complexity, which is robust to mechanical transformations and the availability of contextual information. This results in an actionable framework that could be used by courts as an aid for deciding substantial similarity. We evaluate it on several pivotal cases in copyright law and observe that the results are consistent with the rulings, and are philosophically aligned with the abstraction-filtration-comparison test of Altai.
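    As a rough, unofficial illustration of the "novelty with versus without access" idea, the sketch below uses off-the-shelf compression as a crude stand-in for description length; compression only weakly approximates Kolmogorov-Levin-style complexity, and the example texts are made up.

```python
# Crude illustration: approximate "description length of the derived work given
# the original" by compressing the derived work with the original prepended as
# context. Compression is only a weak proxy for Kolmogorov-style description
# length; this is not the paper's actual test, just the flavor of it.
import zlib

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def novelty_given(derived: bytes, original: bytes) -> int:
    # Extra cost of describing the derived work once the original is in hand,
    # approximated as C(original + derived) - C(original).
    return compressed_len(original + derived) - compressed_len(original)

original = b"A long original passage about a wizard school. " * 40
close_copy = original + b"The ending was changed slightly."
independent = b"An unrelated treatise on fluid dynamics and pipe flow. " * 40

print("novelty of close copy   :", novelty_given(close_copy, original))
print("novelty of independent  :", novelty_given(independent, original))
print("standalone cost of copy :", compressed_len(close_copy))
```

    A small "novelty given the original" relative to the standalone cost suggests the derived work leans heavily on the original, which is the intuition the framework formalizes.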
  4.
    The information security community has devoted substantial effort to the design, development, and universal deployment of strong encryption schemes that withstand search and seizure by computationally powerful nation-state adversaries. In response, governments are increasingly turning to a different tactic: issuing subpoenas that compel people to decrypt devices themselves, under penalty of contempt of court if they do not comply. Compelled decryption subpoenas sidestep questions around government search powers that have dominated the Crypto Wars and instead touch upon a different (and still unsettled) area of the law: how encryption relates to a person's right to silence and against self-incrimination. In this work, we provide a rigorous, composable definition of a critical piece of the law that determines whether cryptosystems are vulnerable to government-compelled disclosure in the United States. We justify our definition by showing that it is consistent with prior court cases. We prove that decryption is often not compellable by the government under our definition. Conversely, we show that many techniques that bolster security overall can leave one more vulnerable to compelled disclosure. As a result, we initiate the study of protecting cryptographic protocols against the threat of future compelled disclosure. We find that secure multi-party computation is particularly vulnerable to this threat, and we design and implement new schemes that are provably resilient in the face of government-compelled disclosure. We believe this work should influence the design of future cryptographic primitives and contribute toward the legal debates over the constitutionality of compelled decryption.
  5.
    We provide a modified version of the Ligero sublinear zero-knowledge proof system for arithmetic circuits by Ames et al. (CCS '17). Our modification, "BooLigero," tailors Ligero for use in Boolean circuits to achieve a significant improvement in proof size. Although the original Ligero system could be used for Boolean circuits, Ligero generally requires allocating an entire field element to represent a single bit on a wire in a Boolean circuit. In contrast, our system performs operations over words of bits, allowing a proof size savings of between O(log(|F|)^(1/4)) and O(log(|F|)^(1/2)) compared to Ligero, where F is the field that leads to the optimal proof size in the original Ligero. We achieve improvements in proof size of approximately 1.1-1.6x for SHA-2 and 1.7-2.8x for SHA-3. In addition to checking constraints of standard Boolean operations such as AND, XOR, and NOT over words, BooLigero also supports several other constraints such as multiplication in GF(2^w), bit masking, bit rearrangement within and across words, and bitwise outer product. Like Ligero, our construction requires no trusted setup and no computational assumptions, which is ideal for blockchain applications. It is plausibly post-quantum secure in the standard model. Furthermore, it is public-coin, perfect honest-verifier zero knowledge, and can be made non-interactive in the random oracle model using the Fiat-Shamir transform.
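    The intuition behind the proof-size saving can be illustrated with a toy element count; the parameters below are made up and this is not the actual proof system. Per-bit allocation spends one field element per Boolean wire, while packing w bits into a word lets a word-wise constraint such as XOR cost a single element.

```python
# Toy accounting sketch (not the real proof system): compare how many field
# elements are consumed to represent an XOR constraint over n Boolean wires
# when each bit gets its own field element versus when w bits are packed into
# one word. All parameters are illustrative.

def elements_bit_per_field(n_bits: int) -> int:
    # One field element per Boolean wire bit.
    return n_bits

def elements_packed(n_bits: int, word_bits: int) -> int:
    # Bits are packed word_bits at a time; XOR of packed words is a single
    # word-wise operation, so one element per word.
    return (n_bits + word_bits - 1) // word_bits

n = 4096   # Boolean wires in some layer of a hash circuit (made up)
w = 64     # packed word size (made up)
print("one element per bit :", elements_bit_per_field(n))
print("packed, w-bit words :", elements_packed(n, w))

# Sanity check: XOR of packed words agrees with XOR taken bit by bit.
a, b = 0b10110110, 0b01101100
bit_by_bit = sum(((a >> i & 1) ^ (b >> i & 1)) << i for i in range(8))
assert a ^ b == bit_by_bit
```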
  6. Sako, Kazue; Tippenhauer, Nils Ole (Ed.)
    In this work, we present a zero-knowledge argument for general arithmetic circuits that is public-coin and constant-round, so it can be made non-interactive and publicly verifiable with the Fiat-Shamir heuristic. The construction is based on the MPC-in-the-head paradigm, in which the prover jointly emulates all MPC protocol participants and can provide advice in the form of Beaver triples whose accuracy must be checked by the verifier. Our construction follows the Beaver triple sacrificing approach used by Baum and Nof [PKC 2020]. Our improvements reduce the communication per multiplication gate from 4 to 2 field elements, matching the performance of the cut-and-choose approach taken by Katz, Kolesnikov, and Wang [CCS 2018], with lower additive overhead for some parameter settings. We implement our protocol and analyze its cost on Picnic-style post-quantum digital signatures based on the AES family of circuits.
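    To make the triple-sacrificing idea concrete, here is a minimal Python sketch of the underlying algebraic check over a prime field, with the secret sharing, commitments, and MPC-in-the-head emulation stripped away; it is not the Baum-Nof protocol (or the one in this paper) verbatim, only the textbook check in which one triple is verified by consuming another.

```python
# Minimal sketch of the Beaver triple "sacrificing" check over a prime field.
# Secret sharing, commitments, and the MPC-in-the-head emulation are omitted;
# only the algebra is shown: the triple (a, b, c) is checked by consuming an
# auxiliary triple (x, y, z) under a random verifier challenge eps.
import secrets

P = 2**61 - 1  # prime modulus; an illustrative choice, not a protocol parameter

def random_triple(correct: bool = True):
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    c = (a * b) % P if correct else (a * b + 1) % P
    return a, b, c

def sacrifice_check(triple, aux, eps):
    (a, b, c), (x, y, z) = triple, aux
    rho = (eps * a - x) % P    # value the prover would open
    sigma = (b - y) % P        # value the prover would open
    # Identity holds exactly when both triples are multiplicative:
    #   eps*c - z - rho*y - sigma*x - rho*sigma == 0  (mod P)
    return (eps * c - z - rho * y - sigma * x - rho * sigma) % P == 0

good, aux = random_triple(), random_triple()
bad = random_triple(correct=False)
eps = secrets.randbelow(P - 1) + 1   # nonzero challenge

assert sacrifice_check(good, aux, eps)      # a correct triple passes
assert not sacrifice_check(bad, aux, eps)   # an incorrect triple is caught
```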
  7. Recent developments in online tracking make it harder for individuals to detect and block trackers. Some sites have deployed indirect tracking methods, which attempt to uniquely identify a device by asking the browser to perform a seemingly unrelated task. One type of indirect tracking, Canvas fingerprinting, causes the browser to render a graphic and records the rendering output as a unique identifier. In this work, we observe how indirect device fingerprinting methods are disclosed in privacy policies, and consider whether the disclosures are sufficient to enable website visitors to block the tracking methods. We compare these disclosures to the disclosure of direct fingerprinting methods on the same websites. Our case study analyzes one indirect fingerprinting technique, Canvas fingerprinting. We use an existing automated detector of this fingerprinting technique to conservatively detect its use on Alexa Top 500 websites that cater to United States consumers, and we examine the privacy policies of the resulting 28 websites. Disclosures of indirect fingerprinting vary in specificity. None described the specific methods with enough granularity to know that the website used Canvas fingerprinting. Conversely, many sites did provide enough detail about their usage of direct fingerprinting methods to allow a website visitor to reliably detect and block those techniques. We conclude that indirect fingerprinting methods are often difficult to detect and are not identified with specificity in privacy policies. This makes indirect fingerprinting more difficult to block and therefore risks disturbing the tentative armistice between individuals and websites currently in place for direct fingerprinting. This paper illustrates differences in fingerprinting approaches and explains why technologists, technology lawyers, and policymakers need to appreciate the challenges of indirect fingerprinting.
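    For readers unfamiliar with the mechanism, the following Python/Pillow snippet is an analogy only; the real technique is JavaScript drawing to a <canvas> element and reading the pixels back (for example via toDataURL). It mimics the core idea of Canvas fingerprinting: render a graphic, then hash the resulting pixels into an identifier. The library, text, and shapes are arbitrary choices.

```python
# Analogy only: real Canvas fingerprinting runs in browser JavaScript against a
# <canvas> element. This sketch (requires Pillow: pip install pillow) renders a
# graphic and hashes the raw pixel bytes into an identifier. Subtle device-
# specific differences in fonts, anti-aliasing, and rasterization change the
# hash, which is what makes the technique identifying.
import hashlib
from PIL import Image, ImageDraw

img = Image.new("RGB", (300, 60), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 20), "How quickly daft jumping zebras vex.", fill="black")
draw.ellipse((200, 10, 280, 50), outline="blue")

fingerprint = hashlib.sha256(img.tobytes()).hexdigest()
print("rendering hash:", fingerprint[:16], "...")
```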