

Search for: All records

Award ID contains: 1704527

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Optimization is offered as an objective approach to resolving complex, real-world decisions involving uncertainty and conflicting interests. It drives business strategies as well as public policies and, increasingly, lies at the heart of sophisticated machine learning systems. A paradigm used to approach potentially high-stakes decisions, optimization relies on abstracting the real world to a set of decision(s), objective(s) and constraint(s). Drawing from the modeling process and a range of actual cases, this paper describes the normative choices and assumptions that are necessarily part of using optimization. It then identifies six emergent problems that may be neglected: 1) Misspecified values can yield optimizations that omit certain imperatives altogether or incorporate them incorrectly as a constraint or as part of the objective, 2) Problematic decision boundaries can lead to faulty modularity assumptions and feedback loops, 3) Failing to account for multiple agents’ divergent goals and decisions can lead to policies that serve only certain narrow interests, 4) Mislabeling and mismeasurement can introduce bias and imprecision, 5) Faulty use of relaxation and approximation methods, unaccompanied by formal characterizations and guarantees, can severely impede applicability, and 6) Treating optimization as a justification for action, without specifying the necessary contextual information, can lead to ethically dubious or faulty decisions. Suggestions are given to further understand and curb the harms that can arise when optimization is used wrongfully.
    Free, publicly-accessible full text available June 12, 2024
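    The abstraction described in the entry above, reducing a real-world problem to decisions, objectives, and constraints, corresponds to the standard mathematical-programming template; a minimal illustration (not taken from the paper itself) is:

    ```latex
    % Generic optimization template: decision variables x, objective f,
    % inequality constraints g_i, equality constraints h_j.
    \begin{align*}
    \min_{x \in \mathbb{R}^n} \quad & f(x) \\
    \text{subject to} \quad & g_i(x) \le 0, \quad i = 1, \dots, m, \\
                            & h_j(x) = 0, \quad j = 1, \dots, p.
    \end{align*}
    ```

    The paper's first problem, misspecified values, concerns precisely where an imperative lands in this template: in the objective f, as a constraint g_i, or nowhere at all.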
  2. Privacy technologies support the provision of online services while protecting user privacy. Cryptography lies at the heart of many such technologies, creating remarkable possibilities in terms of functionality while offering robust guarantees of data confidentiality. The cryptography literature and discourse often represent that these technologies eliminate the need to trust service providers, i.e., they enable users to protect their privacy even against untrusted service providers. Despite their apparent promise, privacy technologies have seen limited adoption in practice, and the most successful ones have been implemented by the very service providers these technologies purportedly protect users from. The adoption of privacy technologies by supposedly adversarial service providers highlights a mismatch between traditional models of trust in cryptography and the trust relationships that underlie deployed technologies in practice. Yet this mismatch, while well known to the cryptography and privacy communities, remains relatively poorly documented and examined in the academic literature, let alone broader media. This paper aims to fill that gap. Firstly, we review how the deployment of cryptographic technologies relies on a chain of trust relationships embedded in the modern computing ecosystem, from the development of software to the provision of online services, that is not fully captured by traditional models of trust in cryptography. Secondly, we turn to two case studies, web search and encrypted messaging, to illustrate how, rather than removing trust in service providers, cryptographic privacy technologies shift trust to a broader community of security and privacy experts and others, which in turn enables service providers to implicitly build and reinforce their trust relationship with users. Finally, concluding that the trust models inherent in the traditional cryptographic paradigm elide certain key trust relationships underlying deployed cryptographic systems, we highlight the need for organizational, policy, and legal safeguards to address that mismatch, and suggest some directions for future work.
  3. Crowdsourcing technologies rely on groups of people to input information that may be critical for decision-making. This work examines obfuscation in the context of reporting technologies. We show that widespread use of reporting platforms comes with unique security and privacy implications, and introduce a threat model and corresponding taxonomy to outline some of the many attack vectors in this space. We then perform an empirical analysis of a dataset of call logs from a controversial, real-world reporting hotline and identify coordinated obfuscation strategies that are intended to hinder the platform's legitimacy. We propose a variety of statistical measures to quantify the strength of this obfuscation strategy with respect to the structural and semantic characteristics of the reporting attacks in our dataset. 
  4. In 1996, philosopher Helen Nissenbaum issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Using the conceptual framing of moral blame, Nissenbaum described four types of barriers to accountability that computerization presented: 1) “many hands,” the problem of attributing moral responsibility for outcomes caused by many moral actors; 2) “bugs,” a way software developers might shrug off responsibility by suggesting software errors are unavoidable; 3) “computer as scapegoat,” shifting blame to computer systems as if they were moral actors; and 4) “ownership without liability,” a free pass to the tech industry to deny responsibility for the software they produce. We revisit these four barriers in relation to the recent ascendance of data-driven algorithmic systems — technology often folded under the heading of machine learning (ML) or artificial intelligence (AI) — to uncover the new challenges for accountability that these systems present. We then look ahead to how one might construct and justify a moral, relational framework for holding responsible parties accountable, and argue that the FAccT community is uniquely well-positioned to develop such a framework to weaken the four barriers.
  5. We present an empirical study exploring how privacy influences the acceptance of vaccination certificate (VC) deployments across different realistic usage scenarios. The study employed the privacy framework of Contextual Integrity, which has been shown to be particularly effective in capturing people’s privacy expectations across different contexts. We use a vignette methodology, where we selectively manipulate salient contextual parameters to learn whether and how they affect people’s attitudes towards VCs. We surveyed 890 participants from a demographically-stratified sample of the US population to gauge the acceptance and overall attitudes towards possible VC deployments to enforce vaccination mandates and the different information flows VCs might entail. Analysis of results collected as part of this study is used to derive general normative observations about different possible VC practices and to provide guidance for the possible deployments of VCs in different contexts. 
  6. Many high-stakes policies can be modeled as a sequence of decisions along a pipeline. We are interested in auditing such pipelines for both efficiency and equity. Our empirical focus is on policy decisions made by the New York City government. Using a dataset of over 100,000 crowdsourced resident requests for potentially hazardous tree maintenance in New York City, we observe a sequence of city government decisions about whether to inspect and work on a reported incident. At each decision in the pipeline, we define parity definitions and tests to identify inefficient, inequitable treatment. Disparities in resource allocation and scheduling across census tracts are reported as preliminary results.
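    To make the idea of a per-stage parity test concrete, the following is a minimal sketch, not the paper's actual methodology; the column names (`census_tract`, `inspected`) and the choice of a chi-square test are illustrative assumptions.

    ```python
    # Hypothetical sketch: test whether the inspection rate at one pipeline stage
    # differs across census tracts, using a chi-square test of independence.
    import pandas as pd
    from scipy.stats import chi2_contingency

    def inspection_parity_test(requests: pd.DataFrame):
        """Return per-tract inspection rates and a chi-square p-value."""
        # Contingency table: rows = census tracts, columns = inspected yes/no.
        table = pd.crosstab(requests["census_tract"], requests["inspected"])
        chi2, p_value, dof, _ = chi2_contingency(table)
        rates = requests.groupby("census_tract")["inspected"].mean()
        return rates, p_value

    if __name__ == "__main__":
        # Toy data standing in for the crowdsourced request log.
        toy = pd.DataFrame({
            "census_tract": ["A", "A", "A", "B", "B", "B", "B"],
            "inspected":    [1,   0,   1,   0,   0,   1,   0],
        })
        rates, p = inspection_parity_test(toy)
        print(rates)           # inspection rate per tract
        print(f"p = {p:.3f}")  # a small p suggests rates differ across tracts
    ```

    A parallel comparison of scheduling outcomes, for example time from report to inspection per tract, would follow the same pattern at the next stage of the pipeline.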
  8. Social media provides a critical communication platform for political figures, but also makes them easy targets for harassment. In this paper, we characterize users who adversarially interact with political figures on Twitter using mixed-method techniques. The analysis is based on a dataset of 400 thousand users' 1.2 million replies to 756 candidates for the U.S. House of Representatives in the two months leading up to the 2018 midterm elections. We show that among moderately active users, adversarial activity is associated with decreased centrality in the social graph and increased attention to candidates from the opposing party. When compared to users who are similarly active, highly adversarial users tend to engage in fewer supportive interactions with their own party's candidates and express negativity in their user profiles. Our results can inform the design of platform moderation mechanisms to support political figures countering online harassment. 
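    As an illustration of the graph measure mentioned above, the following is a sketch only, not the paper's analysis; the toy reply edges and the use of degree centrality are assumptions made for the example.

    ```python
    # Hypothetical sketch: build a directed reply graph (user -> candidate replied to)
    # and compute each node's degree centrality with networkx.
    import networkx as nx

    replies = [
        ("user1", "candidateA"),
        ("user1", "candidateB"),
        ("user2", "candidateA"),
        ("user3", "candidateA"),
        ("user3", "candidateC"),
    ]

    G = nx.DiGraph()
    G.add_edges_from(replies)

    # Degree centrality: fraction of other nodes each node is connected to.
    centrality = nx.degree_centrality(G)
    for user in ("user1", "user2", "user3"):
        print(user, round(centrality[user], 3))
    ```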