


Award ID contains: 1650589

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. This Article develops a framework for both assessing and designing content moderation systems consistent with public values. It argues that moderation should not be understood as a single function, but as a set of subfunctions common to all content governance regimes. By identifying the particular values implicated by each of these subfunctions, it explores how the constituent tasks might best be allocated: specifically, to which actors (public or private, human or technological) they might be assigned, and what constraints or processes might be required in their performance. This analysis can facilitate the evaluation and design of content moderation systems to ensure the capacity and competencies necessary for legitimate, distributed systems of content governance.

     Through a combination of methods, legal schemes delegate at least a portion of the responsibility for governing online expression to private actors. Some statutory schemes assign regulatory tasks explicitly; in others, the delegation occurs implicitly, with little guidance as to how the treatment of content should be structured. In the law's shadow, online platforms are largely given free rein to configure the governance of expression. Legal scholarship has surfaced important concerns about the private sector's role in content governance. In response, private platforms engaged in content moderation have adopted structures that mimic public governance forms. Yet we largely lack the means to measure whether these forms are substantive, effectively infusing public values into the content moderation process, or merely symbolic artifice designed to deflect much-needed public scrutiny.

     This Article's proposed framework addresses that gap in two ways. First, the framework considers together all manner of legal regimes that induce platforms to engage in the function of content moderation. Second, it focuses on the shared set of specific tasks, or subfunctions, involved in the content moderation function across these regimes. Examining a broad range of content moderation regimes together highlights the distinct common tasks and decision points that together constitute the content moderation function. Focusing on this shared set of subfunctions highlights the different values implicated by each and the ways each can be "handed off" to human and technical actors, with varying normative and political implications.

     This Article identifies four key content moderation subfunctions: (1) definition of policies, (2) identification of potentially covered content, (3) application of policies to specific cases, and (4) resolution of those cases. Using these four subfunctions supports a rigorous analysis of how to leverage the capacities and competencies of government and private parties throughout the content moderation process. Such attention also highlights how the exercise of that power can be constrained, whether by requiring particular decision-making processes or by limiting the use of automation, in ways that further address normative concerns. Dissecting the allocation of subfunctions in various content moderation regimes reveals the distinct ethical and political questions that arise in alternate configurations. Specifically, it offers a way to think about four key questions: (1) what values are most at issue regarding each subfunction; (2) which activities might be more appropriate to delegate to particular public or private actors; (3) which constraints must be attached to the delegation of each subfunction; and (4) where investments in shared content moderation infrastructures can support relevant values. The functional framework thus provides a means both for evaluating the symbolic legal forms that firms have constructed in service of content moderation and for designing processes that better reflect public values.
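     To make the allocation idea concrete, here is a minimal, hypothetical Python sketch of the four subfunctions as a pipeline whose stages can each be handed to a different kind of actor. All names and the example constraints below are invented for illustration; they are not drawn from the Article.

         # Hypothetical model of the Article's four content moderation
         # subfunctions, each assignable to a different kind of actor.
         from dataclasses import dataclass
         from enum import Enum

         class Actor(Enum):
             PUBLIC_HUMAN = "public body"      # e.g., a legislature or regulator
             PRIVATE_HUMAN = "platform staff"  # e.g., policy teams, reviewers
             AUTOMATED = "technical system"    # e.g., a matching tool or classifier

         @dataclass
         class Allocation:
             definition: Actor      # (1) who defines the policies
             identification: Actor  # (2) who flags potentially covered content
             application: Actor     # (3) who applies policy to specific cases
             resolution: Actor      # (4) who resolves cases (appeals, remedies)

             def constraints(self) -> list:
                 """Illustrative constraints triggered by particular handoffs."""
                 issues = []
                 if self.definition is Actor.AUTOMATED:
                     issues.append("policy definition calls for public deliberation")
                 if self.application is Actor.AUTOMATED:
                     issues.append("automated application needs human review on appeal")
                 return issues

         # One common configuration: law defines, machines flag, platform staff
         # decide and resolve.
         notice_and_takedown = Allocation(
             definition=Actor.PUBLIC_HUMAN,
             identification=Actor.AUTOMATED,
             application=Actor.PRIVATE_HUMAN,
             resolution=Actor.PRIVATE_HUMAN,
         )
         print(notice_and_takedown.constraints())

     Varying the four assignments is one way to see the Article's point that each handoff raises its own value questions and calls for its own constraints.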
  2. User experience (UX) professionals' attempts to address social values as a part of their work practice can overlap with tactics to contest, resist, or change the companies they work for. This paper studies tactics that take place in this overlap, where UX professionals try to reshape the values embodied and promoted by their companies, in addition to the values embodied and promoted in the technical systems and products that their companies produce. Through interviews with UX professionals working at large U.S.-based technology companies and observations at UX meetup events, this paper identifies tactics used towards three goals: (1) creating space for UX expertise to address values; (2) making values visible and relevant to other organizational stakeholders; and (3) changing organizational processes and orientations towards values. This paper analyzes these as tactics of resistance: UX professionals seek to subvert or change existing practices and organizational structures towards more values-conscious ends. Yet these tactics of resistance often rely on the dominant discourses and logics of the technology industry. The paper characterizes them as partial or "soft" tactics, but argues that they nevertheless hold possibilities for enacting values-oriented changes.
  3. Multiple methods have been used to study how social values and ethics are implicated in technology design and use, including empirical qualitative studies of technologists' work. Recently, more experimental approaches such as design fiction have explored these themes through fictional worldbuilding. This paper combines these approaches by adapting design fictions as a form of memoing, a qualitative analysis technique. The paper uses design fiction memos to analyze and reflect on ethnographic interviews and observational data about how user experience (UX) professionals at large technology companies engage with values and ethical issues in their work. The design fictions help explore and articulate themes about the values work practices and relationships of power that UX professionals grapple with. Through these fictions, the paper contributes a case study showing how design fiction can be used for qualitative analysis, and provides insights into the role of organizational and power dynamics in UX professionals' values work.
  4. This paper presents Timelines, a design activity to assist values advocates: people who help others recognize values and ethical concerns as relevant to technical practice. Rather than integrating seamlessly into existing design processes, Timelines aims to create a space for critical reflection and contestation among expert participants (such as technology researchers, practitioners, or students) and a values advocate facilitator, in order to surface the importance and relevance of values and ethical concerns. The activity's design is motivated by theoretical perspectives from design fiction, scenario planning, and value sensitive design. The activity helps participants surface discussion of broad societal-level changes related to a technology by creating stories from news headlines, and recognize a diversity of experiences situated in the everyday by creating social media posts from different viewpoints. We reflect on how decisions about the activity's design and facilitation enable it to assist in values advocacy practices.
  5. At every level of government, officials contract for technical systems that employ machine learning: systems that perform tasks without using explicit instructions, relying instead on patterns and inference. These systems frequently displace discretion previously exercised by policymakers or individual front-end government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. However, because agencies acquire these systems through government procurement processes, they and the public have little input into, or even knowledge about, their design or how well that design aligns with public goals and values.

     This Article explains the ways that the decisions about goals, values, risk, and certainty, along with the elimination of case-by-case discretion, inherent in machine-learning system design create policies, not just once when the systems are designed, but over time as they adapt and change. When the adoption of these systems is governed by procurement, the policies they embed receive little or no agency or outside expertise beyond that provided by the vendor. Design decisions are left to private third-party developers. There is no public participation, no reasoned deliberation, and no factual record, which abdicates government responsibility for policymaking.

     This Article then argues for a move from a procurement mindset to a policymaking mindset. When policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that foster deliberation reflecting both technocratic demands for reason and rationality informed by expertise, and democratic demands for public participation and political accountability. Specifically, the Article proposes administrative law as the framework to guide government adoption of machine learning, describing specific ways that the policy choices embedded in machine-learning system design fail the prohibition against arbitrary and capricious agency actions absent a reasoned decision-making process that both enlists the expertise necessary for reasoned deliberation about, and justification for, such choices, and makes visible the political choices being made.

     Finally, this Article sketches models for machine-learning adoption processes that satisfy the prohibition against arbitrary and capricious agency actions. It explores processes by which agencies might garner technical expertise and overcome problems of system opacity, satisfying administrative law's technocratic demand for reasoned expert deliberation. It further proposes both institutional and engineering design solutions to the challenge of policymaking opacity, offering process paradigms to ensure the "political visibility" required for public input and political oversight. In doing so, it also proposes the use of "contestable design": design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law's technocratic and democratic mandates.
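     The Article's closing proposal of "contestable design" suggests a simple illustration: value-laden parameters are made explicit, documented, and revisable only on the record, rather than buried in vendor code. The following Python sketch is purely hypothetical; the parameter, numbers, and rationale strings are invented, not taken from the Article.

         # Hypothetical sketch of a "contestable" parameter: the value-laden
         # choice is explicit, carries a stated rationale, and keeps a history
         # of revisions so the policy choice stays visible and reviewable.
         from dataclasses import dataclass, field

         @dataclass
         class ContestableParameter:
             name: str
             value: float
             rationale: str               # the stated justification on record
             history: list = field(default_factory=list)

             def revise(self, new_value: float, justification: str) -> None:
                 """Allow changes only with an on-the-record justification."""
                 self.history.append((self.value, self.rationale))
                 self.value, self.rationale = new_value, justification

         # A benefits-eligibility model's decision threshold is a policy choice:
         # moving it trades false denials against false grants.
         cutoff = ContestableParameter(
             name="eligibility_score_cutoff",
             value=0.7,
             rationale="Vendor default, pending agency deliberation.",
         )
         cutoff.revise(0.6, "Public comments favored fewer false denials.")
         print(cutoff.value, cutoff.history)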
  6. In calls for privacy by design (PBD), regulators and privacy scholars have investigated the richness of the concept of "privacy." In contrast, "design" in HCI comprises rich and complex concepts and practices, but has received much less attention in the PBD context. Through a literature review of HCI publications discussing privacy and design, this paper articulates a set of dimensions along which design relates to privacy, including the purpose of design, the actors who do design work in these settings, and the envisioned beneficiaries of design work. We suggest new roles for HCI and design in PBD research and practice: utilizing values- and critically-oriented design approaches to foreground social values and help define privacy problem spaces. We argue that such approaches, in addition to current "design to solve privacy problems" efforts, are essential to the full realization of PBD, while noting the politics involved when choosing design to address privacy.
  7. Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with its own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data, and we develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points (for example, specific locations) in others' training data, i.e., membership inference. Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, it can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
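     One concrete source of such leakage in models with embedding layers is that a gradient update touches only the embedding rows of tokens that actually appear in a participant's batch, so an observer of the shared update can read off which tokens were used. Below is a toy numpy sketch of that observation; the stand-in loss and data are invented for illustration and are not the paper's experimental setup.

         # Toy demonstration: an embedding-layer update reveals which token ids
         # a participant trained on, because only those rows are updated.
         import numpy as np

         rng = np.random.default_rng(0)
         vocab_size, dim = 1000, 16
         embedding = rng.normal(size=(vocab_size, dim))

         # The participant's private batch: token ids used in this round.
         batch_tokens = np.array([12, 407, 12, 999])

         # A stand-in loss whose gradient touches only the rows in the batch
         # (duplicates accumulate, as with real embedding gradients).
         grad = np.zeros_like(embedding)
         np.add.at(grad, batch_tokens, embedding[batch_tokens])
         observed_update = -0.01 * grad  # what an adversary sees being shared

         # The nonzero rows of the shared update leak the batch's vocabulary.
         leaked_tokens = np.unique(np.nonzero(observed_update)[0])
         print(leaked_tokens)  # [ 12 407 999]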
  8. According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters — sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life, etc.), the values of these parameters range over contextually meaningful ontologies — of information types (or topics) and actors (subjects, senders, and recipients), in contextually defined capacities. As an alternative to predominant approaches to privacy, which were ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and to provide grounds for either accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. With motion up the chain, where higher-order data is inferred from lower-order data, the crucial question is whether privacy norms governing the lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as the digital impulses of mouse clicks, motion detectors, and bare GPS coordinates, because they appear to have no meaning. Absent a semantics, they escape CI's privacy norms entirely.
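     CI's five-parameter structure has a natural computational reading: an information flow is a five-tuple, and a contextual norm is a predicate that either sanctions a flow or marks it as a disruption. The following Python sketch is schematic; the health-context norm shown is an invented example, not one given in the Article.

         # Schematic encoding of contextual integrity: a flow is a 5-tuple and
         # a contextual norm is a predicate over flows.
         from dataclasses import dataclass

         @dataclass(frozen=True)
         class Flow:
             sender: str
             recipient: str
             subject: str
             info_type: str
             transmission_principle: str

         # Invented example norm for the health context: a physician may share
         # a patient's diagnosis with a specialist, given the patient's consent.
         def health_norm(flow: Flow) -> bool:
             return (flow.sender == "physician"
                     and flow.recipient == "specialist"
                     and flow.info_type == "diagnosis"
                     and flow.transmission_principle == "with patient consent")

         flow = Flow("physician", "ad network", "patient", "diagnosis",
                     "sold without consent")
         print("sanctioned" if health_norm(flow) else "disrupts contextual norms")

     On this reading, the Article's hardest case, data primitives, corresponds to flows whose info_type carries no contextual meaning at all, so no such predicate can apply to them.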