-
This Article develops a framework for both assessing and designing content moderation systems consistent with public values. It argues that moderation should be understood not as a single function but as a set of subfunctions common to all content governance regimes. By identifying the particular values implicated by each of these subfunctions, it explores how the constituent tasks might best be allocated: to which actors (public or private, human or technological) they might be assigned, and what constraints or processes might be required in their performance. This analysis can facilitate the evaluation and design of content moderation systems to ensure the capacity and competencies necessary for legitimate, distributed systems of content governance. Through a combination of methods, legal schemes delegate at least a portion of the responsibility for governing online expression to private actors. Some statutory schemes assign regulatory tasks explicitly; in others, the delegation occurs implicitly, with little guidance as to how the treatment of content should be structured. In the law's shadow, online platforms are largely given free rein to configure the governance of expression. Legal scholarship has surfaced important concerns about the private sector's role in content governance. In response, private platforms engaged in…
-
User experience (UX) professionals' attempts to address social values as a part of their work practice can overlap with tactics to contest, resist, or change the companies they work for. This paper studies tactics that take place in this overlap, where UX professionals try to re-shape the values embodied and promoted by their companies, in addition to the values embodied and promoted in the technical systems and products that their companies produce. Through interviews with UX professionals working at large U.S.-based technology companies and observations at UX meetup events, this paper identifies tactics used towards three goals: (1) creating space for UX expertise to address values; (2) making values visible and relevant to other organizational stakeholders; and (3) changing organizational processes and orientations towards values. This paper analyzes these as tactics of resistance: UX professionals seek to subvert or change existing practices and organizational structures towards more values-conscious ends. Yet, these tactics of resistance often rely on the dominant discourses and logics of the technology industry. The paper characterizes these as partial or "soft" tactics, but also argues that they nevertheless hold possibilities for enacting values-oriented changes.
-
Multiple methods have been used to study how social values and ethics are implicated in technology design and use, including empirical qualitative studies of technologists' work. More recently, experimental approaches such as design fiction have explored these themes through fictional worldbuilding. This paper combines these approaches by adapting design fictions as a form of memoing, a qualitative analysis technique. The paper uses design fiction memos to analyze and reflect on ethnographic interviews and observational data about how user experience (UX) professionals at large technology companies engage with values and ethical issues in their work. The design fictions help explore and articulate themes about the values work practices and relationships of power that UX professionals grapple with. Through these fictions, the paper contributes a case study showing how design fiction can be used for qualitative analysis, and provides insights into the role of organizational and power dynamics in UX professionals' values work.
-
This paper presents Timelines, a design activity to assist values advocates: people who help others recognize values and ethical concerns as relevant to technical practice. Rather than integrate seamlessly into existing design processes, Timelines aims to create a space for critical reflection and contestation among expert participants (such as technology researchers, practitioners, or students) and a values advocate facilitator, to surface the importance and relevance of values and ethical concerns. The activity's design is motivated by theoretical perspectives from design fiction, scenario planning, and value sensitive design. The activity helps participants surface discussion of broad societal-level changes related to a technology by creating stories from news headlines, and recognize a diversity of experiences situated in the everyday by creating social media posts from different viewpoints. We reflect on how decisions about the activity's design and facilitation enable it to assist in values advocacy practices.
-
At every level of government, officials contract for technical systems that employ machine learning: systems that perform tasks without explicit instructions, relying instead on patterns and inference. These systems frequently displace discretion previously exercised by policymakers or individual front-end government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. However, because agencies acquire these systems through government procurement processes, they and the public have little input into, or even knowledge about, their design or how well that design aligns with public goals and values. This Article explains how the decisions about goals, values, risk, and certainty inherent in machine-learning system design, along with the elimination of case-by-case discretion, create policies: not just once when the systems are designed, but over time as they adapt and change. When the adoption of these systems is governed by procurement, the policies they embed receive little or no agency or outside expertise beyond that provided by the vendor. Design decisions are left to private third-party developers. There is no public participation, no reasoned deliberation, and no factual record, which abdicates government responsibility for policymaking. This Article then argues for a move from a procurement mindset to a policymaking mindset. When…
-
In calls for privacy by design (PBD), regulators and privacy scholars have investigated the richness of the concept of "privacy." In contrast, "design" in HCI comprises rich and complex concepts and practices, but has received much less attention in the PBD context. Conducting a literature review of HCI publications discussing privacy and design, this paper articulates a set of dimensions along which design relates to privacy, including: the purpose of design, which actors do design work in these settings, and the envisioned beneficiaries of design work. We suggest new roles for HCI and design in PBD research and practice: utilizing values- and critically-oriented design approaches to foreground social values and help define privacy problem spaces. We argue such approaches, in addition to current "design to solve privacy problems" efforts, are essential to the full realization of PBD, while noting the politics involved when choosing design to address privacy.
-
Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
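To make the setting concrete, the sketch below (not the paper's implementation; the dataset, the gradient-alignment heuristic, and all names are hypothetical) shows participants exchanging averaged model updates while an adversarial observer compares a candidate record's gradient against the observed update to form a crude membership signal:

```python
import numpy as np

# Minimal sketch of collaborative learning with shared model updates and a
# simplified gradient-alignment membership signal. Illustrative only: this is
# NOT the paper's attack; it merely shows why updates carry information about
# individual training points.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logreg(w, X, y):
    # Gradient of the average logistic loss for weights w on dataset (X, y).
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

d = 10
w = np.zeros(d)

# Two honest participants, each with a private (synthetic) dataset.
X1, y1 = rng.normal(size=(50, d)), rng.integers(0, 2, 50).astype(float)
X2, y2 = rng.normal(size=(50, d)), rng.integers(0, 2, 50).astype(float)

target = X1[0]                    # candidate record actually in training data
non_member = rng.normal(size=d)   # candidate record not in training data

for step in range(20):
    # Each participant trains locally and shares its update (here, a gradient).
    g1 = grad_logreg(w, X1, y1)
    g2 = grad_logreg(w, X2, y2)
    update = (g1 + g2) / 2        # averaged update, visible to participants

    # Adversarial participant: compare the observed update against the
    # gradient each candidate record would induce. Higher alignment hints
    # that the candidate influenced the update (a crude, unreliable signal).
    s_member = np.dot(update, grad_logreg(w, target[None, :], np.array([y1[0]])))
    s_non = np.dot(update, grad_logreg(w, non_member[None, :], np.array([0.0])))

    w -= 0.5 * update             # joint model takes a gradient step

print("alignment, member candidate:     %.4f" % s_member)
print("alignment, non-member candidate: %.4f" % s_non)
```

The paper's actual membership and property inference attacks are considerably more sophisticated; this sketch only conveys the underlying leakage channel, namely that shared updates are a function of individual training records.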
-
According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters: sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life), the values of these parameters range over contextually meaningful ontologies of information types (or topics) and actors (subjects, senders, and recipients) in contextually defined capacities. As an alternative to predominant approaches to privacy, which proved ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and to provide grounds for either accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. Moving up the chain, where higher-order data is inferred from lower-order data, the crucial question is whether the privacy norms governing the lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as the digital impulses of mouse clicks, …
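To make the five-parameter structure concrete, here is a minimal sketch (hypothetical names and norm, not drawn from the Article) that encodes an information flow as a data structure and checks it against a contextual norm expressed as a predicate:

```python
from dataclasses import dataclass

# CI's five flow parameters as a data structure. The field names follow the
# theory; the example context, actors, and norm below are hypothetical.

@dataclass(frozen=True)
class InformationFlow:
    sender: str
    recipient: str
    subject: str
    information_type: str
    transmission_principle: str

def health_context_norm(flow: InformationFlow) -> bool:
    # Hypothetical norm for the health context: medical information may flow
    # to a physician only under a confidentiality transmission principle.
    return (flow.information_type == "medical condition"
            and flow.recipient == "physician"
            and flow.transmission_principle == "confidentiality")

flow = InformationFlow(
    sender="patient",
    recipient="physician",
    subject="patient",
    information_type="medical condition",
    transmission_principle="confidentiality",
)
print(health_context_norm(flow))  # True: this flow conforms to the norm
```

Under this framing, a disruptive practice is one that changes a parameter value (say, routing the same information type to an advertiser, or inferring it from lower-order data) so that the resulting flow no longer satisfies the context's norms.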