Title: Automated decision support technologies and the legal profession
A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice.

Through in-depth, semi-structured interviews with experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. In particular, they highlight concerns about the potential loss of professional agency and skill, limited understanding and thereby both over- and under-reliance on decision-support systems, and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics.

Based on our findings, we conclude that predictive coding tools—and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice—challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals—judges, law firms, and lawyers—to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, on third-party vendors and their technical experts. This system for choosing the technical systems upon which lawyers rely to make professional decisions—e.g., whether documents are responsive, or whether the standard of proportionality has been met—is no longer sufficient. Just as the tools of medicine are reviewed by appropriate experts before they are put out for consideration and adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers’ decision making. Relatedly, because predictive coding systems are used in producing lawyers’ professional judgment, we argue they must be designed for contestability—providing greater transparency, interaction, and configurability around embedded choices to ensure that decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.
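For readers unfamiliar with the mechanics, the sketch below illustrates the basic pattern the abstract describes: a classifier trained on a small attorney-coded seed set scores unreviewed documents for responsiveness. This is a minimal, hypothetical example (the toy documents, the use of scikit-learn, and the 0.5 cutoff are all illustrative assumptions, not the commercial TAR systems studied in the article); its point is that choices such as the decision threshold are exactly the kind of embedded judgment the authors argue should remain contestable by lawyers.

```python
# Illustrative sketch only: a minimal "predictive coding" style classifier
# for e-discovery, assuming a small seed set of attorney-reviewed documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set: documents an attorney has already coded as
# responsive (1) or non-responsive (0) to a discovery request.
seed_docs = [
    "Q3 pricing agreement with distributor, see attached term sheet",
    "Lunch menu for the office holiday party",
    "Email thread re: contract breach and proposed settlement amounts",
    "IT notice: scheduled server maintenance this weekend",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed documents the system will rank for production or withholding.
unreviewed_docs = [
    "Draft settlement letter referencing the distributor contract",
    "Reminder to submit parking validation forms",
]

# Vectorize the text and fit a simple probabilistic classifier.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Score unreviewed documents. The cutoff below is a tunable, assumed value;
# where it is set is itself a judgment about relevance and proportionality
# of the kind the article argues lawyers should be able to inspect and contest.
X_new = vectorizer.transform(unreviewed_docs)
responsive_prob = model.predict_proba(X_new)[:, 1]
CUTOFF = 0.5
for doc, p in zip(unreviewed_docs, responsive_prob):
    decision = "likely responsive" if p >= CUTOFF else "likely non-responsive"
    print(f"{p:.2f}  {decision}: {doc}")
```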
Award ID(s):
1835261
NSF-PAR ID:
10214814
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Berkeley technology law journal
Volume:
34
Issue:
3
ISSN:
2380-4742
Page Range / eLocation ID:
853-890
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. This Article develops a framework for both assessing and designing content moderation systems consistent with public values. It argues that moderation should not be understood as a single function, but as a set of subfunctions common to all content governance regimes. By identifying the particular values implicated by each of these subfunctions, it explores the appropriate ways the constituent tasks might best be allocated: specifically, to which actors (public or private, human or technological) they might be assigned, and what constraints or processes might be required in their performance. This analysis can facilitate the evaluation and design of content moderation systems to ensure the capacity and competencies necessary for legitimate, distributed systems of content governance. Through a combination of methods, legal schemes delegate at least a portion of the responsibility for governing online expression to private actors. Sometimes, statutory schemes assign regulatory tasks explicitly. In others, the delegation occurs implicitly, with little guidance as to how the treatment of content should be structured. In the law's shadow, online platforms are largely given free rein to configure the governance of expression. Legal scholarship has surfaced important concerns about the private sector's role in content governance. In response, private platforms engaged in content moderation have adopted structures that mimic public governance forms. Yet we largely lack the means to measure whether these forms are substantive, effectively infusing public values into the content moderation process, or merely symbolic artifice designed to deflect much-needed public scrutiny. This Article's proposed framework addresses that gap in two ways. First, the framework considers together all manner of legal regimes that induce platforms to engage in the function of content moderation. Second, it focuses on the shared set of specific tasks, or subfunctions, involved in the content moderation function across these regimes. Examining a broad range of content moderation regimes together highlights the existence of distinct common tasks and decision points that together constitute the content moderation function. Focusing on this shared set of subfunctions highlights the different values implicated by each and the way they can be "handed off" to human and technical actors to perform in different ways, with varying normative and political implications. This Article identifies four key content moderation subfunctions: (1) definition of policies, (2) identification of potentially covered content, (3) application of policies to specific cases, and (4) resolution of those cases. Using these four subfunctions supports a rigorous analysis of how to leverage the capacities and competencies of government and private parties throughout the content moderation process. Such attention also highlights how the exercise of that power can be constrained, either by requiring the use of particular decision-making processes or through limits on the use of automation, in ways that further address normative concerns. Dissecting the allocation of subfunctions in various content moderation regimes reveals the distinct ethical and political questions that arise in alternate configurations. Specifically, it offers a way to think about four key questions: (1) what values are most at issue regarding each subfunction; (2) which activities might be more appropriate to delegate to particular public or private actors; (3) which constraints must be attached to the delegation of each subfunction; and (4) where investments in shared content moderation infrastructures can support relevant values. The functional framework thus provides a means both for evaluating the symbolic legal forms that firms have constructed in service of content moderation and for designing processes that better reflect public values.
2. This paper reflects on the significance of ABET's "maverick evaluators" and what they reveal about the limits of accreditation as a mode of governance in US engineering education. The US system of engineering education operates as a highly complex system, where the diversity of the system is an asset to robust knowledge production and the production of a varied workforce. ABET Inc., the principal accreditation agency for engineering degree programs in the US, attempts to uphold a set of professional standards for engineering education using a voluntary, peer-based system of evaluation. Key to this approach is a volunteer army of trained program evaluators (PEVs), assigned by the engineering professional societies, who serve as the frontline workers responsible for auditing the content, learning outcomes, and continuous improvement processes utilized by every engineering degree program accredited by ABET. We look specifically at those who become labeled "maverick evaluators" in order to better understand how this system functions and to understand its limitations as a form of governance in maintaining educational quality and appropriate professional standards within engineering education. ABET was established in 1932 as the Engineers' Council for Professional Development (ECPD). The Cold War consensus around the engineering sciences led to a more quantitative system of accreditation, first implemented in 1956. However, the decline of the Cold War and rising concerns about national competitiveness prompted ABET to shift to a more neoliberal model of accountability built around outcomes assessment and modeled after total quality management / continuous process improvement (TQM/CPI) processes that nominally gave PEVs greater discretion in evaluating engineering degree programs. However, conflicts over how the PEVs exercised judgment point to conservative aspects in the structure of the ABET organization and within the engineering profession at large. This paper, and the phenomena we describe here, is one part of a broader, interview-based study of higher education governance and engineering educational reform within the United States. We have conducted over 300 interviews at more than 40 different academic institutions and professional organizations, where ABET and institutional responses to the reforms associated with "EC 2000," which brought outcomes assessment to engineering education, are extensively discussed. The phenomenon of so-called "maverick evaluators" reveals the divergent professional interests that remain embedded within ABET and the engineering profession at large. Those associated with Civil and Environmental Engineering, and to a lesser extent Mechanical Engineering, continue to push for higher standards of accreditation grounded in a stronger vision for their professions. While the phenomenon is complex and more subtle than we can summarize in an abstract, "maverick evaluators" emerged as a label for PEVs who interpreted their role, including determinations about whether certain content was "appropriate to the field of study," utilizing professional standards that lay outside of the consensus position held by the majority of the members of the Engineering Accreditation Commission. This, conjoined with engineers' epistemic aversion to uncertainty and concerns about the legal liability of their decisions, resulted in a narrower interpretation of key accreditation criteria. The organization then designed and used a "due-process" review process to discipline identified shortcomings in order to limit divergent interpretations. The net result is that the bureaucratic process ABET built to obtain uniformity in accreditation outcomes simultaneously blunts the organization's capacity to support varied interpretations of professional standards at the program level. The apparatus has also contributed to ABET's reputation as an organization focused on minimum standards, as opposed to one that functions as an effective driver for further change in engineering education.
3. The rise of automated text processing systems has led to the development of tools designed for a wide variety of application domains. These technologies are often developed to support non-technical users such as domain experts, yet they are often developed in isolation from the tools' primary users. While such developments are exciting, less attention has been paid to domain experts' expectations about the values embedded in these automated systems. As a step toward addressing that gap, we examined the values expectations of journalists and legal experts. Both of these domains involve extensive text processing and place high importance on values in professional practice. We engaged participants from two non-profit organizations in two separate co-speculation design workshops centered around several speculative automated text processing systems. This study makes three interrelated contributions. First, we provide a detailed investigation of domain experts' values expectations around future NLP systems. Second, the speculative design fiction concepts, which we specifically crafted for these investigative journalists and legal experts, illuminated a series of tensions around the technical implementation details of automation. Third, our findings highlight the utility of design fiction in eliciting not-to-design implications, not only about automated NLP but also about technology more broadly. Overall, our study findings provide groundwork for including the values of domain experts whose expertise lies outside the field of computing in the design of automated NLP systems.
4. Translating information between the domains of systematics and conservation requires novel information management designs. Such designs should improve interactions across the trading zone between the domains, herein understood as the model according to which knowledge and uncertainty are productively translated in both directions (cf. Collins et al. 2019). Two commonly held attitudes stand in the way of designing a well-functioning systematics-to-conservation trading zone. On one side, there are calls to unify the knowledge signal produced by systematics, underpinned by the argument that such unification is a necessary precondition for conservation policy to be reliably expressed and enacted (e.g., Garnett et al. 2020). As a matter of legal scholarship, the argument for systematic unity by legislative necessity is principally false (Weiss 2003, MacNeil 2009, Chromá 2011), but perhaps effective enough as a strategy to win over audiences unsure about robust law-making practices in light of variable and uncertain knowledge. On the other side, there is an attitude that conservation cannot ever restrict the academic freedom of systematics as a scientific discipline (e.g., Raposo et al. 2017). This otherwise sound argument misses the mark in the context of designing a productive trading zone with conservation. The central interactional challenge is not whether the systematic knowledge can vary at a given time and/or evolve over time, but whether these signal dynamics are tractable in ways that actors can translate into robust maxims for conservation. Redesigning the trading zone should rest on the (historically validated) projection that systematics will continue to attract generations of inspired, productive researchers and broad-based societal support, frequently leading to protracted conflicts and dramatic shifts in how practitioners in the field organize and identify organismal lineages subject to conservation. This confident outlook for systematics' future, in turn, should refocus the challenge of designing the trading zone as one of building better information services to model the concurrent conflicts and longer-term evolution of systematic knowledge. It would seem unreasonable to expect the International Union for Conservation of Nature (IUCN) Red List Index to develop better data science models for the dynamics of systematic knowledge (cf. Hoffmann et al. 2011) than are operational in the most reputable information systems designed and used by domain experts (Burgin et al. 2018). The reasonable challenge from conservation to systematics is not to stop being a science but to be a better data science. In this paper, we will review advances in biodiversity data science in relation to representing and reasoning over changes in systematic knowledge with computational logic, i.e., modeling systematic intelligence (Franz et al. 2016). We stress-test this approach with a use case where rapid systematic signal change and high stakes for conservation action intersect, i.e., the Malagasy mouse lemurs (Microcebus É. Geoffroy, 1834 sec. Schüßler et al. 2020), where the number of recognized species-level concepts has risen from 2 to 25 in the span of 38 years (1982–2020). As much as is scientifically defensible, we extend our modeling approach to the level of individual published occurrence records, where the inability to do so sometimes reflects substandard practice but more importantly reveals systemic inadequacies in biodiversity data science or informational modeling. In the absence of shared, sound theoretical foundations to assess taxonomic congruence or incongruence across treatments, and in the absence of biodiversity data platforms capable of propagating logic-enabled, scalable occurrence-to-concept identification events to produce alternative and succeeding distribution maps, there is no robust way to provide a knowledge signal from systematics to conservation that is both consistent in its syntax and accurate in its semantics, in the sense of accurately reflecting the variation and uncertainty that exists across multiple systematic perspectives. Translating this diagnosis into new designs for the trading zone is only one "half" of the solution, i.e., a technical advancement that then would need to be socially endorsed and incentivized by systematic and conservation communities motivated to elevate their collaborative interactions and trade robustly in inherently variable and uncertain information.
5. Powerful corporations leverage the law to shape the regulatory environments in which they operate. A key strategy for achieving this is litigation. I ask under what conditions corporations litigate, and specifically, what happens when two repeat players, transnational agribusiness firms and local governments, face each other in court. I compare outcomes of two cases—Hawaii and Arica, Chile—documenting how different sociopolitical contexts and legal systems shape how actors engage the law. Interviews with firm managers, unions, government officials, lawyers, and advocacy organization leaders and document analysis reveal that firms seize on existing institutional norms and politics to define their localized legal strategies. Through strategic legalism, a defensive legal strategy that is outcome-oriented and context-specific, firms accomplish legal compliance and political containment of their opposition. In Hawaii, firms rely on preemptive legality, a strategy that moves controversial issues like pesticide safety from one domain of democratic politics to another that is largely incontestable because it is preempted by a higher authority. In Chile, firms use authoritarian legality, an approach that draws on authoritarian structures and policies within the state, to sway legal outcomes. These cases reveal the mechanisms that corporations draw on to institutionalize their power advantages through the law, offering a typology for future scholars to better understand how the strategic behavior of corporations shapes regulatory outcomes.

     