Over the past several decades, environmental governance has made substantial progress in addressing environmental change, but emerging environmental problems require new innovations in law, policy, and governance. While expansive legal reform is unlikely to occur soon, there is untapped potential in existing laws to address environmental change, both by leveraging adaptive and transformative capacities within the law itself to enhance social-ecological resilience and by using those laws to allow social-ecological systems to adapt and transform. Legal and policy research to date has largely overlooked this potential, even though it offers a more expedient approach to addressing environmental change than waiting for full-scale environmental law reform. We highlight examples from the United States and the European Union of untapped capacity in existing laws for fostering resilience in social-ecological systems. We show that governments and other governance agents can make substantial advances in addressing environmental change in the short term—without major legal reform—by exploiting those untapped capacities, and we offer principles and strategies to guide such initiatives.
The law in computation: What machine learning, artificial intelligence, and big data mean for law and society scholarship
Abstract: Computational systems, including machine learning, artificial intelligence, and big data analytics, are not only inescapable parts of social life but are also reshaping the contours of law and legal practice. We propose turning more law and social science (LSS) attention to new technological developments through the study of “law in computation,” that is, computational systems' integration with regulatory and administrative procedures, the sociotechnical infrastructures that support them, and their impact on how individuals and populations are interpellated through the law. We present a range of cases in three areas of inquiry – algorithmic governance, jurisdiction, and agency – on issues such as immigration enforcement, data sovereignty, algorithmic warfare, biometric identity regimes, and gig economies, for which examining law in computation illuminates how new technological systems' integration with legal processes pushes the distinction between “law on the books” and “law in action” into new domains. We then propose future directions and methods for research. As computational systems become ever more sophisticated, understanding the law in computation is critical not only for LSS scholarship, but also for everyday civics.
- Award ID(s): 1724735
- PAR ID: 10452007
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Law & Policy
- Volume: 43
- Issue: 2
- ISSN: 0265-8240
- Page Range / eLocation ID: p. 170-199
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Privacy law is failing to protect individuals from being watched and exposed, despite stronger surveillance and data protection rules. The problem is that our rules look to social norms to set thresholds for privacy violations, but people can get used to being observed. In this Article, we argue that by ignoring de minimis privacy encroachments, the law is complicit in normalizing surveillance. Privacy law helps acclimate people to being watched by ignoring smaller, more frequent, and more mundane privacy diminutions. We call these reductions “privacy nicks,” like the proverbial “thousand cuts” that lead to death. Privacy nicks come from the proliferation of cameras and biometric sensors on doorbells, glasses, and watches, and the drift of surveillance and data analytics into new areas of our lives like travel, exercise, and social gatherings. Under our theory of privacy nicks as the Achilles heel of surveillance law, invasive practices become routine through repeated exposures that acclimate us to being vulnerable and watched in increasingly intimate ways. With acclimation comes resignation, and this shift in attitude biases how citizens and lawmakers view reasonable measures and fair tradeoffs. Because the law looks to norms and people’s expectations to set thresholds for what counts as a privacy violation, the normalization of these nicks results in a constant renegotiation of privacy standards to society’s disadvantage. When this happens, the legal and social threshold for rejecting invasive new practices keeps getting redrawn, excusing ever more aggressive intrusions. In effect, the test of what privacy law allows is whatever people will tolerate. There is no rule to stop us from tolerating everything. This Article provides a new theory and terminology to understand where privacy law falls short and suggests a way to escape the current surveillance spiral.
A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment and labor law. Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article’s central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially allowing for such nebulous hiring criteria as “cultural fit.” The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact. The Article thus argues for a re-thinking of legal frameworks that take into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. Particularly related to Title VII, the Article proposes that, in legal reasoning corollary to extant tort doctrines, an employer’s failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of the doctrine of discrimination per se.
The Article also considers other approaches separate from employment law, such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.
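The audit obligation this abstract describes can be made concrete. As a hedged illustration (not drawn from the article itself), a minimal disparate-impact check might apply the EEOC "four-fifths rule," flagging any group whose selection rate falls below 80% of the highest group's rate; all function and group names here are hypothetical:

```python
# Minimal sketch of a disparate-impact audit for an automated hiring
# system, using the "four-fifths rule": a group's selection rate below
# 80% of the top group's rate is treated as evidence of adverse impact.
# Names and data are illustrative only, not from the article.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number of applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return groups whose rate ratio to the top group falls below threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical audit data: group_b selected at 0.2 vs group_a at 0.5,
# a ratio of 0.4, well under the 0.8 threshold.
outcomes = {"group_a": (50, 100), "group_b": (20, 100)}
print(four_fifths_violations(outcomes))
```

Such a check only detects one statistical pattern; the article's broader point is that auditing duties, not the statistics alone, would need legal force.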
A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice. Through in-depth, semi-structured interviews of experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices.
In particular, they highlight concerns about potential loss of professional agency and skill, limited understanding and thereby both over- and under-reliance on decision-support systems, and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics. Based on our findings, we conclude that predictive coding tools—and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice—challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals—judges, law firms, and lawyers—to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, third-party vendors and their technical experts. This system for choosing technical systems upon which lawyers rely to make professional decisions—e.g., whether documents are responsive, or whether the standard of proportionality has been met—is no longer sufficient. As the tools of medicine are reviewed by appropriate experts before they are put out for consideration and adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers’ decision making.
Relatedly, because predictive coding systems are used to produce lawyers’ professional judgment, we argue they must be designed for contestability—providing greater transparency, interaction, and configurability around embedded choices to ensure decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.
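The classify-and-predict step at the heart of predictive coding can be sketched in miniature. The toy below (an assumption for illustration, not the article's or any vendor's method) ranks unreviewed documents by cosine similarity to attorney-labeled "responsive" seed documents; real TAR systems use trained classifiers with iterative review rounds:

```python
# Toy sketch of TAR "predictive coding": rank unreviewed documents by
# bag-of-words cosine similarity to a centroid of attorney-labeled
# responsive seed documents. Purely illustrative; production systems
# use trained classifiers and active-learning review loops.
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words term counts for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_by_responsiveness(seed_docs, corpus):
    """Order corpus documents by similarity to the responsive seed set."""
    centroid = Counter()
    for doc in seed_docs:
        centroid.update(bow(doc))
    return sorted(corpus, key=lambda d: cosine(bow(d), centroid), reverse=True)

# Hypothetical seed set and corpus: the contract-related document
# should rank above the unrelated one.
seeds = ["contract breach damages", "breach of contract notice"]
corpus = ["quarterly picnic schedule", "notice of contract breach terms"]
print(rank_by_responsiveness(seeds, corpus)[0])
```

Even this toy shows why contestability matters: the ranking depends entirely on embedded choices (tokenization, weighting, the seed set) that lawyers, not just technical experts, would need to be able to inspect and challenge.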
The speed and uncertainty of environmental change in the Anthropocene challenge the capacity of coevolving social–ecological–technological systems (SETs) to adapt or transform to these changes. Formal government and legal structures further constrain the adaptive capacity of our SETs. However, new, self-organized forms of adaptive governance are emerging at multiple scales in natural resource-based SETs. Adaptive governance involves the private and public sectors as well as formal and informal institutions, self-organized to fill governance gaps in the traditional roles of states. While new governance forms are emerging, they are not yet doing so rapidly enough to match the pace of environmental change. Furthermore, they do not yet possess the legitimacy or capacity needed to address disparities between the winners and losers from change. These emergent forms of adaptive governance appear to be particularly effective in managing complexity. We explore governance and SETs as coevolving complex systems, focusing on legal systems to understand the potential pathways and obstacles to equitable adaptation. We explore how governments may facilitate the emergence of adaptive governance and promote legitimacy in both the process of governance despite the involvement of nonstate actors, and its adherence to democratic values of equity and justice. To manage the contextual nature of the results of change in complex systems, we propose the establishment of long-term study initiatives for the coproduction of knowledge, to accelerate learning and synergize interactions between science and governance and to foster public science and epistemic communities dedicated to navigating transitions to more just, sustainable, and resilient futures.