In widely used sociological descriptions of how accountability is structured through institutions, an "actor" (e.g., the developer) is accountable to a "forum" (e.g., regulatory agencies) empowered to pass judgments on the actor, demand changes from it, or enforce sanctions. However, questions about structuring accountability persist: why and how is a forum compelled to keep making demands of the actor when such demands are called for? To whom is a forum accountable in the performance of its responsibilities, and how can its practices and decisions be contested? In the context of algorithmic accountability, we contend that a robust accountability regime requires a triadic relationship, wherein the forum is also accountable to another entity: the public(s). Typically, as is the case with environmental impact assessments, public(s) make demands upon the forum's judgments and procedures through the courts, thereby establishing a minimum standard of due diligence. However, core challenges prevent the public from approaching the courts when faced with algorithmic harms: (1) lack of documentation, (2) difficulties in claiming standing, and (3) struggles over the admissibility of expert evidence on, and consensus about, the workings of algorithmic systems in adversarial proceedings. In this paper, we demonstrate that the courts are the primary route—and the primary roadblock—in the pursuit of redress for algorithmic harms. Courts often find algorithmic harms non-cognizable and rarely require developers to address material claims of harm. To address the core challenges of taking algorithms to court, we develop a relational approach to algorithmic accountability that emphasizes neither what the actors do nor the results of their actions, but rather how interlocking relationships of accountability are constituted in a triadic relationship between actors, forums, and public(s). As in other regulatory domains, we believe that impact assessments (and similar accountability documentation) can provide the grounds for contestation between these parties, but only when that triad is structured such that the public(s) are able to cohere around shared experiences and interests, contest the outcomes of algorithmic systems that affect their lives, and make demands upon the other parties. Where courts now find algorithmic harms non-cognizable, an impact assessment regime can potentially create procedural rights that protect the substantive rights of the public(s). This would require algorithmic accountability policies currently under consideration to provide the public(s) with adequate standing in courts, along with opportunities to access and contest the actor's documentation and the forum's judgments.
CounterFAccTual: How FAccT Undermines Its Organizing Principles
This essay joins recent scholarship in arguing that FAccT's fundamental framing, in which the normative conditions for justice are to be achieved by bettering the design of algorithmic systems, is counterproductive to achieving that justice in practice. Insofar as the FAccT community's research tends to prioritize design-stage interventions, it ignores the fact that most of the contextual factors that practically determine FAccT outcomes arise in the implementation and impact stages of AI/ML lifecycles. We analyze an emergent and widely cited movement within the FAccT community that attempts to honor the centrality of contextual factors in shaping social outcomes, a set of strategies we term 'metadata maximalism'. Symptomatic of design-centered approaches, metadata maximalism abstracts away its reliance on institutions and structures of justice that are, by every observable metric, already struggling (where not failing) to provide accessible, enforceable rights. These justice infrastructures, moreover, are currently wildly under-equipped to manage the disputes arising from digital transformation and machine learning. The political economy of AI/ML implementation further obstructs the realization of rights. Data and software supply chains, in tandem with intellectual property protections, introduce structural sources of opacity. Where duties of care to vulnerable persons should reign, profit incentives are given legal and regulatory primacy. Errors are inevitable in, and inextricable from, the development of machine learning systems. In the face of these realities, FAccT programs, including metadata maximalism, tend to project their efforts into a fundamentally counterfactual universe: one in which functioning institutions and processes for due diligence in implementation and for redress of harms exist and stand ready to interoperate with these systems. Unfortunately, in our world, these institutions and processes have been captured by the interests they are meant to hold accountable, intentionally hollowed out, and/or were never designed to function in today's sociotechnical landscape. Given the unavoidability of errors and/or intentional misuse in implementation, and the exhaustively demonstrated disproportionate distribution of the resulting harms onto already-marginalized communities, continuing to produce (fair! accountable! transparent!) data-enabled systems that operate in high-impact areas, irrespective of this landscape's radically insufficient paths to justice, is a choice: a choice to be CounterFAccTual.
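As an illustrative sketch (not drawn from the essay itself): the design-stage artifacts that 'metadata maximalism' centers include machine-readable accountability documentation such as datasheets and model cards. The minimal record below is hypothetical; its name and fields are assumptions, and it exists only to make the essay's point concrete: such documentation can flag a misuse, but cannot by itself enforce anything.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    """Hypothetical minimal model-card record: the kind of design-stage
    accountability artifact that 'metadata maximalism' relies on."""
    model_name: str
    intended_uses: List[str]
    out_of_scope_uses: List[str]
    training_data_sources: List[str]
    known_limitations: List[str] = field(default_factory=list)

    def flags(self, proposed_use: str) -> bool:
        # The record can only flag a documented out-of-scope use;
        # acting on the flag presumes the functioning redress
        # institutions the essay argues do not currently exist.
        return proposed_use in self.out_of_scope_uses


# Usage sketch (all values hypothetical): the documentation surfaces
# a concern, but nothing downstream is obligated to act on it.
card = ModelCard(
    model_name="risk-scorer",
    intended_uses=["aggregate planning"],
    out_of_scope_uses=["individual eligibility decisions"],
    training_data_sources=["public administrative datasets"],
)
assert card.flags("individual eligibility decisions")
```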
- Award ID(s): 1828010
- PAR ID: 10344140
- Date Published:
- Journal Name: FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency
- Page Range / eLocation ID: 1982 to 1992
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In this paper, we explore the role that theories of Social Justice from the Engineering Education literature may play in the field of AI-HRI, examine the extent to which the recommendations made by such theories are or are not already being followed in our community, and envision a future in which our research community takes guidance from these theories. In particular, we explore the recent past and envisioned futures of AI-HRI through the lens of the Engineering for Social Justice (E4SJ) framework, due to its emphasis on contextual listening and the enhancement of human capabilities.
- Artificial Intelligence (AI) and Machine Learning (ML) capabilities have the potential for large-scale impact in tackling some of the world's most pressing humanitarian challenges and alleviating the suffering of millions of people. Although AI and ML systems have been leveraged and deployed by many humanitarian organizations, it remains unclear which factors contribute to their successful implementation and adoption. In this study, we aim to understand what it takes to deploy AI and ML capabilities successfully within the humanitarian ecosystem and to identify challenges to be overcome. This preliminary research examines the deployment and application of an ML model developed by the Danish Refugee Council (DRC) for predicting forced displacement. We use qualitative methods to identify key barriers and enablers from a variety of sources describing the deployment of their Foresight model, a machine learning-based predictive tool. These results can help the humanitarian community better understand enablers of and barriers to deploying and scaling up AI and ML solutions. We hope this paper can spark discussion about successful deployments of AI and ML capabilities and encourage the sharing of best practices by the humanitarian community.
- This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
- Nurses face significant physical demands during patient care, leading to high rates of musculoskeletal disorders (MSDs) among nurses in long-term care. Exoskeletons demonstrate promise in supporting nurses and nurse managers with MSDs; however, social contextual factors are crucial to their design and implementation. Through thematic analysis of 17 semi-structured interviews, this paper reveals social contextual factors important to exoskeleton use among nurses and nurse managers in long-term care. Participants expressed concerns about workplace discrimination, co-worker perceptions of their capabilities, and patient confidence. Our findings highlight the need for supportive organizational cultures and open communication channels. Recommendations include in-depth systems analysis to assess exoskeleton feasibility and efficacy, involving input from frontline nurses/managers, management, and patients. These findings can aid human factors and ergonomics (HF/E) experts in balancing social contextual factors and other work system elements to design work system contexts and exoskeletons that promote optimal outcomes in long-term care settings.