Title: Governing with Algorithmic Impact Assessments: Six Observations
Algorithmic impact assessments (AIAs) are increasingly being proposed as a mechanism for algorithmic accountability. These assessments are seen as potentially useful for anticipating, avoiding, and mitigating the negative consequences of algorithmic decision-making systems (ADS). At the same time, what an AIA would entail remains under-specified. While promising, AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of impact assessments structure the possible governance outcomes. Decisions about what types of effects count as an impact, when impacts are assessed, whose interests are considered, who is invited to participate, who conducts the assessment, the public availability of the assessment, and what the outputs of the assessment might be all shape the forms of accountability that AIA proponents seek to encourage. These considerations remain open, and will determine whether and how AIAs can function as a viable governance mechanism in the broader algorithmic accountability toolkit, especially with regard to furthering the public interest. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.
Award ID(s):
1704369
PAR ID:
10283952
Journal Name:
AAAI / ACM Conference on Artificial Intelligence, Ethics, and Society (AIES)
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Algorithmic impact assessments (AIAs) are an emergent form of accountability for entities that build and deploy automated decision-support systems. These are modeled after impact assessments in other domains. Our study of the history of impact assessments shows that "impacts" are an evaluative construct that enable institutions to identify and ameliorate harms experienced because of a policy decision or system. Every domain has different expectations and norms about what constitutes impacts and harms, how potential harms are rendered as the impacts of a particular undertaking, who is responsible for conducting that assessment, and who has the authority to act on the impact assessment to demand changes to that undertaking. By examining proposals for AIAs in relation to other domains, we find that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability. To address this challenge of algorithmic accountability, and as impact assessments become a commonplace process for evaluating harms, the FAccT community should A) understand impacts as objects constructed for evaluative purposes, B) attempt to construct impacts as close as possible to actual harms, and C) recognize that accountability governance requires the input of various types of expertise and affected communities. We conclude with lessons for assembling cross-expertise consensus for the co-construction of impacts and to build robust accountability relationships. 
  2. In widely used sociological descriptions of how accountability is structured through institutions, an “actor” (e.g., the developer) is accountable to a “forum” (e.g., regulatory agencies) empowered to pass judgments on and demand changes from the actor or enforce sanctions. However, questions about structuring accountability persist: why and how is a forum compelled to keep making demands of the actor when such demands are called for? To whom is a forum accountable in the performance of its responsibilities, and how can its practices and decisions be contested? In the context of algorithmic accountability, we contend that a robust accountability regime requires a triadic relationship, wherein the forum is also accountable to another entity: the public(s). Typically, as is the case with environmental impact assessments, public(s) make demands upon the forum's judgments and procedures through the courts, thereby establishing a minimum standard of due diligence. However, core challenges relating to (1) lack of documentation, (2) difficulties in claiming standing, and (3) struggles around the admissibility of expert evidence on, and achieving consensus over, the workings of algorithmic systems in adversarial proceedings prevent the public from approaching the courts when faced with algorithmic harms. In this paper, we demonstrate that the courts are the primary route—and the primary roadblock—in the pursuit of redress for algorithmic harms. Courts often find algorithmic harms non-cognizable and rarely require developers to address material claims of harm. To address the core challenges of taking algorithms to court, we develop a relational approach to algorithmic accountability that emphasizes not what the actors do nor the results of their actions, but rather how interlocking relationships of accountability are constituted in a triadic relationship between actors, forums, and public(s).
As is the case in other regulatory domains, we believe that impact assessments (and similar accountability documentation) can provide the grounds for contestation between these parties, but only when that triad is structured such that the public(s) are able to cohere around shared experiences and interests, contest the outcomes of algorithmic systems that affect their lives, and make demands upon the other parties. Where courts now find algorithmic harms non-cognizable, an impact assessment regime can potentially create procedural rights to protect substantive rights of the public(s). This would require algorithmic accountability policies currently under consideration to provide the public(s) with adequate standing in courts, and opportunities to access and contest the actor's documentation and the forum's judgments. 
  3. Governance reforms like decentralization and performance-based management aim to improve public services by increasing accountability among street-level bureaucrats: bureaucrats may be held to account by communities, supervisors, intermediary organizations, or all of these. To assess the relationship between accountability and bureaucratic effort, we utilize data from a lab-in-the-field behavioral experiment conducted with Honduran health workers across decentralized and centrally administered municipalities. We presented health workers with an incentivized effort task that included instructions that were neutral, had a bottom-up political accountability prompt, or a top-down bureaucratic accountability prompt. Our results show that administrative context moderates the accountability-to-effort relationship. With neutral instructions, civil servants in decentralized systems exert greater quality effort than their counterparts under centralized administration. Importantly, both accountability prompts increase quality effort in centrally administered settings to levels comparable with those in decentralized settings. These findings support multiple accountability as a potentially important mechanism linking decentralization reform to improved service delivery. 
  4. Abstract: Impact assessment is an important and cost‐effective tool for assisting in the identification and prioritization of invasive alien species. With the number of alien and invasive alien species expected to increase, reliance on impact assessment tools for the identification of species that pose the greatest threats will continue to grow. Given the importance of such assessments for management and resource allocation, it is critical to understand the uncertainty involved and what effect this may have on the outcome. Using an uncertainty typology and insects as a model taxon, we identified and classified the causes and types of uncertainty when performing impact assessments on alien species. We assessed 100 alien insect species across two rounds of assessments, with each species independently assessed by two assessors. Agreement between assessors was relatively low for all three impact classification components (mechanism, severity, and confidence) after the first round of assessments. For the second round, we revised guidelines and gave assessors access to each other’s assessments, which improved agreement by between 20% and 30% for impact mechanism, severity, and confidence. Of the 12 potential reasons for assessment discrepancies identified a priori, 11 were found to occur. The most frequent causes (and types) of uncertainty (i.e., differences between assessment outcomes for the same species) were as follows: incomplete information searches (systematic error), unclear mechanism and/or extent of impact (subjective judgment due to a lack of knowledge), and limitations of the assessment framework (context dependence). In response to these findings, we identify actions that may reduce uncertainty in the impact assessment process, particularly for assessing speciose taxa with diverse life histories such as insects.
Evidence of environmental impact was available for most insect species, and (of the non‐random original subset of species assessed) 14 of those with evidence were identified as high-impact species (with either major or massive impact). Although uncertainty in risk assessment, including impact assessments, can never be eliminated, identifying and communicating its cause and variety is a first step toward its reduction and a more reliable assessment outcome, regardless of the taxa being assessed.
  5. Rules are a critical component of the functioning of nearly every online community, yet it is challenging for community moderators to make data-driven decisions about what rules to set for their communities. The connection between a community's rules and how its membership feels about its governance is not well understood. In this work, we conduct the largest-to-date analysis of rules on Reddit, collecting a set of 67,545 unique rules across 5,225 communities which collectively account for more than 67% of all content on Reddit. More than just a point-in-time study, our work measures how communities change their rules over a 5+ year period. We develop a method to classify these rules using a taxonomy of 17 key attributes extended from previous work. We assess what types of rules are most prevalent, how rules are phrased, and how they vary across communities of different types. Using a dataset of communities' discussions about their governance, we are the first to identify the rules most strongly associated with positive community perceptions of governance: rules addressing who participates, how content is formatted and tagged, and rules about commercial activities. We conduct a longitudinal study to quantify the impact of adding new rules to communities, finding that after a rule is added, community perceptions of governance immediately improve, yet this effect diminishes after six months. Our results have important implications for platforms, moderators, and researchers. We make our classification model and rules datasets public to support future research on this topic. 