This roundtable brings together engaged anthropologists working with im/migrant communities to explore the transformational potential of accompaniment as anthropological practice. Informed by decolonial and feminist critiques of anthropology, accompaniment troubles the boundaries of scholar-activist and academic-community member to address the broader social purpose of our anthropological work. We understand accompaniment as an ethical commitment to solidarity, to using our positions of relative privilege to help ameliorate suffering. The roundtable will serve as a collective conversation about the multivalent meanings of accompaniment with im/migrant communities and as a forum to imagine possibilities for caring, relational, and decolonial forms of ethnographic engagement.
Understanding Contestability on the Margins: Implications for the Design of Algorithmic Decision-making in Public Services
Policymakers have established that the ability to contest decisions made by or with algorithms is core to responsible artificial intelligence (AI). However, there has been a disconnect between research on contestability of algorithms, and what the situated practice of contestation looks like in contexts across the world, especially amongst communities on the margins. We address this gap through a qualitative study of follow-up and contestation in accessing public services for land ownership in rural India and affordable housing in the urban United States. We find there are significant barriers to exercising rights and contesting decisions, which intermediaries like NGO workers or lawyers work with communities to address. We draw on the notion of accompaniment in global health to highlight the open-ended work required to support people in navigating violent social systems. We discuss the implications of our findings for key aspects of contestability, including building capacity for contestation, human review, and the role of explanations. We also discuss how sociotechnical systems of algorithmic decision-making can embody accompaniment by taking on a higher burden of preventing denials and enabling contestation.
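To make one of the abstract's design implications concrete, the following is a minimal sketch, under assumptions not stated in the paper, of a decision pipeline that "takes on a higher burden of preventing denials and enabling contestation": the model may approve on its own, but any would-be denial is escalated to a human reviewer, and every decision record carries a plain-language explanation plus instructions for contesting. The `Decision` fields, threshold, and toy scoring function are illustrative, not the authors' design.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    applicant_id: str
    outcome: str                 # "approve" or "needs_human_review" (never an automated "deny")
    explanation: str             # plain-language reasons behind the outcome
    contest_instructions: str    # how and where the applicant can contest
    reviewer: Optional[str] = None

def decide(applicant_id: str, features: dict,
           model_score: Callable[[dict], float],
           approve_threshold: float = 0.7) -> Decision:
    """Illustrative policy: the system may approve automatically, but it may not
    deny automatically -- every would-be denial is queued for human review,
    shifting the burden of a denial off the applicant."""
    score = model_score(features)
    contest = ("You may contest this decision within 30 days by contacting "
               "the benefits office; an intermediary or caseworker may file "
               "on your behalf.")
    if score >= approve_threshold:
        return Decision(applicant_id, "approve",
                        f"Eligibility score {score:.2f} met the threshold "
                        f"of {approve_threshold:.2f}.", contest)
    # No automated denial is ever issued: the case is escalated instead.
    return Decision(applicant_id, "needs_human_review",
                    f"Eligibility score {score:.2f} was below the threshold; "
                    "a caseworker will review the application and the "
                    "underlying records before any denial is issued.", contest)

if __name__ == "__main__":
    toy_model = lambda f: 0.2 * f.get("documents_complete", 0) + 0.6 * f.get("income_eligible", 0)
    print(decide("A-17", {"documents_complete": 1, "income_eligible": 1}, toy_model))
    print(decide("A-18", {"documents_complete": 0, "income_eligible": 1}, toy_model))
```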
- Award ID(s): 2107391
- PAR ID: 10544098
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400703300
- Page Range / eLocation ID: 1 to 16
- Format(s): Medium: X
- Location: Honolulu, HI, USA
- Sponsoring Org: National Science Foundation
More Like this
- In this presentation I will discuss various forms of "accompaniment" my Latinx immigrant friends, research participants, and fellow activists and advocates have engaged in during the pandemic. I will discuss accompaniment as an anthropological praxis of solidarity, focusing on how, together, we have attempted to advocate for immigrant-protective policies in the past 1.5 years, how we have navigated barriers to forms of social support and healthcare, and how our relationships have shifted in the process.
- For applications where multiple stakeholders provide recommendations, a fair consensus ranking must not only ensure that the preferences of rankers are well represented, but must also mitigate disadvantages among socio-demographic groups in the final result. However, there is little empirical guidance on the value or challenges of visualizing and integrating fairness metrics and algorithms into human-in-the-loop systems to aid decision-makers. In this work, we design a study to analyze the effectiveness of integrating such fairness-metric visualizations and algorithms. We explore this through a task-based crowdsourced experiment comparing an interactive visualization system for constructing consensus rankings, ConsensusFuse, with a similar system that includes visual encodings of fairness metrics and fair-rank generation algorithms, FairFuse. We analyze the measure of fairness, agreement of rankers' decisions, and user interactions in constructing the fair consensus ranking across these two systems. In our study with 200 participants, results suggest that providing these fairness-oriented support features nudges users to align their decisions with the fairness metrics while minimizing the tedious process of manually amending the consensus ranking. We discuss the implications of these results for the design of next-generation fairness-oriented systems, along with emerging directions for future research.
- The Border Gateway Protocol (BGP) offers several knobs to control routing decisions, but they are coarse-grained and only affect routes received from neighboring Autonomous Systems (ASes). To enhance policy expressiveness, BGP was extended with the communities attribute, allowing an AS to attach metadata to routes and influence the routing decisions of a remote AS. The metadata can carry information to a remote AS (e.g., where a route was received) or request an action from it (e.g., not to export a route to one of its neighbors). Unfortunately, the semantics of BGP communities are not standardized, lack universal rules, and are poorly documented. In this work, we design and evaluate algorithms to automatically uncover BGP action communities and ASes that violate standard practices by consistently using the information communities of other ASes, revealing undocumented relationships between them (e.g., siblings). Our experimental evaluation with billions of route announcements from public BGP route collectors from 2018 to 2023 uncovers previously unknown AS relationships and shows that our algorithm for identifying action communities achieves average precision and recall of 92.5% and 86.5%, respectively.
- In the past few years, there has been much work on incorporating fairness requirements into the design of algorithmic rankers, with contributions from the data management, algorithms, information retrieval, and recommender systems communities. In this tutorial, we give a systematic overview of this work, offering a broad perspective that connects formalizations and algorithmic approaches across subfields. During the first part of the tutorial, we present a classification framework for fairness-enhancing interventions, along which we will then relate the technical methods. This framework allows us to unify the presentation of mitigation objectives and of algorithmic techniques to help meet those objectives or identify trade-offs. Next, we discuss fairness in score-based ranking and in supervised learning-to-rank. We conclude with recommendations for practitioners, to help them select a fair ranking method based on the requirements of their specific application domain.
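As a concrete illustration of the kind of fairness-enhancing intervention for score-based ranking surveyed in the tutorial above, here is a minimal sketch of a greedy post-processing re-ranker that keeps a minimum share of protected-group items in every top-k prefix. The item data, `min_share` threshold, and `fair_rerank` function are illustrative assumptions, not a method from the tutorial or from the systems cited in the other items.

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    score: float
    protected: bool  # True if the item belongs to the protected group

def fair_rerank(items, min_share=0.4):
    """Greedy re-ranking: place the highest-scoring remaining item at each
    position, unless doing so would push the protected-group share of the
    prefix below `min_share`; in that case place the best protected item."""
    remaining = sorted(items, key=lambda it: it.score, reverse=True)
    ranking, protected_count = [], 0
    while remaining:
        k = len(ranking) + 1
        best = remaining[0]
        if not best.protected and protected_count / k < min_share:
            # The prefix constraint would be violated by an unprotected pick.
            candidates = [it for it in remaining if it.protected]
            if candidates:
                best = candidates[0]  # remaining is score-sorted, so this is the best protected item
        ranking.append(best)
        remaining.remove(best)
        protected_count += best.protected
    return ranking

if __name__ == "__main__":
    items = [Item("a", 0.9, False), Item("b", 0.8, False),
             Item("c", 0.7, True), Item("d", 0.6, False), Item("e", 0.5, True)]
    for pos, it in enumerate(fair_rerank(items), start=1):
        print(pos, it.id, it.score, "protected" if it.protected else "")
```

The same prefix-constraint idea appears, in more rigorous forms, in published fair top-k ranking methods; a production re-ranker would also need tie-breaking rules and handling for more than one protected group.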