The public sector increasingly leverages artificial intelligence (AI) to enhance the efficiency, transparency, and accountability of civic operations and public services, through initiatives such as predictive waste management, facial recognition for identification, and advanced tools in the criminal justice system. Yet public-sector AI can also perpetuate biases, infringe on privacy, and marginalize vulnerable groups. Responsible AI (RAI) research aims to address these concerns by focusing on fairness and equity through participatory AI. We invite researchers, community members, and public sector workers to collaborate on designing, developing, and deploying RAI systems that enhance public sector accountability and transparency. Key topics include raising awareness of AI's impact on the public sector, improving access to AI auditing tools, building public engagement capacity, fostering early community involvement to align AI innovations with public needs, and promoting accessible and inclusive participation in AI development. The workshop will feature two keynotes, two short paper sessions, and three discussion-oriented activities. Our goal is to create a platform for exchanging ideas and developing strategies for designing community-engaged RAI systems that mitigate AI's potential harms and maximize its benefits in the public sector.
Inclusive Portraits: Race-Aware Human-in-the-Loop Technology
AI has revolutionized the processing of various services, including the automatic facial verification of people. Automated approaches have demonstrated their speed and efficiency in verifying large volumes of faces, but they can struggle with content from certain communities, including communities of people of color. This challenge has prompted the adoption of "human-in-the-loop" (HITL) approaches, in which human workers collaborate with the AI to minimize errors. However, most HITL approaches do not consider workers' individual characteristics and backgrounds. This paper proposes a new approach, called Inclusive Portraits (IP), that draws on social theories of race to design a racially aware human-in-the-loop system. Our experiments provide evidence that incorporating race into HITL systems for facial verification can significantly enhance performance, especially for services delivered to people of color. Our findings also highlight the importance of considering individual worker characteristics in the design of HITL systems, rather than treating workers as a homogeneous group. Our research has significant design implications for developing AI-enhanced services that are more inclusive and equitable.
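The abstract above does not publish code, but the core routing idea of a worker-aware HITL system can be sketched. The sketch below is a hypothetical illustration, not the authors' implementation: the `Worker`, `Task`, and `route_task` names, the self-reported `background` field, and the confidence threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Worker:
    worker_id: str
    background: str  # hypothetical: worker's self-reported background

@dataclass
class Task:
    task_id: str
    community: str          # hypothetical: community depicted in the portrait
    model_confidence: float  # AI verifier's confidence in its own verdict

def route_task(task: Task, workers: List[Worker],
               confidence_threshold: float = 0.9) -> Optional[Worker]:
    """Send low-confidence verifications to a worker whose background
    matches the portrait's community, falling back to any worker."""
    if task.model_confidence >= confidence_threshold:
        return None  # accept the AI verdict; no human review needed
    matched = [w for w in workers if w.background == task.community]
    pool = matched or workers  # fall back to the full pool if no match
    return pool[0] if pool else None
```

The key design choice the paper motivates is the `matched` filter: rather than treating the worker pool as homogeneous, the router preferentially assigns content to workers with relevant lived experience.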
- Award ID(s):
- 2203212
- PAR ID:
- 10490570
- Publisher / Repository:
- ACM
- Date Published:
- ISBN:
- 9798400703812
- Page Range / eLocation ID:
- 1 to 11
- Subject(s) / Keyword(s):
- crowd work; crowdsourcing; human-in-the-loop
- Format(s):
- Medium: X
- Location:
- Boston MA USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services. We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts. We then introduce a novel design space detailing potential implementations of generative ghosts. We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.
-
Fortson, Lucy; Crowston, Kevin; Kloetzer, Laure; Ponti, Marisa (Eds.) Using public support to extract information from vast datasets has become a popular method for accurately labeling wildlife data in camera trap (CT) images. However, the increasing demand for volunteer effort lengthens the time interval between data collection and our ability to draw ecological inferences or perform data-driven conservation actions. Artificial intelligence (AI) approaches are currently highly effective for species detection (i.e., whether an image contains animals or not) and for labeling common species; however, they perform poorly on species rarely captured in images and on species that are highly visually similar to one another. To capitalize on the best of human and AI classification methods, we developed an integrated CT data pipeline in which AI provides an initial pass at labeling images but is supervised and validated by humans (i.e., a "human-in-the-loop" (HITL) approach). To assess classification accuracy gains, we compared the precision of species labels produced by the AI and HITL protocols to a "gold standard" (GS) dataset annotated by wildlife experts. The accuracy of the AI method was species-dependent and positively correlated with the number of training images. The combined efforts of HITL led to error rates of less than 10% for 73% of the dataset and lowered the error rates for an additional 23%. For two visually similar species, human input resulted in higher error rates than AI alone. While integrating humans in the loop increases classification times relative to AI alone, the gains in accuracy suggest that this method is highly valuable for high-volume CT surveys.
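The triage step of a pipeline like the one described above (AI labels everything; humans validate the hard cases) can be sketched in a few lines. This is a hypothetical illustration, not the authors' pipeline: the `triage_labels` name, the prediction tuple format, and the confidence threshold are assumptions. The abstract's own findings suggest routing both low-confidence predictions and rarely captured species to human review.

```python
def triage_labels(predictions, rare_species, confidence_threshold=0.8):
    """Split AI predictions into auto-accepted labels and a human-review
    queue. Each prediction is a (image_id, species, confidence) tuple."""
    auto, review = [], []
    for image_id, species, conf in predictions:
        # Humans validate low-confidence labels and rare species,
        # where the abstract reports AI accuracy is weakest.
        if conf < confidence_threshold or species in rare_species:
            review.append((image_id, species, conf))
        else:
            auto.append((image_id, species))
    return auto, review
```

Labels from the `review` queue, once corrected by humans, could then be compared against a gold-standard set to estimate the error rates the abstract reports.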
-
Policymakers have established that the ability to contest decisions made by or with algorithms is core to responsible artificial intelligence (AI). However, there has been a disconnect between research on contestability of algorithms, and what the situated practice of contestation looks like in contexts across the world, especially amongst communities on the margins. We address this gap through a qualitative study of follow-up and contestation in accessing public services for land ownership in rural India and affordable housing in the urban United States. We find there are significant barriers to exercising rights and contesting decisions, which intermediaries like NGO workers or lawyers work with communities to address. We draw on the notion of accompaniment in global health to highlight the open-ended work required to support people in navigating violent social systems. We discuss the implications of our findings for key aspects of contestability, including building capacity for contestation, human review, and the role of explanations. We also discuss how sociotechnical systems of algorithmic decision-making can embody accompaniment by taking on a higher burden of preventing denials and enabling contestation.
-
As Artificial Intelligence (AI) continues to influence various aspects of society, the need for AI literacy education for K-12 students has grown. An increasing number of AI literacy studies aim to enhance students’ competencies in understanding, using, and critically evaluating AI systems. However, despite the vulnerabilities faced by students from underserved communities—due to factors such as socioeconomic status, gender, and race—these students remain underrepresented in existing research. To address this gap, this study focuses on leveraging the cultural capital that students acquire from their communities’ unique history and culture for AI literacy education. Education researchers have demonstrated that identifying and mobilizing cultural capital is an effective strategy for educating these populations. Through collaboration with 26 students from underserved communities—including those who are socioeconomically disadvantaged, female, or people of color—this paper identifies three types of cultural capital relevant to AI literacy education: 1) resistant capital, 2) communal capital, and 3) creative capital. The study also emphasizes that collaborative relationships between researchers and students are crucial for mobilizing cultural capital in AI literacy education research.
