The objective of this paper is to establish the fundamental public value principles that should govern safe and trusted artificial intelligence (AI). Public value is a dynamic concept that encompasses several dimensions. AI itself has evolved rapidly in the last few years, especially with the swift rise of Generative AI. Governments around the world are grappling with how to govern AI, just as technologists ring alarm bells about its future consequences. Our paper extends the debate on AI governance from the ethical values of beneficence to the economic values of public goods. Viewed as a public good, the use of AI lies beyond the control of its creators. Toward this end, the paper examines AI policies in the United States and Europe. We postulate three principles from a public values perspective: (i) ensuring the security and privacy of each individual (or entity); (ii) ensuring that trust in AI systems is verifiable; and (iii) ensuring fair and balanced AI protocols, wherein the underlying components of data and algorithms are contestable and open to public debate.
Net versus relative impacts in public policy automation: a conjoint analysis of attitudes of Black Americans
Abstract: The use of algorithms and automated systems, especially those leveraging artificial intelligence (AI), has been exploding in the public sector, but their use has been controversial. Ethicists, public advocates, and legal scholars have debated whether biases in AI systems should bar their use or whether the potential net benefits, especially toward traditionally disadvantaged groups, justify even greater expansion. While this debate has become voluminous, no scholars we are aware of have conducted experiments with the groups affected by these policies about how they view the trade-offs. We conduct two conjoint experiments with a high-quality sample of 973 Americans who identify as Black or African American, in which we randomize the levels of inter-group disparity in outcomes and the net effect on such adverse outcomes in two highly controversial contexts: pre-trial detention and traffic camera ticketing. The results suggest that respondents are willing to tolerate some level of disparity in outcomes in exchange for certain net improvements for their community. These results turn this debate from an abstract ethical argument into an empirically grounded evaluation of political feasibility and policy design.
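To make the conjoint design concrete, the sketch below simulates a forced-choice task in which a disparity attribute and a net-effect attribute are independently randomized across profiles, and a difference-in-means analogue of average marginal component effects is computed. The attribute names, levels, simulated choice rule, and estimator are illustrative assumptions, not the authors' instrument or analysis.

```python
# Minimal, illustrative sketch of a conjoint design of the kind described above.
# Attribute names, levels, and the simulated respondent are hypothetical.
import random

random.seed(0)

# Hypothetical attributes for a pre-trial detention algorithm profile.
ATTRIBUTES = {
    "disparity":  ["no gap", "small gap", "large gap"],                          # inter-group disparity
    "net_effect": ["10% fewer detentions", "no change", "10% more detentions"],  # net community impact
}

def random_profile():
    """Independently randomize each attribute level, as in a conjoint task."""
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def simulated_support(profile):
    """Stand-in for a respondent's approval decision (1 = support the policy)."""
    p = 0.5
    if profile["net_effect"] == "10% fewer detentions":
        p += 0.25  # reward a net reduction in adverse outcomes
    if profile["disparity"] == "large gap":
        p -= 0.20  # penalize large inter-group disparity
    return 1 if random.random() < p else 0

# Simulate responses, one randomized profile per task.
responses = []
for _ in range(5000):
    profile = random_profile()
    responses.append((profile, simulated_support(profile)))

# Difference in mean support relative to each attribute's first level:
# a simplified analogue of average marginal component effects (AMCEs).
for attr, levels in ATTRIBUTES.items():
    means = {}
    for level in levels:
        ys = [y for prof, y in responses if prof[attr] == level]
        means[level] = sum(ys) / len(ys)
    for level in levels[1:]:
        print(f"{attr}: {level!r} vs {levels[0]!r}: {means[level] - means[levels[0]]:+.3f}")
```

With enough simulated tasks the estimated effects recover the signs built into the toy choice rule; in the actual experiments these quantities would come from respondents' choices rather than a simulation.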
- Award ID(s):
- 2131504
- PAR ID:
- 10523184
- Publisher / Repository:
- Springer Science + Business Media
- Date Published:
- Journal Name:
- AI & SOCIETY
- Volume:
- 40
- Issue:
- 4
- ISSN:
- 0951-5666
- Format(s):
- Medium: X
- Size(s):
- p. 2571-2583
- Sponsoring Org:
- National Science Foundation
More Like this
- Fostering public AI literacy has been a growing area of interest at CHI for several years, and a substantial community is forming around issues such as teaching children how to build and program AI systems, designing learning experiences to broaden public understanding of AI, developing explainable AI systems, understanding how novices make sense of AI, and exploring the relationship between public policy, ethics, and AI literacy. Previous workshops related to AI literacy have been held at other conferences (e.g., SIGCSE, AAAI), mostly focused on bringing together researchers and educators interested in AI education in K-12 classroom environments, an important subfield of this area. Our workshop seeks to cast a wider net that encompasses both HCI research related to introducing AI in K-12 education and HCI research concerned with issues of AI literacy more broadly, including adult education, interactions with AI in the workplace, understanding how users make sense of and learn about AI systems, research on developing explainable AI (XAI) for non-expert users, and public policy issues related to AI literacy.
- How can the public sector use AI ethically and responsibly for the benefit of people? The sustainable development and deployment of artificial intelligence (AI) in the public sector requires dialogue and deliberation among developers, decision makers, deployers, end users, and the public. This paper contributes to the debate on how to develop persuasive government approaches for steering the development and use of AI. We examine the ethical issues and the role of the public in the debate on developing public sector governance of socially and democratically sustainable, technology-intensive societies. To concretize this discussion, we study the co-development of the Finnish national AI program AuroraAI, which aims to provide citizens with tailored and timely services for different life situations, utilizing AI. With the help of this case study, we investigate the challenges posed by the development and use of AI in the service of public administration. We draw particular attention to the efforts made by the AuroraAI Ethics Board in deliberating the AuroraAI solution options and working toward a sustainable and inclusive AI society.
- The public sector leverages artificial intelligence (AI) to enhance the efficiency, transparency, and accountability of civic operations and public services. This includes initiatives such as predictive waste management, facial recognition for identification, and advanced tools in the criminal justice system. While public-sector AI can improve efficiency and accountability, it also has the potential to perpetuate biases, infringe on privacy, and marginalize vulnerable groups. Responsible AI (RAI) research aims to address these concerns by focusing on fairness and equity through participatory AI. We invite researchers, community members, and public sector workers to collaborate on designing, developing, and deploying RAI systems that enhance public sector accountability and transparency. Key topics include raising awareness of AI's impact on the public sector, improving access to AI auditing tools, building public engagement capacity, fostering early community involvement to align AI innovations with public needs, and promoting accessible and inclusive participation in AI development. The workshop will feature two keynotes, two short paper sessions, and three discussion-oriented activities. Our goal is to create a platform for exchanging ideas and developing strategies to design community-engaged RAI systems while mitigating the potential harms of AI and maximizing its benefits in the public sector.
- Artificial Intelligence (AI) systems for mental healthcare (MHCare) have been growing rapidly as the importance of early interventions for patients with chronic mental health (MH) conditions has been recognized. Social media (SocMedia) emerged as the go-to platform for supporting patients seeking MHCare. The creation of peer-support groups without social stigma has led patients to transition from clinical settings to SocMedia-supported interactions for quick help. Researchers started exploring SocMedia content in search of cues that showcase correlation or causation between different MH conditions, in order to design better interventional strategies. User-level, classification-based AI systems were designed to leverage diverse SocMedia data from various MH conditions to predict MH conditions. Subsequently, researchers created classification schemes to measure the severity of each MH condition. Such ad-hoc schemes, engineered features, and models not only require a large amount of data but also fail to allow clinically acceptable and explainable reasoning over the outcomes. To improve Neural-AI for MHCare, infusion of the clinical symbolic knowledge that clinicians use in decision making is required. An impactful use case of Neural-AI systems in MH is conversational systems. These systems require coordination between classification and generation to facilitate humanistic conversation in conversational agents (CAs). Current CAs built on deep language models lack factual correctness, medical relevance, and safety in their generations, which intertwine with unexplainable statistical classification techniques. This lecture-style tutorial will demonstrate our investigations into Neuro-symbolic methods of infusing clinical knowledge to improve the outcomes of Neural-AI systems and thereby improve interventions for MHCare: (a) We will discuss the use of diverse clinical knowledge in creating specialized datasets to train Neural-AI systems effectively. (b) Patients with cardiovascular disease express MH symptoms differently based on gender differences; we will show that knowledge-infused Neural-AI systems can identify gender-specific MH symptoms in such patients. (c) We will describe strategies for infusing clinical process knowledge as heuristics and constraints to improve language models in generating relevant questions and responses.
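As a toy illustration of the knowledge-infusion idea in item (c) above, the sketch below reranks candidate responses from a stubbed generator by their overlap with a small clinical concept lexicon, so that symbolic knowledge acts as a constraint over neural outputs. The lexicon, candidate texts, and scoring heuristic are hypothetical stand-ins for the clinical resources and language models the tutorial actually discusses.

```python
# Toy sketch: symbolic clinical knowledge used as a constraint over generated text.
# The lexicon and candidate responses are hypothetical stand-ins; a real system
# would derive concepts from clinical instruments and candidates from a language model.
CLINICAL_LEXICON = {
    "sleep", "appetite", "concentration", "fatigue", "hopelessness", "interest",
}

def knowledge_score(text: str) -> int:
    """Count distinct clinical concepts mentioned in a candidate response."""
    tokens = {t.strip(".,?!").lower() for t in text.split()}
    return len(tokens & CLINICAL_LEXICON)

def rerank(candidates):
    """Prefer candidates grounded in the clinical lexicon (higher score first)."""
    return sorted(candidates, key=knowledge_score, reverse=True)

if __name__ == "__main__":
    # Stand-ins for model generations; a real pipeline would produce these.
    candidates = [
        "That sounds hard. Tell me more.",
        "Have changes in your sleep or appetite affected your concentration?",
        "How is your interest in daily activities, and how is your fatigue?",
    ]
    for c in rerank(candidates):
        print(knowledge_score(c), c)
```

The same constraint-as-reranking pattern could instead be applied during decoding; it is shown post hoc here only to keep the sketch self-contained.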