Title: Public Value Principles for Secure and Trusted AI
The objective of this paper is to establish the fundamental public value principles that should govern safe and trusted artificial intelligence (AI). Public value is a dynamic concept that encompasses several dimensions. AI has evolved rapidly in recent years, especially with the swift rise of Generative AI. Governments around the world are grappling with how to govern AI, even as technologists ring alarm bells about its future consequences. Our paper extends the debate on AI governance from the ethical value of beneficence to the economic value of a public good. Viewed as a public good, the use of AI lies beyond the control of its creators. Toward this end, the paper examines AI policies in the United States and Europe. We postulate three principles from a public values perspective: (i) ensuring the security and privacy of each individual (or entity); (ii) ensuring that trust in AI systems is verifiable; and (iii) ensuring fair and balanced AI protocols, wherein the underlying components of data and algorithms are contestable and open to public debate.
Award ID(s):
1924154
PAR ID:
10527225
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400709883
Page Range / eLocation ID:
251 to 257
Subject(s) / Keyword(s):
Artificial Intelligence Security Public value Public good
Format(s):
Medium: X
Location:
Taipei Taiwan
Sponsoring Org:
National Science Foundation
More Like This
  1. How can the public sector use AI ethically and responsibly for the benefit of people? The sustainable development and deployment of artificial intelligence (AI) in the public sector requires dialogue and deliberation between developers, decision makers, deployers, end users, and the public. This paper contributes to the debate on how to develop persuasive government approaches for steering the development and use of AI. We examine the ethical issues and the role of the public in the debate on developing public sector governance of socially and democratically sustainable and technology-intensive societies. To concretize this discussion, we study the co-development of a Finnish national AI program AuroraAI, which aims to provide citizens with tailored and timely services for different life situations, utilizing AI. With the help of this case study, we investigate the challenges posed by the development and use of AI in the service of public administration. We draw particular attention to the efforts made by the AuroraAI Ethics Board in deliberating the AuroraAI solution options and working toward a sustainable and inclusive AI society. 
  2. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes

What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability.
- Value‐sensitive design (VSD) presents a systematic approach to centring human values in technology design.

What this paper adds
- We apply VSD to design LLM‐based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.

Implications for practice and/or policy
- Identity development, well‐being, human–AI relationships and environmental sustainability are key values for designing LLM‐based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
  3. This research-to-practice paper presents a curriculum, "AI Literacy for All," to promote an interdisciplinary understanding of AI, its socio-technical implications, and its practical applications for all levels of education. With the rapid evolution of artificial intelligence (AI), there is a need for AI literacy that goes beyond the traditional AI education curriculum. AI literacy has been conceptualized in various ways, including public literacy, competency building for designers, conceptual understanding of AI concepts, and domain-specific upskilling. Most of these conceptualizations were established before the public release of Generative AI (Gen-AI) tools such as ChatGPT. AI education has focused on the principles and applications of AI through a technical lens that emphasizes the mastery of AI principles, the mathematical foundations underlying these technologies, and the programming and mathematical skills necessary to implement AI solutions. The non-technical component of AI literacy has often been limited to social and ethical implications, privacy and security issues, or the experience of interacting with AI. In AI Literacy for All, we emphasize a balanced curriculum that includes technical as well as non-technical learning outcomes to enable a conceptual understanding and critical evaluation of AI technologies in an interdisciplinary socio-technical context. The paper presents four pillars of AI literacy: understanding the scope and technical dimensions of AI, learning how to interact with Gen-AI in an informed and responsible way, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI. While it is important to include all learning outcomes for AI education in a Computer Science major, the learning outcomes can be adjusted for other learning contexts, including non-CS majors, high school summer camps, the adult workforce, and the public.
This paper advocates for a shift in AI literacy education to offer a more interdisciplinary socio-technical approach as a pathway to broaden participation in AI. This approach not only broadens students' perspectives but also prepares them to think critically about integrating AI into their future professional and personal lives. 
  4. Responsible AI (RAI) is the science and practice of ensuring the design, development, use, and oversight of AI are socially sustainable---benefiting diverse stakeholders while controlling the risks. Achieving this goal requires active engagement and participation from the broader public. This paper introduces We are AI: Taking Control of Technology, a public education course that brings the topics of AI and RAI to the general audience in a peer-learning setting. We outline the goals behind the course's development, discuss the multi-year iterative process that shaped its creation, and summarize its content. We also discuss two offerings of We are AI to an active and engaged group of librarians and professional staff at New York University, highlighting successes and areas for improvement. The course materials, including a multilingual comic book series by the same name, are publicly available and can be used independently. By sharing our experience in creating and teaching We are AI, we aim to introduce these resources to the community of AI educators, researchers, and practitioners, supporting their public education efforts. 
  5. AI is rapidly emerging as a tool that can be used by everyone, increasing its impact on our lives, society, and the economy. There is a need to develop educational programs and curricula that can increase capacity and diversity in AI as well as awareness of the implications of using AI-driven technologies. This paper reports on a workshop whose goals include developing guidelines for ensuring that we expand the diversity of people engaged in AI while expanding the capacity for AI curricula with a scope of content that will reflect the competencies and needs of the workforce. The scope for AI education included K-Gray and considered AI knowledge and competencies as well as AI literacy (including responsible use and ethical issues). Participants discussed recommendations for metrics measuring capacity and diversity as well as strategies for increasing capacity and diversity at different levels of education: K-12, undergraduate and graduate Computer Science (CS) majors and non-CS majors, the workforce, and the public.