The emergence of increasingly powerful AI technologies calls for the design and development of K-12 AI literacy curricula that can support students who will be entering a profoundly changed labor market. However, developing, implementing, and scaling AI literacy curricula poses significant challenges, and it will be essential to build a robust, evidence-based AI education research foundation that can inform AI literacy curriculum development. Unlike K-12 science and mathematics education, K-12 AI education does not yet have such a research foundation. In this article, we provide a component-based definition of AI literacy, present the need for implementing AI literacy education across all grade bands, and argue for the creation of research programs across four areas of AI education: (1) K-12 AI Learning & Technology; (2) K-12 AI Education Integration into STEM, Language Arts, and Social Science Education; (3) K-12 AI Professional Development for Teachers and Administrators; and (4) K-12 AI Assessment.
AI Literacy: Finding Common Threads between Education, Design, Policy, and Explainability
Fostering public AI literacy has been a growing area of interest at CHI for several years, and a substantial community is forming around issues such as teaching children how to build and program AI systems, designing learning experiences to broaden public understanding of AI, developing explainable AI systems, understanding how novices make sense of AI, and exploring the relationship between public policy, ethics, and AI literacy. Previous workshops related to AI literacy have been held at other conferences (e.g., SIGCSE, AAAI); these have mostly focused on bringing together researchers and educators interested in AI education in K-12 classroom environments, an important subfield of this area. Our workshop seeks to cast a wider net that encompasses both HCI research related to introducing AI in K-12 education and HCI research concerned with AI literacy more broadly, including adult education, interactions with AI in the workplace, understanding how users make sense of and learn about AI systems, research on developing explainable AI (XAI) for non-expert users, and public policy issues related to AI literacy.
- Award ID(s): 2214463
- PAR ID: 10461747
- Date Published:
- Journal Name: CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This research-to-practice paper presents a curriculum, "AI Literacy for All," to promote an interdisciplinary understanding of AI, its socio-technical implications, and its practical applications for all levels of education. With the rapid evolution of artificial intelligence (AI), there is a need for AI literacy that goes beyond the traditional AI education curriculum. AI literacy has been conceptualized in various ways, including public literacy, competency building for designers, conceptual understanding of AI concepts, and domain-specific upskilling. Most of these conceptualizations were established before the public release of Generative AI (Gen-AI) tools such as ChatGPT. AI education has focused on the principles and applications of AI through a technical lens that emphasizes the mastery of AI principles, the mathematical foundations underlying these technologies, and the programming and mathematical skills necessary to implement AI solutions. The non-technical component of AI literacy has often been limited to social and ethical implications, privacy and security issues, or the experience of interacting with AI. In AI Literacy for All, we emphasize a balanced curriculum that includes technical as well as non-technical learning outcomes to enable a conceptual understanding and critical evaluation of AI technologies in an interdisciplinary socio-technical context. The paper presents four pillars of AI literacy: understanding the scope and technical dimensions of AI, learning how to interact with Gen-AI in an informed and responsible way, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI. While it is important to include all of these learning outcomes in AI education for a Computer Science major, the learning outcomes can be adjusted for other learning contexts, including non-CS majors, high school summer camps, the adult workforce, and the public. This paper advocates for a shift in AI literacy education toward a more interdisciplinary socio-technical approach as a pathway to broaden participation in AI. This approach not only broadens students' perspectives but also prepares them to think critically about integrating AI into their future professional and personal lives.
-
The need for citizens to better understand the ethical and social challenges of algorithmic systems has led to a rapid proliferation of AI literacy initiatives. After reviewing the literature on AI literacy projects, we found that most educational practices in this area are based on teaching programming fundamentals, primarily to K-12 students. This leaves out citizens and those who are primarily interested in understanding the implications of automated decision-making systems, rather than in learning to code. To address these gaps, this article explores the methodological contributions of responsible AI education practices that focus first on stakeholders when designing learning experiences for different audiences and contexts. The article examines the weaknesses identified in current AI literacy projects, explains the stakeholder-first approach, and analyzes several responsible AI education case studies to illustrate how such an approach can help overcome the aforementioned limitations. The results suggest that the stakeholder-first approach makes it possible to reach audiences beyond the usual ones in the field of AI literacy and to incorporate new content and methodologies tailored to the needs of each audience, thus opening new avenues for teaching and research in the field.
-
Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decision, facilitate their trust in AI, and assist them in making informed decisions. These benefits for how users interact and collaborate with AI have driven the AI/ML community to develop more understandable and interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around explanation system design. In this paper, we contribute a framework to support the design and validation of explainable AI systems, one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations, ranging from the target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
-
AI is rapidly emerging as a tool that can be used by everyone, increasing its impact on our lives, society, and the economy. There is a need to develop educational programs and curricula that can increase capacity and diversity in AI as well as awareness of the implications of using AI-driven technologies. This paper reports on a workshop whose goals included developing guidelines for expanding the diversity of people engaged in AI while expanding the capacity of AI curricula, with a scope of content that reflects the competencies and needs of the workforce. The scope for AI education included K-Gray and considered AI knowledge and competencies as well as AI literacy (including responsible use and ethical issues). Participants discussed recommendations for metrics measuring capacity and diversity as well as strategies for increasing capacity and diversity at different levels of education: K-12, undergraduate and graduate Computer Science (CS) majors and non-CS majors, the workforce, and the public.