The emergence of increasingly powerful AI technologies calls for the design and development of K-12 AI literacy curricula that can support students who will be entering a profoundly changed labor market. However, developing, implementing, and scaling AI literacy curricula poses significant challenges, and it will be essential to build a robust, evidence-based AI education research foundation that can inform AI literacy curriculum development. Unlike K-12 science and mathematics education, K-12 AI education currently lacks such a research foundation. In this article we provide a component-based definition of AI literacy, present the need for implementing AI literacy education across all grade bands, and argue for the creation of research programs across four areas of AI education: (1) K-12 AI Learning & Technology; (2) K-12 AI Education Integration into STEM, Language Arts, and Social Science Education; (3) K-12 AI Professional Development for Teachers and Administrators; and (4) K-12 AI Assessment.
AI Literacy: Finding Common Threads between Education, Design, Policy, and Explainability
Fostering public AI literacy has been a growing area of interest at CHI for several years, and a substantial community is forming around issues such as teaching children how to build and program AI systems, designing learning experiences to broaden public understanding of AI, developing explainable AI systems, understanding how novices make sense of AI, and exploring the relationship between public policy, ethics, and AI literacy. Previous workshops related to AI literacy, held at other conferences (e.g., SIGCSE, AAAI), have mostly focused on bringing together researchers and educators interested in AI education in K-12 classroom environments, an important subfield of this area. Our workshop seeks to cast a wider net that encompasses both HCI research related to introducing AI in K-12 education and HCI research concerned with issues of AI literacy more broadly, including adult education, interactions with AI in the workplace, understanding how users make sense of and learn about AI systems, research on developing explainable AI (XAI) for non-expert users, and public policy issues related to AI literacy.
- Award ID(s): 2214463
- PAR ID: 10461747
- Date Published:
- Journal Name: CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The need for citizens to better understand the ethical and social challenges of algorithmic systems has led to a rapid proliferation of AI literacy initiatives. After reviewing the literature on AI literacy projects, we found that most educational practices in this area are based on teaching programming fundamentals, primarily to K-12 students. This leaves out citizens, as well as those who are primarily interested in understanding the implications of automated decision-making systems rather than in learning to code. To address these gaps, this article explores the methodological contributions of responsible AI education practices that focus first on stakeholders when designing learning experiences for different audiences and contexts. The article examines the weaknesses identified in current AI literacy projects, explains the stakeholder-first approach, and analyzes several responsible AI education case studies to illustrate how such an approach can help overcome the aforementioned limitations. The results suggest that the stakeholder-first approach makes it possible to reach audiences beyond those usually addressed in the field of AI literacy and to incorporate new content and methodologies tailored to the needs of each audience, thus opening new avenues for teaching and research in the field.
Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decision, facilitate their trust in AI, and assist them in making informed decisions. These benefits in improving how users interact and collaborate with AI have pushed the AI/ML community toward developing more understandable or interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems, one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
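The abstract describes the framework's decision points only at a high level (target users, the data, the AI models in use). As a rough illustration, the sketch below captures those decision points as a simple checklist-style data structure; the fields beyond the three named in the abstract, and the trace-link example values, are assumptions made for illustration rather than the paper's actual terminology.

```python
# A minimal sketch of capturing explanation-design decision points as a structured
# checklist. Only "target users", "data", and "AI model" come from the abstract;
# the remaining fields and all example values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplanationDesignSpec:
    target_users: str                  # who consumes the explanation
    user_expertise: str                # e.g., domain expert but ML novice (assumed field)
    data_description: str              # what data the model consumes
    model_type: str                    # the AI model in use
    explanation_goal: str              # what decision the explanation supports (assumed field)
    presentation_formats: List[str] = field(default_factory=list)


# Hypothetical spec for the trace-link-prediction interface mentioned in the abstract
spec = ExplanationDesignSpec(
    target_users="developers verifying requirement-to-code trace links",
    user_expertise="domain expert, non-expert in ML",
    data_description="requirements text and source code artifacts",
    model_type="trace link prediction model",
    explanation_goal="help users decide whether to accept a predicted link",
    presentation_formats=["highlighted shared terms", "model confidence score"],
)
print(spec)
```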
Artificial intelligence (AI) has rapidly pervaded and reshaped almost all walks of life, but efforts to promote AI literacy in K-12 schools remain limited. There is a knowledge gap in how to prepare teachers to teach AI literacy in inclusive classrooms and how teacher-led classroom implementations can impact students. This paper reports a comparison study investigating the effectiveness of an AI literacy curriculum when taught by classroom teachers. The experimental group included 89 middle school students who learned the AI literacy curriculum during regular school hours; the comparison group consisted of 69 students who did not. Both groups completed the same pre- and post-tests. The results show that students in the experimental group developed a deeper understanding of AI concepts and more positive attitudes toward AI and its impact on future careers after the curriculum than those in the comparison group. This shows that the teacher-led classroom implementation successfully equipped students with a conceptual understanding of AI. Students achieved significant gains in recognizing how AI is relevant to their lives and felt empowered to thrive in the age of AI. Overall, this study confirms the potential of preparing K-12 classroom teachers to offer AI education in classrooms in order to reach learners of diverse backgrounds and broaden participation in AI literacy education among young learners.
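The abstract does not state how the between-group differences were tested; as a rough sketch under that assumption, one common way to quantify such a result is an independent-samples test on pre-to-post gain scores. The numbers below are placeholders, not the study's data or findings.

```python
# Hedged sketch: comparing pre-to-post gains between an experimental group (n=89)
# and a comparison group (n=69) with Welch's t-test. The gain scores here are
# randomly generated placeholders; the study's actual analysis and data may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder gain scores (post-test minus pre-test) per student
experimental_gains = rng.normal(loc=8.0, scale=4.0, size=89)
comparison_gains = rng.normal(loc=1.0, scale=4.0, size=69)

t_stat, p_value = stats.ttest_ind(experimental_gains, comparison_gains, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```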
The rapid expansion of Artificial Intelligence (AI) creates a need to educate students to become knowledgeable about AI and aware of its interrelated technical, social, and human implications. The latter (ethics) is particularly important to K-12 students because they may have been interacting with AI through everyday technology without realizing it: they may be targeted by AI-generated fake content on social media and may have been victims of algorithmic bias in AI applications such as facial recognition and predictive policing. To empower students to recognize ethics-related issues of AI, this paper reports the design and implementation of a suite of ethics activities embedded in the Developing AI Literacy (DAILy) curriculum. These activities engage students in investigating bias in existing technologies, experimenting with ways to mitigate potential bias, and redesigning the YouTube recommendation system in order to understand different aspects of AI-related ethics issues. Our observations of implementing these lessons with adolescents, together with exit interviews, show that students were highly engaged and, after these ethics lessons, became aware of the potential harms and consequences of AI tools in everyday life.
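To make the recommendation-redesign activity concrete, here is a toy sketch, not taken from the DAILy materials, of the kind of experiment such a lesson might involve: a ranker driven purely by predicted engagement surfaces sensational items, and blending in a second signal is one simple mitigation students could try. All items, scores, and the weighting scheme are invented for illustration.

```python
# Toy illustration of a recommendation-redesign exercise: rank items by predicted
# engagement alone, then re-rank with a blended score to see how the top results change.
# Everything here (items, numbers, weighting) is invented for illustration.

videos = [
    {"title": "Shocking conspiracy exposed!", "engagement": 0.95, "reliability": 0.10},
    {"title": "How vaccines work", "engagement": 0.55, "reliability": 0.95},
    {"title": "Celebrity feud drama", "engagement": 0.85, "reliability": 0.30},
    {"title": "Intro to climate science", "engagement": 0.50, "reliability": 0.90},
]


def rank(items, reliability_weight=0.0):
    """Rank items by engagement, optionally blending in a reliability signal."""
    def score(v):
        return (1 - reliability_weight) * v["engagement"] + reliability_weight * v["reliability"]
    return sorted(items, key=score, reverse=True)


print("Engagement-only ranking:")
for v in rank(videos):
    print("  ", v["title"])

print("Re-weighted ranking (50% reliability):")
for v in rank(videos, reliability_weight=0.5):
    print("  ", v["title"])
```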