With the increasing prevalence of large language models (LLMs) such as ChatGPT, there is a growing need to integrate natural language processing (NLP) into K-12 education to better prepare young learners for the future AI landscape. NLP, a sub-field of AI that serves as the foundation of LLMs and many advanced AI applications, holds the potential to enrich learning in core subjects in K-12 classrooms. In this experience report, we present our efforts to integrate NLP into science classrooms with 98 middle school students across two US states, aiming to increase students’ experience and engagement with NLP models through textual data analyses and visualizations. We designed learning activities, developed an NLP-based interactive visualization platform, and facilitated classroom learning in close collaboration with middle school science teachers. This experience report aims to contribute to the growing body of work on integrating NLP into K-12 education by providing insights and practical guidelines for practitioners, researchers, and curriculum designers.
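To give a flavor of the textual analyses and visualizations such a platform can surface, here is a minimal sketch of a word-frequency chart; the sample text and plotting choices are illustrative assumptions, not the authors' platform.

```python
# Minimal sketch of a word-frequency view of classroom text data, the kind
# of analysis an NLP visualization platform might expose to students.
# Illustrative only: the sample text and plotting choices are assumptions.
from collections import Counter

import matplotlib.pyplot as plt

text = "photosynthesis turns light energy into chemical energy inside plant cells"
counts = Counter(text.lower().split()).most_common(5)

words, freqs = zip(*counts)
plt.bar(words, freqs)
plt.title("Top words in a sample science text")
plt.ylabel("Frequency")
plt.show()
```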
AI-writing tools in education: if you can’t beat them, join them
Abstract: The release and rapid diffusion of ChatGPT have forced teachers and researchers around the world to grapple with the consequences of artificial intelligence (AI) for education. For second language educators, AI writing tools such as ChatGPT present special challenges that must be addressed to better support learners. We propose a five-part pedagogical framework that supports second language learners by acknowledging both the immediate and long-term contexts in which we must teach students about these tools: understand, access, prompt, corroborate, and incorporate. By teaching our students how to effectively partner with AI, we can better prepare them for the changing landscape of technology use in the world beyond the classroom.
- Award ID(s): 2315294
- PAR ID: 10524949
- Publisher / Repository: De Gruyter
- Date Published:
- Journal Name: Journal of China Computer-Assisted Language Learning
- Volume: 3
- Issue: 2
- ISSN: 2748-3479
- Page Range / eLocation ID: 258–262
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Great Power Brings Great Responsibility: Personalizing Conversational AI for Diverse Problem-Solvers. Newcomers onboarding to Open Source Software (OSS) projects face many challenges. Large Language Models (LLMs), like ChatGPT, have emerged as potential resources for answering questions and providing guidance, with many developers now turning to ChatGPT over traditional Q&A sites like Stack Overflow. Nonetheless, LLMs may carry biases in presenting information, which can be especially impactful for newcomers whose problem-solving styles may not be broadly represented. This raises important questions about the accessibility of AI-driven support for newcomers to OSS projects. This vision paper outlines the potential of adapting AI responses to various problem-solving styles to avoid privileging a particular subgroup. We discuss AI persona-based prompt engineering as a strategy for interacting with AI (sketched below). This study invites further research to refine AI-based tools to better support contributions to OSS projects.
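To make the idea concrete, here is a minimal sketch of persona-based prompting, assuming the `openai` Python client; the persona text, model name, and question are hypothetical, not the paper's materials.

```python
# Illustrative sketch of persona-based prompt engineering (not the paper's code).
# Assumes the `openai` Python client; the persona and question are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A persona capturing one problem-solving style (e.g., a comprehensive,
# process-oriented learner rather than a tinkerer who learns by doing).
PERSONA = (
    "You are assisting an OSS newcomer who prefers step-by-step, "
    "comprehensive explanations over trial-and-error snippets. "
    "Explain prerequisites first, then give concrete commands."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I submit my first pull request to this project?"))
```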
- Abstract: In the face of climate change, climate literacy is becoming increasingly important. With wide access to generative AI tools, such as OpenAI's ChatGPT, we explore the potential of AI platforms for ordinary citizens asking climate literacy questions. Here, we focus on a global scale and collect responses from ChatGPT (GPT-3.5 and GPT-4) on climate change-related hazard prompts over multiple iterations, utilizing OpenAI's API, and compare the results with credible hazard risk indices (a sketch of this kind of repeated querying follows). We find general agreement in these comparisons and consistency in ChatGPT's responses across iterations, with GPT-4 displaying fewer errors than GPT-3.5. Generative AI tools may be used in climate literacy, a timely topic of importance, but must be scrutinized for potential biases and inaccuracies moving forward and considered in a social context. Future work should identify and disseminate best practices for optimal use across various generative AI tools.
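A minimal sketch of this kind of repeated querying, assuming the `openai` Python client; the prompt, iteration count, and model names are illustrative stand-ins for the study's protocol.

```python
# Sketch: collect repeated ChatGPT responses to a climate-hazard prompt so
# consistency across iterations can be compared. Assumes the `openai` Python
# client; the prompt and iteration count are illustrative, not the study's.
from openai import OpenAI

client = OpenAI()

PROMPT = "Rate the flood hazard risk of Bangladesh on a 0-10 scale. Reply with a number."

def collect(model: str, iterations: int = 5) -> list[str]:
    answers = []
    for _ in range(iterations):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers

# Responses from two model generations can then be compared against hazard indices.
print(collect("gpt-3.5-turbo"))
print(collect("gpt-4"))
```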
- Building a skilled cybersecurity workforce is paramount to building a safer digital world. However, the diverse skill set, constantly emerging vulnerabilities, and deployment of new cyber threats make learning cybersecurity challenging. Traditional education methods struggle to cope with cybersecurity's rapidly evolving landscape and to keep students engaged and motivated. Studies of student behavior show that interactive learning, such as engaging through a question-answering or dialogue system, is one of the most effective learning methodologies, so there is a strong need for advanced AI-enabled education tools that promote interactive learning in cybersecurity. Unfortunately, there are no publicly available standard question-answer datasets for building such systems to help students and novice learners master cybersecurity concepts, tools, and techniques. Course materials and online question banks are unstructured and must be validated and updated by domain experts, which is tedious when done manually. In this paper, we propose CyberGen, a novel unification of large language models (LLMs) and knowledge graphs (KGs) to automatically generate questions and answers for cybersecurity. Augmenting prompts with structured knowledge from knowledge graphs improves factual reasoning and reduces hallucinations in LLMs. We used knowledge triples from the cybersecurity knowledge graph AISecKG to design prompts for ChatGPT and generated questions and answers using different prompting techniques (a sketch of triple-grounded prompting follows). Our question-answer dataset, CyberQ, contains around 4k question-answer pairs, random samples of which a domain expert manually evaluated for consistency and correctness. We train a generative model on CyberQ for the question-answering task.
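A minimal sketch of triple-grounded prompt construction, assuming the `openai` Python client; the triples and template below are hypothetical illustrations, not AISecKG contents or the paper's prompts.

```python
# Sketch: build a question-generation prompt grounded in KG triples.
# The triples and template are hypothetical; the paper draws its triples
# from AISecKG and uses several prompting techniques.
from openai import OpenAI

client = OpenAI()

# (subject, relation, object) triples as a knowledge graph might supply them.
triples = [
    ("Nmap", "is_a", "network scanning tool"),
    ("Nmap", "used_for", "port discovery"),
]

facts = "\n".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)
prompt = (
    "Using only the facts below, write one exam-style question and its answer "
    "for a cybersecurity novice.\n\nFacts:\n" + facts
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```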
- Abstract: Explainability and Safety engender trust. These require a model to exhibit consistency and reliability. To achieve these, it is necessary to use and analyze data and knowledge with statistical and symbolic AI methods relevant to the AI application; neither alone will do. Consequently, we argue and seek to demonstrate that the NeuroSymbolic AI approach is better suited for making AI a trusted system. We present the CREST framework, which shows how Consistency, Reliability, user-level Explainability, and Safety are built on NeuroSymbolic methods that use data and knowledge to support requirements for critical applications such as health and well-being. This article focuses on Large Language Models (LLMs) as the chosen AI system within the CREST framework. LLMs have garnered substantial attention from researchers due to their versatility in handling a broad array of natural language processing (NLP) scenarios. As examples, ChatGPT and Google's MedPaLM have emerged as highly promising platforms for answering general and health-related queries, respectively. Nevertheless, these models remain black boxes despite incorporating human feedback and instruction-guided tuning. For instance, ChatGPT can generate unsafe responses despite instituted safety guardrails. CREST presents a plausible approach harnessing procedural and graph-based knowledge within a NeuroSymbolic framework to shed light on the challenges associated with LLMs; a toy illustration of graph-grounded checking follows.
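As a toy illustration of graph-grounded checking (not the CREST implementation; the facts and the claim-extraction step are assumed), a symbolic layer might flag LLM claims that contradict a knowledge graph:

```python
# Toy sketch of a symbolic check over an LLM answer: flag claims that
# contradict a small knowledge graph. The facts and the extraction step
# are hypothetical stand-ins for CREST's procedural/graph-based knowledge.
KNOWN_FACTS = {
    ("ibuprofen", "interacts_with", "warfarin"): True,
    ("ibuprofen", "interacts_with", "vitamin c"): False,
}

def check_claim(subject: str, relation: str, obj: str) -> str:
    verdict = KNOWN_FACTS.get((subject.lower(), relation, obj.lower()))
    if verdict is True:
        return "supported by the knowledge graph"
    if verdict is False:
        return "contradicted by the knowledge graph; flag response as unsafe"
    return "unknown; defer to a human or abstain"

# A claim extracted from an LLM response (the extraction itself is assumed).
print(check_claim("Ibuprofen", "interacts_with", "Warfarin"))
```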