

This content will become publicly available on April 25, 2026

Title: "I Would Never Trust Anything Western": Kumu (Educator) Perspectives on Use of LLMs for Culturally Revitalizing CS Education in Hawaiian Schools
As large language models (LLMs) become increasingly integrated into educational technology, their potential to assist in developing curricula has drawn interest among educators. Despite this growing attention, their applicability in culturally responsive Indigenous educational settings, such as Hawai‘i’s public schools and Kaiapuni (immersion language) programs, remains understudied. Additionally, ‘Ōlelo Hawai‘i, the Hawaiian language, is a low-resource language, which raises unique challenges and concerns about cultural sensitivity and the reliability of generated content. Through surveys and interviews with kumu (educators), this study explores the perceived benefits and limitations of using LLMs for culturally revitalizing computer science (CS) education in Hawaiian public schools with Kaiapuni programs. Our findings highlight AI’s time-saving advantages while exposing challenges such as cultural misalignment and reliability concerns. We conclude with design recommendations for future AI tools to better align with Hawaiian cultural values and pedagogical practices, toward the broader goal of trustworthy, effective, and culturally grounded AI technologies.
Award ID(s):
2345488
PAR ID:
10648943
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Page Range / eLocation ID:
1 to 10
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Although development of Artificial Intelligence (AI) technologies has been underway for decades, the acceleration of AI capabilities and rapid expansion of user access in the past few years has elicited public excitement as well as alarm. Leaders in government and academia, as well as members of the public, are recognizing the critical need for the ethical production and management of AI. As a result, society is placing immense trust in engineering undergraduate and graduate programs to train future developers of AI in their ethical and public welfare responsibilities. In this paper, we investigate whether engineering master’s students believe they receive the training they need from their educational curricula to negotiate this complex ethical landscape. The goal of the broader project is to understand how engineering students become public welfare “watchdogs”; i.e., how they learn to recognize and respond to their public welfare responsibilities. As part of this project, we conducted in-depth interviews with 62 electrical and computer engineering master’s students at a large public university about their educational experiences and understanding of engineers’ professional responsibilities, including those related specifically to AI technologies. This paper asks: (1) Do engineering master’s students see potential dangers of AI related to how the technologies are developed, used, or possibly misused? (2) Do they feel equipped to handle the challenges of these technologies and respond ethically when faced with difficult situations? (3) Do they hold their engineering educators accountable for training them in ethical concerns around AI? We find that although some engineering master’s students see exciting possibilities of AI, most are deeply concerned about the ethical and public welfare issues that accompany its advancement and deployment. 
While some students feel equipped to handle these challenges, the majority feel unprepared to manage these complex situations in their professional work. Additionally, students reported that the ethical development and application of technologies like AI are often not included in curricula or are viewed as “soft skills” that are not as important as “technical” knowledge. Although some students we interviewed shared the sense of apathy toward these topics that they see from their engineering program, most were eager to receive more training in AI ethics. These results underscore the pressing need for engineering education programs, including graduate programs, to integrate comprehensive ethics, public responsibility, and whistleblower training within their curricula to ensure that the engineers of tomorrow are well-equipped to address the novel ethical dilemmas of AI that are likely to arise in the coming years. 
  2. Background and Context. This innovative practice full paper describes the development and implementation of a professional development (PD) opportunity for secondary teachers to learn about ChatGPT. Incorporating generative AI techniques from Large Language Models (LLMs) such as ChatGPT into educational environments offers unprecedented opportunities and challenges. Prior research has highlighted their potential to personalize feedback, assist in lesson planning, generate educational content, and reduce teachers' workload, alongside concerns such as academic integrity and student privacy. However, the rapid adoption of LLMs since ChatGPT's public release in late 2022 has left educators, particularly at the secondary level, with a lack of clear guidance on how LLMs work and can be effectively adopted. Objective. This study aims to introduce a comprehensive, free, and vetted ChatGPT course tailored for secondary teachers, with the objective of enhancing their technological competencies in LLMs and fostering innovative teaching practices. Method. We developed a five-session interactive course on ChatGPT capabilities, limitations, prompt-engineering techniques, ethical considerations, and strategies for incorporating ChatGPT into teaching. We introduced the course to six middle and high school teachers. Our curriculum emphasized active learning through peer discussions, hands-on activities, and project-based learning. We conducted pre- and post-course focus groups to determine the effectiveness of the course and the extent to which teachers' attitudes toward the use of LLMs in schools had changed. To identify trends in knowledge and attitudes, we asked teachers to complete feedback forms at the end of each of the five sessions. We performed a thematic analysis to classify teacher quotes from focus groups' transcripts as positive, negative, and neutral and calculated the ratio of positive to negative comments in the pre- and post-focus groups. 
We also analyzed their feedback on each individual session. Finally, we interviewed all participants five months after course completion to understand the longer-term impacts of the course. Findings. Our participants unanimously shared that all five of the sessions provided a deeper understanding of ChatGPT, featured enough opportunities for hands-on practice, and achieved their learning objectives. Our thematic analysis underlined that teachers gained a more positive and nuanced understanding of ChatGPT after the course. This change is evidenced quantitatively by the fact that quotes with positive connotations rose from 45% to 68% of the total number of positive and negative quotes. Participants shared that in the longer term, the course improved their professional development, understanding of ChatGPT, and teaching practices. Implications. This research underscores the effectiveness of active learning in professional development settings, particularly for technological innovations in computing like LLMs. Our findings suggest that introducing teachers to LLM tools through active learning can improve their work processes and give them a thorough and accurate understanding of how these tools work. By detailing our process and providing a model for similar initiatives, our work contributes to the broader discourse on teaching professional educators about computing and integrating emerging technologies in educational and professional development settings. 
  3. The emergence of ChatGPT, an AI-powered language model, has sparked numerous debates and discussions. In educational research, scholars have raised significant questions regarding the potential, limitations, and ethical concerns around the use of this technology. While research on the application and implications of ChatGPT in academic settings exists, analyses of high-school students’ perspectives are limited. In this study, we use qualitative content analysis to explore the perspectives of high-school students regarding the integration or ban of ChatGPT in their schools through the lens of the Technology Acceptance Model (TAM2). Data was sourced from students’ comments to a New York Times Learning Network article. Findings revealed that students’ perceptions about integrating or banning ChatGPT in schools are influenced by their assessments of the technology’s usefulness, personal experiences, societal technology trends, and ethical considerations. Our findings suggest that student perspectives in this study align with those of educators and policymakers while also reflecting unique viewpoints shaped by students’ specific needs and experiences. Implications emphasize the significance of an inclusive decision-making process around the integration of AI in schools, one that includes students alongside other stakeholders. 
  4. This experience report describes two years of work integrating coding with micro:bits and MakeCode into a Hawaiian immersion bilingual school setting to teach computer science (CS) skills in a place-based approach. This report highlights the collaborative partnerships and programs between a public Hawaiian immersion school, a non-profit organization that manages important cultural sites, and a university lab that develops sustainable technology. Students identified the importance of sustainability in computing by engaging with past, present, and future technologies in culturally relevant contexts. We describe ongoing work to improve the way we support students and teachers in a Hawaiian-immersion bilingual school setting. 
  5. In the pursuit of place-based, generative AI educational technologies, the field of Human-Computer Interaction (HCI) offers a powerful framework for identifying and addressing diverse user needs. In partnership with a Hawaiian language immersion (Kaiapuni) school and 13 educators, this one-year case study presents a research approach rooted in assets-based design and Design Thinking that leverages rapid iteration, usability testing, and speculative prototyping to co-design a generative AI tool for Kaiapuni educators. Our synthesis of observations, participant reflections, and usability testing feedback provides evidence that such methods can help envision ideal outcomes for Kaiapuni education supported by generative AI technologies. 