Integrity, Confidentiality, and Equity: Using Inquiry-Based Labs to Help Students Understand AI and Cybersecurity
Recent advances in Artificial Intelligence (AI) have brought society closer to the long-held dream of creating machines to help with both common and complex tasks and functions. From recommending movies to detecting disease in its earliest stages, AI has become an aspect of daily life many people accept without scrutiny. Despite its functionality and promise, AI has inherent security risks that users should understand and programmers must be trained to address. The ICE (integrity, confidentiality, and equity) cybersecurity labs, developed by a team of cybersecurity researchers, address these vulnerabilities in AI models through a series of hands-on, inquiry-based labs. Through experimenting with and manipulating data models, students can experience firsthand how adversarial samples and bias can degrade the integrity, confidentiality, and equity of deep learning neural networks, and they can implement security measures to mitigate these vulnerabilities. This article describes the pedagogical approach underpinning the ICE labs and discusses both sample activities and technological considerations for teachers who want to implement these labs with their students.
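To make the integrity attack concrete, the sketch below shows how a fast gradient sign method (FGSM) perturbation can be constructed against a PyTorch classifier. It is a minimal illustration, not code from the ICE labs themselves; the toy model, random input, and perturbation budget are all hypothetical stand-ins for whatever a lab would provide.

```python
# Minimal FGSM sketch: craft an adversarial perturbation against a classifier.
# Illustrative only; the ICE labs use their own models and datasets.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy, untrained network standing in for a deep learning model (hypothetical).
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.rand(1, 1, 28, 28)   # stand-in input image
y = torch.tensor([3])          # stand-in "true" label

# Compute the gradient of the loss with respect to the input.
x.requires_grad_(True)
loss = nn.CrossEntropyLoss()(net(x), y)
loss.backward()

# Step the input in the direction that increases the loss (the FGSM attack).
epsilon = 0.1                  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# With a trained model and a suitable epsilon, the predicted class typically
# changes; here the point is simply to show the mechanics.
print("clean prediction:      ", net(x).argmax(dim=1).item())
print("adversarial prediction:", net(x_adv).argmax(dim=1).item())
```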
- Award ID(s): 2315596
- PAR ID: 10526505
- Publisher / Repository: The University Libraries of KSU
- Date Published:
- Journal Name: The Journal of Cybersecurity Education Research and Practice (JCERP)
- ISSN: 2472-2707
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
-
Creating engaging cybersecurity education materials typically requires months of development time and specialized expertise. This paper describes how we used generative AI to address this challenge. We used Claude AI to generate a complete interactive platform that teaches students basic microelectronics through IoT hacking. Through iterative prompting, we generated more than 15,000 lines of functional code, including interactive visualizations, Python security tools, and gamified quizzes with real-time leaderboards. The curriculum guides students through the evolution of computing, from vacuum tubes to modern IoT devices, and then helps them apply this foundation to discover real vulnerabilities. We implemented this platform at a GenCyber summer camp with 40 participants, where students identified actual security issues, including password weaknesses and denial-of-service flaws, in AmpliPi audio systems (open-source network audio devices designed for multi-room audio distribution). The entire development process took only three weeks instead of the typical several months. The AI produced quality educational content, although we reviewed everything for technical accuracy and ethical considerations. During the camp, students remained engaged through competitive elements and hands-on labs, learning both theoretical concepts and practical skills. The students used AI-generated tools, including working implementations of SlowLoris and dictionary attacks, to test real systems. Our experience demonstrates that generative AI can efficiently create effective cybersecurity education materials that remain technically current. All materials are publicly available on GitHub for educational use. This approach could help educators keep pace with rapidly evolving technology despite traditional curriculum development constraints.
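The password-weakness findings mentioned above can be illustrated with a small dictionary-style audit. This is a generic sketch, not one of the AI-generated camp tools published on GitHub; the wordlist and demo accounts are tiny hypothetical stand-ins, and such checks should only be run on systems you are authorized to test.

```python
# Dictionary-style password audit: flag accounts whose passwords appear in a
# common-password wordlist. Illustrative sketch only, for authorized lab use.
COMMON_PASSWORDS = {"password", "123456", "admin", "letmein", "qwerty"}  # hypothetical tiny wordlist


def audit(credentials: dict[str, str]) -> list[str]:
    """Return the usernames whose passwords appear in the wordlist."""
    return [user for user, pw in credentials.items() if pw.lower() in COMMON_PASSWORDS]


if __name__ == "__main__":
    demo_accounts = {"alice": "S3cure!pass", "bob": "admin"}  # made-up demo data
    print("weak accounts:", audit(demo_accounts))
```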
-
Artificial Intelligence (AI) technologies have become increasingly pervasive in our daily lives. Recent breakthroughs such as large language models (LLMs) are increasingly used around the world to enhance work practices and boost productivity. However, the advent of these technologies has also brought forth new challenges in the critical area of social cybersecurity. While AI has opened new frontiers in addressing social issues such as cyberharassment and cyberbullying, it has also worsened existing problems such as the generation of hateful content, bias, and demographic prejudices. Although the interplay between AI and social cybersecurity has gained much attention from the research community, very few educational materials have been designed to engage students by integrating AI and socially relevant cybersecurity through an interdisciplinary approach. In this paper, we present our newly designed open-learning platform, which can be used to meet the ever-increasing demand for advanced training at the intersection of AI and social cybersecurity. The platform consists of hands-on labs and educational materials that incorporate the latest research results in AI-based social cybersecurity, such as cyberharassment detection, AI bias and prejudice, and adversarial attacks on AI-powered systems, and is implemented using Jupyter Notebook, an open-source interactive computing platform, for effective hands-on learning. Through a user study of 201 students from two universities, we demonstrate that students have a better understanding of AI-based social cybersecurity issues and their mitigation after doing the labs, and they are enthusiastic about learning to use AI algorithms to address social cybersecurity challenges for social good.
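To give a flavor of what a cyberharassment-detection lab might build, here is a generic scikit-learn sketch of a text classifier. It is not the platform's actual notebook code, and the handful of training examples are made up; a real lab would use a labeled corpus.

```python
# Toy cyberharassment/toxicity classifier of the kind such a lab might build.
# Generic sketch with made-up examples, not the platform's notebook code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-written training set (hypothetical): 1 = harassing, 0 = benign.
texts = [
    "you are worthless and everyone hates you",
    "nobody wants you here, just leave",
    "great job on the project, congratulations",
    "thanks for sharing, this was really helpful",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you are a great friend", "everyone hates you, leave"]))
```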
-
Existing research has primarily delved into computer science outreach aimed at K-12 students, with a focus on both informal and non-formal approaches. However, a noticeable research gap exists when it comes to cybersecurity outreach tailored specifically for underserved secondary school students. This article addresses this void by presenting an iterative pilot of a cybersecurity curriculum. This innovative curriculum integrates a one-week summer camp and a series of 1.5-hour workshops designed to give students a comprehensive understanding of cybersecurity. The overarching goal of this approach is to foster wider participation in computing, particularly in cybersecurity, and to spark interest among students who may currently face limited access to computing resources. The cybersecurity lessons featured in this curriculum adhere to the standards set by Cyber.org, an organization supported by the Cybersecurity and Infrastructure Security Agency (CISA). Key topics covered include networking, the confidentiality, integrity, and availability (CIA) triad, and operating system security. This paper not only outlines the process of creating and implementing these cybersecurity lessons but also emphasizes the iterative refinement process they underwent. The discussion primarily revolves around the valuable insights gained from implementing this curriculum at two prominent public universities in the eastern United States. By bridging the research gap and focusing on practical applications, this initiative contributes significantly to the broader discourse on cybersecurity education for underserved secondary school students.