Title: Integrity, Confidentiality, and Equity: Using Inquiry-Based Labs to Help Students Understand AI and Cybersecurity

Recent advances in Artificial Intelligence (AI) have brought society closer to the long-held dream of creating machines that help with both common and complex tasks. From recommending movies to detecting disease in its earliest stages, AI has become an aspect of daily life that many people accept without scrutiny. Despite its functionality and promise, AI has inherent security risks that users should understand and that programmers must be trained to address. The ICE (integrity, confidentiality, and equity) cybersecurity labs, developed by a team of cybersecurity researchers, address these vulnerabilities in AI models through a series of hands-on, inquiry-based labs. By experimenting with and manipulating data models, students can experience firsthand how adversarial samples and bias degrade the integrity, confidentiality, and equity of deep learning neural networks, and they can implement security measures to mitigate these vulnerabilities. This article describes the pedagogical approach underpinning the ICE labs and discusses both sample activities and technological considerations for teachers who want to implement these labs with their students.
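
The sketch below is not taken from the ICE labs; it is a minimal, hypothetical illustration (assuming PyTorch and a toy, untrained classifier) of the kind of adversarial-sample experiment the abstract describes: a Fast Gradient Sign Method (FGSM)-style perturbation nudges an input just enough that a model's prediction can change.

    # Hypothetical sketch, not from the ICE labs: an FGSM-style adversarial
    # perturbation against a toy PyTorch classifier. On a trained model such
    # perturbations can flip predictions while staying visually imperceptible;
    # this untrained toy model only illustrates the mechanics.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    model.eval()

    x = torch.rand(1, 784)           # stand-in for a normalized image
    y = torch.tensor([3])            # assumed true label
    loss_fn = nn.CrossEntropyLoss()

    # FGSM: step the input in the direction that increases the loss,
    # bounded by a small epsilon.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    epsilon = 0.1
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())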

 
Award ID(s): 2244220
NSF-PAR ID: 10506078
Publisher / Repository: Journal of Cybersecurity Education Research and Practice
Journal Name: Journal of Cybersecurity Education Research and Practice
Volume: 2024
Issue: 1
ISSN: 2472-2707
Subject(s) / Keyword(s): AI security; Education; Hands-on Learning
Sponsoring Org: National Science Foundation
More Like this
  1. Existing research on computer science outreach for K-12 students has focused on both informal and non-formal approaches, but there is a noticeable gap when it comes to cybersecurity outreach tailored specifically for underserved secondary school students. This article addresses that void by presenting an iterative pilot of a cybersecurity curriculum that integrates a one-week summer camp and a series of 1.5-hour workshops designed to give students a comprehensive understanding of cybersecurity. The overarching goal is to foster wider participation in computing, particularly in cybersecurity, and to spark interest among students who currently have limited access to computing resources. The lessons adhere to the standards set by Cyber.org, an organization supported by the Cybersecurity and Infrastructure Security Agency (CISA); key topics include networking, the confidentiality, integrity, and availability (CIA) triad, and operating system security. The paper outlines the process of creating and implementing these lessons, emphasizes the iterative refinement they underwent, and discusses the insights gained from implementing the curriculum at two public universities in the eastern United States. By bridging this research gap and focusing on practical applications, the initiative contributes to the broader discourse on cybersecurity education for underserved secondary school students.
  2. Students, especially those outside the field of cybersecurity, are increasingly turning to Large Language Model (LLM)-based generative AI tools for coding assistance. These AI code generators provide valuable support by producing code from provided input and instructions, but the quality and accuracy of the generated code vary with factors such as task complexity, the clarity of instructions, and the model's familiarity with the programming language. The generated code may also inadvertently use vulnerable built-in functions, introducing source code vulnerabilities and exploitable flaws. This research presents an in-depth analysis and comparison of the code generation, code completion, and security suggestions offered by prominent AI models, including OpenAI Codex, CodeBERT, and ChatGPT, evaluating both their capabilities and the security of their output. The analysis serves as a resource for developers, enabling them to proactively avoid introducing security vulnerabilities into their projects and thereby reduce the need for extensive revisions and rework in both the short and long term.
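     As a hypothetical illustration of the kind of weakness such an analysis looks for (not an example from the paper), the sketch below contrasts an injectable SQL query, the sort of pattern a code generator can emit when prompted casually, with the parameterized form a security-aware reviewer or tool would suggest; the table and function names are invented.

    # Hypothetical sketch, not from the paper: a common vulnerability class in
    # generated code (SQL injection) and its safer equivalent.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable pattern: user input interpolated directly into SQL,
        # e.g. username = "x' OR '1'='1" returns every row.
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Safer pattern: a parameterized query lets the driver handle escaping.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()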

     
  3. Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in domains such as search engines, customer support, and translation. LLMs have also gained traction in the security community, where they have been used to reveal security vulnerabilities and have shown potential in security-related tasks. This paper explores the intersection of LLMs with security and privacy: how LLMs positively impact security and privacy, the potential risks and threats associated with their use, and the inherent vulnerabilities within LLMs themselves. Through a comprehensive literature review, the paper categorizes prior work into “The Good” (beneficial LLM applications), “The Bad” (offensive applications), and “The Ugly” (vulnerabilities of LLMs and their defenses). Among the findings: LLMs have been shown to enhance code security (code vulnerability detection) and data privacy (data confidentiality protection), outperforming traditional methods, yet they can also be harnessed for various attacks, particularly user-level attacks, because of their human-like reasoning abilities. The paper also identifies areas that require further research; for example, work on model and parameter extraction attacks remains limited and often theoretical, hindered by LLM parameter scale and confidentiality, and safe instruction tuning, a recent development, requires more exploration. The authors hope this work sheds light on LLMs' potential to both bolster and jeopardize cybersecurity.