

Search for: All records

Award ID contains: 1946442

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available August 12, 2025
  2. This paper presents an approach to DevOps security education that addresses the fast-changing landscape of cybersecurity threats. We propose a student-centered learning methodology built on comprehensive hands-on labware modules that automate static security analysis, enabling learners to identify known vulnerabilities efficiently. Each module offers a structured learning experience with pre-lab, hands-on, and post-lab sections that guide students through DevOps concepts and security challenges; the hands-on section teaches students to recognize known security flaws by applying Git hooks (a minimal sketch of such a hook appears after this list). Through practical exercises with real-world code examples containing security flaws, students gain proficiency in detecting vulnerabilities with relevant tools. Initial evaluations across several educational institutions indicate that these modules foster student interest in software security and cybersecurity and equip students with practical skills for addressing DevOps security vulnerabilities.
    Free, publicly-accessible full text available July 2, 2025
  3. Large Language Models (LLMs) can produce highly promising output, and people increasingly rely on them because they are easy to access and deliver fast, impressive results. However, using these results without appropriate scrutiny poses serious security risks, particularly when they are integrated with other software, APIs, or plugins, because LLM outputs are highly dependent on the prompts they receive. It is therefore essential to carefully sanitize these outputs before using them in downstream software environments (a minimal sanitization sketch appears after this list). This paper teaches students about the potential dangers of contaminated LLM output in the context of web development through pre-lab, hands-on, and post-lab experiences. The hands-on lab provides practical guidance on handling LLM vulnerabilities to make applications safe, with real-world examples in Python. This approach aims to give students a deeper understanding of the precautions needed to secure software against the vulnerabilities introduced by LLM output.
    Free, publicly-accessible full text available July 2, 2025
  4. Free, publicly-accessible full text available July 2, 2025
  5. Free, publicly-accessible full text available July 2, 2025
  6. Free, publicly-accessible full text available July 2, 2025
  7. Free, publicly-accessible full text available July 2, 2025
  8. Free, publicly-accessible full text available July 2, 2025
  9. Free, publicly-accessible full text available July 2, 2025
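
The Git-hook workflow described in record 2 can be illustrated with a short script. This is a minimal sketch, not the paper's actual labware: it assumes the hook is saved as .git/hooks/pre-commit and made executable, and the choice of the Bandit static analyzer is an assumption; the modules may use other tools.

    #!/usr/bin/env python3
    # Minimal pre-commit hook sketch: run a static analyzer (Bandit, assumed
    # installed) over staged Python files and block the commit if it reports
    # known security flaws.
    import subprocess
    import sys

    # Files staged for this commit (added, copied, or modified).
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    py_files = [f for f in staged if f.endswith(".py")]
    if not py_files:
        sys.exit(0)  # nothing to scan; allow the commit

    # Bandit exits non-zero when it flags potential vulnerabilities.
    result = subprocess.run(["bandit", "-q", *py_files])
    if result.returncode != 0:
        print("pre-commit: Bandit reported security issues; commit aborted.")
        sys.exit(1)

Because the hook runs before every commit, flagged code is kept out of the repository's history rather than caught after the fact.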
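
Record 3 centers on cleaning LLM output before it reaches a web page. The following is a minimal sketch under stated assumptions: the Flask route and the fake_llm stand-in are hypothetical illustrations, not the paper's lab code; the point it demonstrates is treating model output as untrusted input and HTML-escaping it before rendering.

    # Minimal sketch of sanitizing LLM output before rendering it in a web page.
    # fake_llm is a hypothetical stand-in for a real model call; the escaping
    # step is the precaution the lab is about.
    import html
    from flask import Flask

    app = Flask(__name__)

    def fake_llm(prompt: str) -> str:
        # Stand-in for an LLM API call; returns "contaminated" output that
        # embeds a script tag, the kind of payload the lab warns about.
        return "Sure! <script>document.location='https://evil.example'</script>"

    def sanitize_llm_output(text: str) -> str:
        # Treat model output as untrusted user input: escape HTML
        # metacharacters so injected tags render as inert text.
        return html.escape(text, quote=True)

    @app.route("/ask")
    def ask():
        raw_reply = fake_llm("How do I reset my password?")
        return f"<p>{sanitize_llm_output(raw_reply)}</p>"

    if __name__ == "__main__":
        app.run(debug=False)

With sanitization applied, the injected script tag renders as inert text in the browser instead of executing, which is the class of failure the lab's pre-lab and post-lab exercises examine.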