-
Continuous-time dynamics models, e.g., neural ordinary differential equations, enable accurate modeling of the underlying dynamics in time-series data. However, parameterizing the dynamics with neural networks makes it difficult for humans to identify dependence structures, especially in the presence of delayed effects. Consequently, these models are not an attractive option when capturing dependence matters more than accurate modeling, e.g., in tsunami forecasting. In this paper, we present a novel method for identifying dependence structures in continuous-time dynamics models. We take a two-step approach: (1) during training, we promote weight sparsity in the model's first layer; (2) after training, we prune the sparse weights to identify dependence structures. In evaluation, we test our method in scenarios where the exact dependence structures of the time series are known. Compared to baselines, our method is more effective in uncovering dependence structures in data even when there are delayed effects. Moreover, we apply our method to a real-world tsunami-forecasting task, where the exact dependence structures are unknown beforehand. Even in this challenging scenario, our method still learns physically consistent dependence structures and achieves high forecasting accuracy.
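A minimal sketch of the two-step recipe this abstract describes, reconstructed from the description alone rather than from the paper's code: an L1 penalty on the first-layer weights of a neural-ODE dynamics function during training, followed by magnitude pruning whose surviving columns mark the inputs each state variable depends on. All names here (`OdeFunc`, `l1_weight`, `prune_threshold`) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): promote sparsity
# in the first layer of a neural-ODE dynamics function, then prune to
# read off which inputs the learned dynamics depend on.
import torch
import torch.nn as nn

class OdeFunc(nn.Module):  # hypothetical name
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.first = nn.Linear(dim, hidden)   # sparsity is promoted here
        self.rest = nn.Sequential(nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, x):
        return self.rest(self.first(x))

def loss_with_sparsity(pred, target, func, l1_weight=1e-3):
    # Step (1): an L1 penalty on the first-layer weights pushes entries
    # tied to irrelevant or non-causal inputs toward zero during training.
    mse = ((pred - target) ** 2).mean()
    return mse + l1_weight * func.first.weight.abs().sum()

def dependence_mask(func, prune_threshold=1e-2):
    # Step (2): after training, prune weights below a threshold; an input
    # column with any surviving weight marks a dependence.
    w = func.first.weight.detach()
    pruned = torch.where(w.abs() < prune_threshold, torch.zeros_like(w), w)
    return pruned.abs().sum(dim=0) > 0  # one flag per input variable
```

Restricting the penalty to the first layer is what makes the pruned weight pattern interpretable: every downstream layer only sees features the first layer lets through, so a zeroed-out input column rules out dependence on that variable.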
-
This paper presents an innovative approach to DevOps security education, addressing the dynamic landscape of cybersecurity threats. We propose a student-centered learning methodology by developing comprehensive hands-on learning modules. Specifically, we introduce labware modules designed to automate static security analysis, empowering learners to identify known vulnerabilities efficiently. These modules offer a structured learning experience with pre-lab, hands-on, and post-lab sections, guiding students through DevOps concepts and security challenges. In this paper, we introduce hands-on learning modules that familiarize students with recognizing known security flaws through the application of Git Hooks. Through practical exercises with real-world code examples containing security flaws, students gain proficiency in detecting vulnerabilities using relevant tools. Initial evaluations conducted across educational institutions indicate that these hands-on modules foster student interest in software security and cybersecurity and equip them with practical skills to address DevOps security vulnerabilities.
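As a rough illustration of the mechanism the modules teach (the abstract does not name a specific analyzer, so the choice of Bandit below is an assumption), a Git pre-commit hook can run static security analysis over the staged files and block a commit that introduces known flaws:

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook sketch: save as .git/hooks/pre-commit and
# make it executable. Runs a static security analyzer (Bandit, assumed
# here) over staged Python files and blocks the commit on findings.
import subprocess
import sys

def staged_python_files():
    # Ask git for the staged files so only the code being committed is scanned.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main():
    files = staged_python_files()
    if not files:
        return 0
    # Bandit exits nonzero when it finds issues, which blocks the commit.
    result = subprocess.run(["bandit", "-q", *files])
    if result.returncode != 0:
        print("pre-commit: static analysis found potential security flaws; commit blocked.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Hooking the analyzer into `pre-commit` rather than CI gives students the tight feedback loop the labware aims for: the vulnerability is flagged before the flawed code ever leaves their machine.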
-
Large Language Models (LLMs) can produce impressively useful output, and people increasingly rely on them because they are easy to access and deliver rapid, high-quality results. However, using these results without appropriate scrutiny poses serious security risks, particularly when they are integrated with other software, APIs, or plugins, because LLM outputs depend heavily on the prompts they receive. It is therefore essential to carefully clean these outputs before using them in other software environments. This paper is designed to teach students about the potential dangers of contaminated LLM output in the context of web development through pre-lab, hands-on, and post-lab experiences. The hands-on lab provides practical guidance on handling LLM vulnerabilities, with real-world examples in Python that show how to keep applications safe. This approach aims to give students a deeper understanding of the precautions necessary to secure software against the vulnerabilities introduced by LLM output.
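A minimal sketch of the kind of cleaning such a lab might teach (the function names are illustrative, not taken from the paper's labware): treat LLM output as untrusted input, escape it before rendering, and validate structured responses instead of trusting them.

```python
# Illustrative sketch: sanitize LLM output before it reaches a web page
# or downstream component. Not the paper's labware; names are assumed.
import html
import json

def render_llm_comment(llm_output: str) -> str:
    # Escape HTML metacharacters so a prompt-injected <script> tag is
    # displayed as text instead of executed in the user's browser (XSS).
    return "<p>" + html.escape(llm_output) + "</p>"

def parse_llm_json(llm_output: str, required_keys=("title", "body")) -> dict:
    # Validate structured output rather than trusting it: parse strictly
    # and reject anything missing the fields the application expects.
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM output is not valid JSON: {exc}") from exc
    if not isinstance(data, dict) or not all(k in data for k in required_keys):
        raise ValueError("LLM output missing required fields")
    return {k: html.escape(str(data[k])) for k in required_keys}

# Example: a response that smuggles in markup is neutralized, not executed.
print(render_llm_comment('Nice post! <script>steal(document.cookie)</script>'))
```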