Search for: All records

Award ID contains: 1946442


  1. Free, publicly-accessible full text available November 1, 2026
  2. Backdoor attacks pose a critical threat by embedding hidden triggers into inputs, causing models to misclassify them into attacker-chosen target labels. While extensive research has focused on mitigating these attacks in object recognition models through weight fine-tuning, much less attention has been paid to detecting backdoored samples directly. Given the vast datasets used in training, manual inspection for backdoor triggers is impractical, and even state-of-the-art defense mechanisms fail to fully neutralize their impact. To address this gap, we introduce a method for detecting unseen backdoored images during both training and inference. Building on the success of prompt tuning in Vision Language Models (VLMs), our approach trains learnable text prompts to distinguish clean images from those carrying hidden backdoor triggers. Experiments demonstrate the efficacy of this method, achieving an average detection accuracy of 86% across two widely used datasets for unseen backdoor triggers, establishing a new standard in backdoor defense. (A minimal sketch of the prompt-tuning setup follows this entry.)
    Free, publicly-accessible full text available October 31, 2026
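
The following is a minimal, illustrative sketch of the prompt-tuning idea described in entry 2: freeze a CLIP-style vision-language model and train only a few context vectors to separate "clean" from "backdoored" images. The encoder stubs, embedding width, and context length below are placeholders standing in for a real pretrained VLM, not the paper's actual architecture.

```python
# Hedged sketch: CoOp-style prompt tuning for backdoor detection with a
# frozen CLIP-like model. The encoders are random stand-ins for a real
# pretrained VLM; only the prompt context vectors are trained.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB = 512          # shared embedding width (assumption)
N_CTX = 8          # number of learnable context tokens (assumption)

class FrozenImageEncoder(nn.Module):
    """Placeholder for a pretrained, frozen image encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, EMB))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class FrozenTextEncoder(nn.Module):
    """Placeholder: maps a sequence of token embeddings to one text feature."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(EMB, EMB)
    def forward(self, tok_emb):                  # (n_cls, n_tok, EMB)
        return F.normalize(self.proj(tok_emb.mean(dim=1)), dim=-1)

class PromptDetector(nn.Module):
    def __init__(self, n_classes=2):             # classes: clean / backdoored
        super().__init__()
        self.image_enc, self.text_enc = FrozenImageEncoder(), FrozenTextEncoder()
        for p in self.parameters():               # freeze all VLM weights
            p.requires_grad_(False)
        # Learnable context tokens, one set per class prompt (created after
        # the freeze loop, so they remain trainable).
        self.ctx = nn.Parameter(torch.randn(n_classes, N_CTX, EMB) * 0.02)
        # Fixed stand-ins for the tokenized class-name embeddings.
        self.register_buffer("cls_tok", torch.randn(n_classes, 1, EMB))

    def forward(self, images):
        img_f = self.image_enc(images)                         # (B, EMB)
        txt_f = self.text_enc(torch.cat([self.ctx, self.cls_tok], dim=1))
        return 100.0 * img_f @ txt_f.t()                       # logits (B, 2)

model = PromptDetector()
opt = torch.optim.Adam([model.ctx], lr=2e-3)     # only the prompts optimize
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 2, (16,))
for _ in range(5):
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```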
  3. Free, publicly-accessible full text available October 27, 2026
  4. Free, publicly-accessible full text available October 27, 2026
  5. Free, publicly-accessible full text available September 26, 2026
  6. Free, publicly-accessible full text available September 15, 2026
  7. The rapid advancement of Quantum Machine Learning (QML) has introduced new possibilities and challenges in cybersecurity. Generative Adversarial Networks (GANs) are promising tools in Machine Learning (ML) and QML for generating realistic synthetic data from existing (real) datasets, which aids in the analysis, detection, and protection against adversarial attacks. Quantum Generative Adversarial Networks (QGANs), in particular, can generate both numerical data and image data with high-dimensional features by exploiting quantum superposition. However, effectively loading datasets onto quantum computers faces significant obstacles due to losses and inherent noise, which degrade performance. In this work, we study the losses incurred during training of QGANs and GANs on several state-of-the-art cybersecurity datasets, presenting a comparative analysis of loss-function stability on real datasets as well as GAN-generated synthetic data. We conclude that QGANs demonstrate superior stability, maintaining consistently lower generator loss values than traditional approaches such as GANs; experimental results indicate that this stability of the loss function is more pronounced for QGANs than for GANs. (A sketch of this kind of loss-stability measurement follows this entry.)
    Free, publicly-accessible full text available July 22, 2026
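
Below is a small illustrative sketch of the kind of loss-stability comparison entry 7 describes, on the classical side only: train a toy GAN, record the generator loss at every step, and use its variance over training as a simple stability proxy. The network sizes and data are synthetic stand-ins; a QGAN would replace the generator with a variational quantum circuit (e.g., built in PennyLane), which is not shown here.

```python
# Hedged sketch: tracking generator-loss stability during GAN training.
# Everything here is classical and synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, Z = 8, 4                                   # feature / latent sizes (assumed)
G = nn.Sequential(nn.Linear(Z, 16), nn.ReLU(), nn.Linear(16, DIM))
D = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

g_losses = []
for step in range(200):
    real = torch.randn(32, DIM) + 2.0           # stand-in for a real dataset
    fake = G(torch.randn(32, Z))
    # Discriminator update: real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: fool the discriminator (fake -> 1)
    g_loss = bce(D(G(torch.randn(32, Z))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    g_losses.append(g_loss.item())

losses = torch.tensor(g_losses)
# Variance of the generator loss across training as a stability proxy;
# the paper's comparison would repeat this for the quantum generator.
print(f"mean G loss {losses.mean():.3f}, variance {losses.var():.4f}")
```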
  8. In today's fast-paced software development environments, DevOps has revolutionized the way teams build, test, and deploy applications by emphasizing automation, collaboration, and continuous integration/continuous delivery (CI/CD). With these advancements, however, comes an increased need to address security proactively, giving rise to the DevSecOps movement, which integrates security practices into every phase of the software development lifecycle. DevOps security remains underrepresented in academic curricula despite its growing importance in industry. To address this gap, this paper presents a hands-on learning module that combines Chaos Engineering and White-box Fuzzing to teach core principles of secure DevOps practices in an authentic, scenario-driven environment. Chaos Engineering lets students intentionally disrupt systems to observe and understand their resilience, while white-box fuzzing enables systematic exploration of internal code paths to discover corner-case vulnerabilities that typical tests might miss. The module was deployed across three academic institutions, with pre- and post-surveys conducted to evaluate its impact. Pre-survey data revealed that while most students had prior experience in software engineering and cybersecurity, the majority lacked exposure to DevOps security concepts. Post-survey responses, gathered through ten structured questions, showed highly positive feedback: 66.7% of students strongly agreed, and 22.2% agreed, that the hands-on labs improved their understanding of secure DevOps practices. Participants also reported increased confidence in secure coding, vulnerability detection, and resilient infrastructure design. These findings support the integration of experiential learning techniques such as chaos simulations and white-box fuzzing into security education. By aligning academic training with real-world industry needs, the module prepares students for the complex challenges of modern software development and operations. (A minimal fuzzing harness in the style of these labs follows this entry.)
    Free, publicly-accessible full text available July 8, 2026
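
As a flavor of the white-box fuzzing labs described in entry 8, here is a minimal coverage-guided harness using Google's Atheris fuzzer for Python. The parse_record function and its planted corner-case bug are hypothetical stand-ins; a real lab would import and instrument the module under test instead.

```python
# Hedged sketch: a minimal Atheris fuzzing harness. Run as a normal Python
# script (requires `pip install atheris`); the fuzzer mutates inputs until
# the planted corner case crashes parse_record.
import sys
import atheris

@atheris.instrument_func          # collect coverage from this function
def parse_record(data: bytes) -> dict:
    """Toy parser with a planted corner-case bug for the fuzzer to find."""
    text = data.decode("utf-8", errors="ignore")
    key, _, value = text.partition("=")
    if key == "len":
        n = int(value)            # crashes on input b"len=" (ValueError)
        return {"len": n}
    return {key: value}

def TestOneInput(data: bytes):
    # Entry point called by Atheris with each mutated input.
    parse_record(data)

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```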
  9. This work introduces a novel physics-informed neural network (PINN)-based framework for modeling and optimizing false data injection (FDI) attacks on electric vehicle charging station (EVCS) networks, with a focus on the centralized charging management system (CMS). By embedding the governing physical laws as constraints within the neural network's loss function, the proposed framework enables scalable, real-time analysis of cyber-physical vulnerabilities. The PINN models EVCS dynamics under both normal and adversarial conditions while optimizing stealthy attack vectors that exploit voltage and current regulation. Evaluations on the IEEE 33-bus system demonstrate the framework's ability to uncover critical vulnerabilities, underscoring the urgent need for enhanced resilience strategies in EVCS networks against emerging cyber threats targeting the power grid. The framework also lays the groundwork for exploring a broader range of cyber-physical attack scenarios on EVCS networks and their impact on power grid operations, providing a flexible platform for studying the interplay between physical constraints and adversarial manipulations. This approach opens avenues for future research into robust mitigation strategies and resilient design principles tailored to the evolving cybersecurity challenges in smart grid systems. (A toy sketch of a physics-constrained loss follows this entry.)
    Free, publicly-accessible full text available July 8, 2026
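
Entry 9's core mechanism, embedding a physical law as a constraint in the training loss, follows the standard PINN pattern sketched below. The linear voltage-drop relation used as the "physics" here is a deliberately simplified stand-in; the paper's actual power-flow constraints for the IEEE 33-bus system are far richer and are not reproduced.

```python
# Hedged sketch: the PINN loss pattern, loss = data_loss + w * physics_loss.
# Toy physics: downstream voltage V_next = V - I * R along one feeder line.
import torch
import torch.nn as nn

torch.manual_seed(0)
R = 0.05                                        # assumed line resistance (p.u.)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Synthetic measurements: inputs (V, I) at a bus, noisy downstream voltage.
V = torch.rand(256, 1) * 0.1 + 0.95
I = torch.rand(256, 1)
x = torch.cat([V, I], dim=1)
y_meas = V - I * R + 0.002 * torch.randn_like(V)   # noisy "sensor" data

for step in range(500):
    y_hat = net(x)
    data_loss = ((y_hat - y_meas) ** 2).mean()
    # Physics residual: penalize predictions violating V_next = V - I * R.
    physics_loss = ((y_hat - (V - I * R)) ** 2).mean()
    loss = data_loss + 10.0 * physics_loss      # constraint weight is assumed
    opt.zero_grad(); loss.backward(); opt.step()
```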
  10. The increasing use of high-dimensional imaging in medical AI raises significant privacy and security concerns. This paper presents a Bootstrap Your Own Latent (BYOL)-based self-supervised learning (SSL) framework for secure image processing, ensuring compliance with HIPAA and privacy-preserving machine learning (PPML) techniques. Our method integrates federated learning, homomorphic encryption, and differential privacy to enhance security while reducing dependence on labeled data. Experimental results on the MNIST and NIH Chest X-ray datasets demonstrate classification accuracies of 97.5% and 99.99% (40% before fine-tuning), with improved clustering performance using K-Means (Silhouette Score: 0.5247). These findings validate BYOL's capability for robust, privacy-preserving image processing while emphasizing the need for fine-tuning to optimize classification performance. (A minimal sketch of the BYOL objective follows this entry.)
    Free, publicly-accessible full text available July 8, 2026
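
For entry 10, the sketch below shows the BYOL objective at the core of such a framework: an online network with a predictor chases the output of a slowly moving exponential-moving-average (EMA) target network across two augmented views of each image. The tiny encoders, noise-based augmentations, and momentum value are illustrative assumptions; the federated learning, homomorphic encryption, and differential privacy layers are not shown.

```python
# Hedged sketch: the BYOL self-supervised objective on synthetic MNIST-sized
# inputs. Neither network ever sees labels.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 256), nn.ReLU(), nn.Linear(256, o))

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256))
online = nn.Sequential(encoder, mlp(256, 128))        # encoder + projector
predictor = mlp(128, 128)                             # online-side predictor
target = copy.deepcopy(online)                        # EMA copy, no gradients
for p in target.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(list(online.parameters()) + list(predictor.parameters()),
                       lr=3e-4)

def byol_loss(p, z):
    # Negative cosine similarity; stop-gradient on the target branch.
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

for step in range(100):
    x = torch.rand(32, 1, 28, 28)                     # stand-in for MNIST
    v1 = x + 0.1 * torch.randn_like(x)                # toy augmentations
    v2 = x + 0.1 * torch.randn_like(x)
    loss = byol_loss(predictor(online(v1)), target(v2)) + \
           byol_loss(predictor(online(v2)), target(v1))
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                             # EMA target update
        for pt, po in zip(target.parameters(), online.parameters()):
            pt.mul_(0.99).add_(0.01 * po)
```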