skip to main content
US FlagAn official website of the United States government
dot gov icon
Official websites use .gov
A .gov website belongs to an official government organization in the United States.
https lock icon
Secure .gov websites use HTTPS
A lock ( lock ) or https:// means you've safely connected to the .gov website. Share sensitive information only on official, secure websites.


Search for: All records

Creators/Authors contains: "Shahriar, Hossain"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. The rapid advancement of Quantum Machine Learning (QML) has introduced new possibilities and challenges in the field of cybersecurity. Generative Adversarial Networks (GANs) have been used as promising tools in Machine Learning (ML) and QML for generating realistic synthetic data from existing (real) dataset which aids in the analysis, detection, and protection against adversarial attacks. In fact, Quantum Generative Adversarial Networks (QGANs) has great ability for numerical data as well as image data generation which have high-dimensional features using the property of quantum superposition. However, effectively loading datasets onto quantum computers encounters significant obstacles due to losses and inherent noise which affects performance. In this work, we study the impact of various losses during training of QGANs as well as GANs for various state-of-the-art cybersecurity datasets. This paper presents a comparative analysis of the stability of loss functions for real datasets as well as GANs generated synthetic dataset. Therefore, we conclude that QGANs demonstrate superior stability and maintain consistently lower generator loss values than traditional machine learning approaches like GANs. Consequently, experimental results indicate that the stability of the loss function is more pronounced for QGANs than GANs. 
    more » « less
    Free, publicly-accessible full text available July 22, 2026
  2. The integration of quantum computing with knowledge graphs presents a transformative approach to intelligent information processing that enables enhanced reasoning, semantic understanding, and large-scale data inference. This study introduces a Quantum Knowledge Graph (QKG) framework that combines Neo4j’s LLM Knowledge Graph Builder with Quantum Natural Language Processing (QNLP) to improve the representation, retrieval, and inference of complex knowledge structures. The proposed methodology involves extracting structured relationships from unstructured text, converting them into quantum-compatible representations using Lambeq, and executing quantum circuits via Qiskit to compute quantum embeddings. Using superposition and entanglement, the QKG framework enables parallel relationship processing, contextual entity disambiguation, and more efficient semantic association. These enhancements address the limitations of classical knowledge graphs, such as deterministic representations, scalability constraints, and inefficiencies in the capture of complex relationships. This research highlights the importance of integrating quantum computing with knowledge graphs, offering a scalable, adaptive, and semantically enriched approach to intelligent data processing. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026
  3. This work introduces a novel physics-informed neural network (PINN)-based framework for modeling and optimizing false data injection (FDI) attacks on electric vehicle charging station (EVCS) networks, with a focus on centralized charging management system (CMS). By embedding the governing physical laws as constraints within the neural network’s loss function, the proposed framework enables scalable, real-time analysis of cyber-physical vulnerabilities. The PINN models EVCS dynamics under both normal and adversarial conditions while optimizing stealthy attack vectors that exploit voltage and current regulation. Evaluations on the IEEE 33-bus system demonstrate the framework’s capability to uncover critical vulnerabilities. These findings underscore the urgent need for enhanced resilience strategies in EVCS networks to mitigate emerging cyber threats targeting the power grid. Furthermore, the framework lays the groundwork for exploring a broader range of cyber-physical attack scenarios on EVCS networks, offering potential insights into their impact on power grid operations. It provides a flexible platform for studying the interplay between physical constraints and adversarial manipulations, enhancing our understanding of EVCS vulnerabilities. This approach opens avenues for future research into robust mitigation strategies and resilient design principles tailored to the evolving cybersecurity challenges in smart grid systems. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026
  4. With the rapid growth of technology, accessing digital health records has become increasingly easier. Especially mobile health technology like mHealth apps help users to manage their health information, as well as store, share and access medical records and treatment information. Along with this huge advancement, mHealth apps are increasingly at risk of exposing protected health information (PHI) when security measures are not adequately implemented. The Health Insurance Portability and Accountability Act (HIPAA) ensures the secure handling of PHI, and mHealth applications are required to comply with its standards. But it is unfortunate to note that many mobile and mHealth app developers, along with their security teams, lack sufficient awareness of HIPAA regulations, leading to inadequate implementation of compliance measures. Moreover, the implementation of HIPAA security should be integrated into applications from the earliest stages of development to ensure data security and regulatory adherence throughout the software lifecycle. This highlights the need for a comprehensive framework that supports developers from the initial stages of mHealth app development and fosters HIPAA compliance awareness among security teams and end users. An iOS framework has been designed for integration into the Integrated Development Environment(IDE), accompanied by a web application to visualize HIPAA security concerns in mHealth app development. The web application is intended to guide both developers and security teams on HIPAA compliance, offering insights on incorporating regulations into source code, with the IDE framework enabling the identification and resolution of compliance violations during development. The aim is to encourage the design of secure and compliant mHealth applications that effectively safeguard personal health information. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026
  5. Large Language Models (LLMs) have demonstrated exceptional capabilities in the field of Artificial Intelligence (AI) and are now widely used in various applications globally. However, one of their major challenges is handling high-concurrency workloads, especially under extreme conditions. When too many requests are sent simultaneously, LLMs often become unresponsive which leads to performance degradation and reduced reliability in real-world applications. To address this issue, this paper proposes a queue-based system that separates request handling from direct execution. By implementing a distributed queue, requests are processed in a structured and controlled manner, preventing system overload and ensuring stable performance. This approach also allows for dynamic scalability, meaning additional resources can be allocated as needed to maintain efficiency. Our experimental results show that this method significantly improves resilience under heavy workloads which prevents resource exhaustion and enables linear scalability. The findings highlight the effectiveness of a queue-based web service in ensuring LLMs remain responsive even under extreme workloads. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026
  6. The increasing use of high-dimensional imaging in medical AI raises significant privacy and security concerns. This paper presents a Bootstrap Your Own Latent (BYOL)-based self supervised learning (SSL) framework for secure image processing, ensuring compliance with HIPAA and privacy-preserving machine learning (PPML) techniques. Our method integrates federated learning, homomorphic encryption, and differential privacy to enhance security while reducing dependence on labeled data. Experimental results on the MNIST and NIH Chest Xray datasets demonstrate a classification accuracy of 97.5% and 99.99% (pre-fine-tuning 40%), with improved clustering performance using K-Means (Silhouette Score: 0.5247). These findings validate BYOL’s capability for robust, privacy-preserving image processing while emphasizing the need for fine-tuning to optimize classification performance. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026
  7. Large language models (LLMs) are becoming a popular tool as they have significantly advanced in their capability to tackle a wide range of language-based tasks. However, LLMs applications are highly vulnerable to prompt injection attacks, which poses a critical problem. These attacks target LLMs applications through using carefully designed input prompts to divert the model from adhering to original instruction, thereby it could execute unintended actions. These manipulations pose serious security threats which potentially results in data leaks, biased outputs, or harmful responses. This project explores the security vulnerabilities in relation to prompt injection attacks. To detect whether a prompt is vulnerable or not, we follows two approaches: 1) a pre-trained LLM, and 2) a fine-tuned LLM. Then, we conduct a thorough analysis and comparison of the classification performance. Firstly, we use pre-trained XLMRoBERTa model to detect prompt injections using test dataset without any fine-tuning and evaluate it by zero-shot classification. Then, this proposed work will apply supervised fine-tuning to this pre-trained LLM using a task-specific labeled dataset from deep set in huggingface, and this fine-tuned model achieves impressive results with 99.13% accuracy, 100% precision, 98.33% recall and 99.15% F1-score thorough rigorous experimentation and evaluation. We observe that our approach is highly efficient in detecting prompt injection attacks. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026
  8. This work introduces an novel approach to improving cybersecurity systems to focus on spam email-based cyberattacks. The proposed technique tackles the challenge of training Machine Learning (ML) models with limited data samples by leveraging Bidirectional Encoder Representations from Transformers (BERT) for contextualized embeddings. Unlike traditional embedding methods, BERT offers a nuanced representation of smaller datasets, enabling more effective ML model training. The methodology will use several pre-trained BERT models for generating contextualized embeddings using data samples, and these embeddings will be fed to various ML algorithms for effective training. This approach demonstrates that even with scarce data, BERT embeddings significantly enhance model performance compared to conventional embedding approaches like Word2Vec. The technique proves especially advantageous for insufficient instances of high-quality dataset. The result of this proposed work outperforms traditional techniques to mitigate phishing attacks with few data samples. This work provides a robust accuracy of 99.25% when we use multilingual BERT (M-BERT) to embed dataset. 
    more » « less
    Free, publicly-accessible full text available May 5, 2026
  9. Free, publicly-accessible full text available April 27, 2026
  10. In today’s fast-paced software development environments, DevOps has revolutionized the way teams build, test, and deploy applications by emphasizing automation, collaboration, and continuous integration/continuous delivery (CI/CD). However, with these advancements comes an increased need to address security proactively, giving rise to the DevSecOps movement, which integrates security practices into every phase of the software development lifecycle. DevOps security remains underrepresented in academic curricula despite its growing importance in the industry. To address this gap, this paper presents a handson learning module that combines Chaos Engineering and Whitebox Fuzzing to teach core principles of secure DevOps practices in an authentic, scenario-driven environment. Chaos Engineering allows students to intentionally disrupt systems to observe and understand their resilience, while White-box Fuzzing enables systematic exploration of internal code paths to discover cornercase vulnerabilities that typical tests might miss. The module was deployed across three academic institutions, and both pre- and post-surveys were conducted to evaluate its impact. Pre-survey data revealed that while most students had prior experience in software engineering and cybersecurity, the majority lacked exposure to DevOps security concepts. Post-survey responses gathered through ten structured questions showed highly positive feedback 66.7% of students strongly agreed, and 22.2% agreed that the hands-on labs improved their understanding of secure DevOps practices. Participants also reported increased confidence in secure coding, vulnerability detection, and resilient infrastructure design. These findings support the integration of experiential learning techniques like chaos simulations and white-box fuzzing into security education. By aligning academic training with realworld industry needs, this module effectively prepares students for the complex challenges of modern software development and operations. 
    more » « less
    Free, publicly-accessible full text available July 8, 2026