

Search for: All records

Creators/Authors contains: "Shahriar, Hossain"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. The rapid advancement of Quantum Machine Learning (QML) has introduced new possibilities and challenges in cybersecurity. Generative Adversarial Networks (GANs) are promising tools in both classical Machine Learning (ML) and QML for generating realistic synthetic data from existing (real) datasets, which aids in the analysis and detection of, and protection against, adversarial attacks. Quantum Generative Adversarial Networks (QGANs) are particularly well suited to generating numerical and image data with high-dimensional features, exploiting the property of quantum superposition. However, effectively loading datasets onto quantum computers faces significant obstacles due to losses and inherent noise, which degrade performance. In this work, we study the impact of various losses during the training of QGANs and GANs on several state-of-the-art cybersecurity datasets. This paper presents a comparative analysis of loss-function stability for real datasets as well as GAN-generated synthetic datasets. We conclude that QGANs demonstrate superior stability and maintain consistently lower generator loss values than traditional approaches such as GANs; experimental results indicate that the stability of the loss function is more pronounced for QGANs than for GANs.
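The stability comparison above reduces to summarizing each generator-loss curve by its spread. A minimal stdlib sketch of that summary step, with illustrative loss numbers that are NOT the paper's measurements:

```python
import statistics

def stability(losses):
    """Summarize a generator-loss curve by its mean and standard deviation.
    A lower standard deviation indicates a more stable training run."""
    return statistics.mean(losses), statistics.stdev(losses)

# Hypothetical per-epoch generator losses (illustrative numbers only):
# the "qgan" curve drifts less, mirroring the paper's qualitative finding.
gan_losses  = [1.9, 1.2, 2.4, 0.8, 2.1, 1.0, 2.6, 0.9]
qgan_losses = [1.1, 1.0, 0.9, 0.95, 0.9, 0.85, 0.9, 0.88]

gan_mean, gan_sd = stability(gan_losses)
qgan_mean, qgan_sd = stability(qgan_losses)
print(f"GAN  mean={gan_mean:.2f} sd={gan_sd:.2f}")
print(f"QGAN mean={qgan_mean:.2f} sd={qgan_sd:.2f}")
```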
  2. The integration of quantum computing with knowledge graphs presents a transformative approach to intelligent information processing that enables enhanced reasoning, semantic understanding, and large-scale data inference. This study introduces a Quantum Knowledge Graph (QKG) framework that combines Neo4j’s LLM Knowledge Graph Builder with Quantum Natural Language Processing (QNLP) to improve the representation, retrieval, and inference of complex knowledge structures. The proposed methodology involves extracting structured relationships from unstructured text, converting them into quantum-compatible representations using Lambeq, and executing quantum circuits via Qiskit to compute quantum embeddings. Using superposition and entanglement, the QKG framework enables parallel relationship processing, contextual entity disambiguation, and more efficient semantic association. These enhancements address the limitations of classical knowledge graphs, such as deterministic representations, scalability constraints, and inefficiencies in the capture of complex relationships. This research highlights the importance of integrating quantum computing with knowledge graphs, offering a scalable, adaptive, and semantically enriched approach to intelligent data processing. 
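The full pipeline relies on Neo4j's LLM Knowledge Graph Builder, Lambeq, and Qiskit; a stdlib sketch can only illustrate the first step, turning extracted (subject, relation, object) triples into a graph structure. The triples below are hypothetical examples, not output from the actual system:

```python
from collections import defaultdict

# Hypothetical triples, as an LLM-based extractor might emit them
# (illustrative only; the paper's pipeline uses Neo4j's builder).
triples = [
    ("quantum computing", "ENHANCES", "knowledge graphs"),
    ("knowledge graphs", "REPRESENTS", "relationships"),
    ("QNLP", "ENCODES", "sentences"),
]

# Adjacency-map view of the graph: entity -> list of (relation, target).
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

print(dict(graph))
```

In the described framework, sentences like these would then be mapped to quantum circuits via Lambeq and executed with Qiskit to obtain quantum embeddings.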
  3. This work introduces a novel physics-informed neural network (PINN)-based framework for modeling and optimizing false data injection (FDI) attacks on electric vehicle charging station (EVCS) networks, with a focus on centralized charging management systems (CMS). By embedding the governing physical laws as constraints within the neural network's loss function, the proposed framework enables scalable, real-time analysis of cyber-physical vulnerabilities. The PINN models EVCS dynamics under both normal and adversarial conditions while optimizing stealthy attack vectors that exploit voltage and current regulation. Evaluations on the IEEE 33-bus system demonstrate the framework's capability to uncover critical vulnerabilities. These findings underscore the urgent need for enhanced resilience strategies in EVCS networks to mitigate emerging cyber threats targeting the power grid. Furthermore, the framework lays the groundwork for exploring a broader range of cyber-physical attack scenarios on EVCS networks, offering potential insights into their impact on power grid operations. It provides a flexible platform for studying the interplay between physical constraints and adversarial manipulations, enhancing our understanding of EVCS vulnerabilities, and opens avenues for future research into robust mitigation strategies and resilient design principles tailored to the evolving cybersecurity challenges in smart grid systems.
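The defining ingredient of a PINN is a composite loss: a data-fit term plus a penalty on the residual of the governing equation. A minimal stdlib sketch for a toy first-order dynamic dv/dt = -k*v (a hypothetical stand-in for the EVCS dynamics, using finite differences, not the paper's actual model):

```python
def data_loss(pred, obs):
    """Mean squared error between network predictions and measurements."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def physics_residual(pred, dt, k):
    """Penalize violations of the toy governing law dv/dt = -k*v,
    approximated with forward differences."""
    res = [((pred[i + 1] - pred[i]) / dt + k * pred[i]) ** 2
           for i in range(len(pred) - 1)]
    return sum(res) / len(res)

def pinn_loss(pred, obs, dt, k, lam=1.0):
    """Composite PINN objective: data fit plus weighted physics penalty."""
    return data_loss(pred, obs) + lam * physics_residual(pred, dt, k)

# A trajectory that exactly satisfies the discretized law...
k, dt = 0.5, 0.1
good = [1.0]
for _ in range(5):
    good.append(good[-1] * (1 - k * dt))
# ...versus one that violates it (constant signal).
bad = [1.0] * 6

print(pinn_loss(good, good, dt, k))  # near zero: physics-consistent
print(pinn_loss(bad, good, dt, k))   # larger: physics term penalizes it
```

In the attack-optimization setting described above, the same physics term constrains candidate FDI perturbations to remain plausible under the grid's dynamics.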
  4. With the rapid growth of technology, accessing digital health records has become increasingly easy. Mobile health (mHealth) apps in particular help users manage their health information and store, share, and access medical records and treatment information. Alongside this advancement, mHealth apps are increasingly at risk of exposing protected health information (PHI) when security measures are not adequately implemented. The Health Insurance Portability and Accountability Act (HIPAA) mandates the secure handling of PHI, and mHealth applications are required to comply with its standards. Unfortunately, many mobile and mHealth app developers, along with their security teams, lack sufficient awareness of HIPAA regulations, leading to inadequate implementation of compliance measures. Moreover, HIPAA security should be integrated into applications from the earliest stages of development to ensure data security and regulatory adherence throughout the software lifecycle. This highlights the need for a comprehensive framework that supports developers from the initial stages of mHealth app development and fosters HIPAA compliance awareness among security teams and end users. An iOS framework has been designed for integration into the Integrated Development Environment (IDE), accompanied by a web application that visualizes HIPAA security concerns in mHealth app development. The web application guides both developers and security teams on HIPAA compliance, offering insights on incorporating regulations into source code, while the IDE framework enables the identification and resolution of compliance violations during development. The aim is to encourage the design of secure, compliant mHealth applications that effectively safeguard personal health information.
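An IDE-integrated compliance check of the kind described can be pictured as a rule-based scan over source text. The rules below are hypothetical toy patterns for illustration only; real HIPAA checks are far broader and the described framework targets iOS, not Python:

```python
import re

# Hypothetical rule set: patterns that may indicate PHI leaking into
# logs or traveling over insecure transport (NOT a real HIPAA ruleset).
RULES = [
    (re.compile(r"print\(.*patient", re.I), "PHI may be written to console"),
    (re.compile(r"http://", re.I), "insecure transport (use HTTPS/TLS)"),
]

def scan(source: str):
    """Return (line_number, message) for each rule hit in the source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'url = "http://api.example.com"\nprint("patient record:", record)'
for lineno, message in scan(sample):
    print(f"line {lineno}: {message}")
```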
  5. Large Language Models (LLMs) have demonstrated exceptional capabilities in the field of Artificial Intelligence (AI) and are now widely used in various applications globally. However, one of their major challenges is handling high-concurrency workloads, especially under extreme conditions. When too many requests are sent simultaneously, LLMs often become unresponsive, which leads to performance degradation and reduced reliability in real-world applications. To address this issue, this paper proposes a queue-based system that separates request handling from direct execution. By implementing a distributed queue, requests are processed in a structured and controlled manner, preventing system overload and ensuring stable performance. This approach also allows for dynamic scalability: additional resources can be allocated as needed to maintain efficiency. Our experimental results show that this method significantly improves resilience under heavy workloads, preventing resource exhaustion and enabling linear scalability. The findings highlight the effectiveness of a queue-based web service in ensuring LLMs remain responsive even under extreme workloads.
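The decoupling described above can be sketched with a bounded queue and worker pool. This is a single-process stdlib sketch, not the paper's distributed implementation; `fake_llm` is a hypothetical stand-in for the actual model call:

```python
import queue
import threading

# Bounded queue decouples request intake from model execution: when the
# queue is full, intake blocks instead of overloading the model backend.
requests = queue.Queue(maxsize=8)
results = {}

def fake_llm(prompt):
    """Hypothetical stand-in for the real model call."""
    return prompt.upper()

def worker():
    while True:
        req_id, prompt = requests.get()
        if req_id is None:          # sentinel: shut this worker down
            requests.task_done()
            break
        results[req_id] = fake_llm(prompt)
        requests.task_done()

# Scaling is adding workers; each drains the shared queue independently.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for i, prompt in enumerate(["hello", "world", "queue"]):
    requests.put((i, prompt))
for _ in threads:                   # one sentinel per worker
    requests.put((None, None))
requests.join()
for t in threads:
    t.join()
print(results)
```

A distributed variant would replace `queue.Queue` with an external broker, keeping the same intake/worker separation.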
  6. This work introduces a novel approach to improving cybersecurity systems, focusing on spam email-based cyberattacks. The proposed technique tackles the challenge of training Machine Learning (ML) models with limited data samples by leveraging Bidirectional Encoder Representations from Transformers (BERT) for contextualized embeddings. Unlike traditional embedding methods, BERT offers a nuanced representation of smaller datasets, enabling more effective ML model training. The methodology uses several pre-trained BERT models to generate contextualized embeddings from the data samples, and these embeddings are fed to various ML algorithms for training. This approach demonstrates that even with scarce data, BERT embeddings significantly enhance model performance compared to conventional embedding approaches such as Word2Vec. The technique proves especially advantageous when high-quality data instances are scarce. The proposed approach outperforms traditional techniques for mitigating phishing attacks with few data samples, achieving an accuracy of 99.25% when multilingual BERT (M-BERT) is used to embed the dataset.
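The embed-then-classify pattern can be sketched without the heavy models: here a toy bag-of-words vector stands in for a BERT embedding, and a nearest-centroid rule stands in for the downstream ML algorithm. Everything in this sketch (vocabulary, messages, classifier) is hypothetical illustration, not the paper's setup:

```python
import math
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words vector standing in for a contextual BERT embedding."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["free", "winner", "click", "meeting", "report", "schedule"]
spam = ["free winner click now", "click free prize winner"]
ham = ["meeting schedule attached", "quarterly report meeting"]

def centroid(texts):
    """Per-class mean vector, a stand-in for the downstream ML model."""
    vecs = [embed(t, vocab) for t in texts]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def classify(text):
    v = embed(text, vocab)
    return "spam" if cosine(v, centroid(spam)) >= cosine(v, centroid(ham)) else "ham"

print(classify("click here to claim your free prize"))
```

In the actual method, `embed` would be replaced by a pre-trained (e.g. multilingual) BERT encoder and `classify` by a trained ML classifier.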
  7. Reinforcement learning (RL) is being used more in medical imaging for segmentation, detection, registration, and classification. This survey provides a comprehensive overview of RL techniques applied in this domain, categorizing the literature based on clinical task, imaging modality, learning paradigm, and algorithmic design. We introduce a unified taxonomy that supports reproducibility, highlights design guidance, and identifies underexplored intersections. Furthermore, we examine the integration of Large Language Models (LLMs) for automation and interpretability, and discuss privacy-preserving extensions using Differential Privacy (DP) and Federated Learning (FL). Finally, we address deployment challenges and outline future research directions toward trustworthy and scalable medical RL systems. 
  8. Large language models (LLMs) have become a popular tool as their capability to tackle a wide range of language-based tasks has advanced significantly. However, LLM applications are highly vulnerable to prompt injection attacks, which pose a critical problem. These attacks target LLM applications with carefully designed input prompts that divert the model from its original instructions, causing it to execute unintended actions. Such manipulations pose serious security threats, potentially resulting in data leaks, biased outputs, or harmful responses. This project explores the security vulnerabilities related to prompt injection attacks. To detect whether a prompt contains an injection, we follow two approaches: 1) a pre-trained LLM, and 2) a fine-tuned LLM, and then conduct a thorough analysis and comparison of their classification performance. First, we use a pre-trained XLM-RoBERTa model to detect prompt injections on the test dataset without any fine-tuning, evaluating it via zero-shot classification. We then apply supervised fine-tuning to this pre-trained LLM using a task-specific labeled dataset from deepset on Hugging Face; through rigorous experimentation and evaluation, the fine-tuned model achieves 99.13% accuracy, 100% precision, 98.33% recall, and a 99.15% F1-score. We observe that our approach is highly effective at detecting prompt injection attacks.
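Metrics like those reported follow directly from confusion-matrix counts. A stdlib sketch of the computation, with hypothetical counts chosen for illustration (not the paper's actual confusion matrix):

```python
def metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical counts: 59 injections caught, 1 missed, no false alarms,
# 55 benign prompts correctly passed through.
acc, prec, rec, f1 = metrics(tp=59, fp=0, fn=1, tn=55)
print(f"acc={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```

Note that zero false positives yields 100% precision regardless of how many injections are missed, which is why precision and recall must be read together.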
  9. The increasing use of high-dimensional imaging in medical AI raises significant privacy and security concerns. This paper presents a Bootstrap Your Own Latent (BYOL)-based self-supervised learning (SSL) framework for secure image processing, ensuring compliance with HIPAA and privacy-preserving machine learning (PPML) techniques. Our method integrates federated learning, homomorphic encryption, and differential privacy to enhance security while reducing dependence on labeled data. Experimental results on the MNIST and NIH Chest X-ray datasets demonstrate classification accuracies of 97.5% and 99.99%, respectively (40% before fine-tuning), with improved clustering performance using K-Means (Silhouette Score: 0.5247). These findings validate BYOL's capability for robust, privacy-preserving image processing while emphasizing the need for fine-tuning to optimize classification performance.
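BYOL's core is a negative-cosine-similarity objective between the online network's prediction and the target network's projection, with the target tracking the online weights by exponential moving average. A stdlib sketch on plain vectors (all numbers hypothetical; real BYOL operates on deep-network outputs):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def byol_loss(online_pred, target_proj):
    """BYOL objective: negative cosine similarity between the online
    network's prediction and the (stop-gradient) target projection."""
    return -cosine(online_pred, target_proj)

def ema_update(target_w, online_w, tau=0.99):
    """Target weights track the online weights by exponential moving average."""
    return [tau * t + (1 - tau) * o for t, o in zip(target_w, online_w)]

# Toy unit vectors for two augmented views of the same image: the loss
# approaches -1 as the representations align.
loss = byol_loss([0.8, 0.6], [0.6, 0.8])
print(loss)

target_w = ema_update([0.5, 0.5], [1.0, 0.0])
print(target_w)
```

Because the objective needs no negative pairs or labels, it pairs naturally with the federated, privacy-preserving setting described above.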