-
The integration of quantum computing with knowledge graphs presents a transformative approach to intelligent information processing that enables enhanced reasoning, semantic understanding, and large-scale data inference. This study introduces a Quantum Knowledge Graph (QKG) framework that combines Neo4j’s LLM Knowledge Graph Builder with Quantum Natural Language Processing (QNLP) to improve the representation, retrieval, and inference of complex knowledge structures. The proposed methodology involves extracting structured relationships from unstructured text, converting them into quantum-compatible representations using Lambeq, and executing quantum circuits via Qiskit to compute quantum embeddings. Using superposition and entanglement, the QKG framework enables parallel relationship processing, contextual entity disambiguation, and more efficient semantic association. These enhancements address the limitations of classical knowledge graphs, such as deterministic representations, scalability constraints, and inefficiencies in the capture of complex relationships. This research highlights the importance of integrating quantum computing with knowledge graphs, offering a scalable, adaptive, and semantically enriched approach to intelligent data processing.
Free, publicly-accessible full text available July 8, 2026.
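As a rough illustration of the quantum-embedding step, the sketch below encodes a single (head, relation, tail) triple as a parameterized two-qubit circuit in Qiskit and reads out the statevector as an embedding. It is a minimal, self-contained example, not the paper's Lambeq-based pipeline; the angle parameters and the `triple_embedding` helper are illustrative assumptions.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def triple_embedding(head_angle, relation_angle, tail_angle):
    """Encode one (head, relation, tail) triple as a two-qubit state (illustrative)."""
    qc = QuantumCircuit(2)
    qc.ry(head_angle, 0)         # head entity amplitude
    qc.ry(tail_angle, 1)         # tail entity amplitude
    qc.cx(0, 1)                  # entangle head and tail
    qc.rz(relation_angle, 1)     # relation-dependent phase
    return Statevector(qc).data  # complex amplitudes used as the embedding

embedding = triple_embedding(np.pi / 3, np.pi / 5, np.pi / 7)
print(np.round(embedding, 3))
```

Superposition enters through the rotation gates and entanglement through the CX gate, which is the structural idea the abstract attributes to the QKG representation.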
-
Quantum-based Machine Learning (QML) combines quantum computing (QC) with machine learning (ML). It can be applied across many sectors, and demand for QML professionals is high; however, QML is not yet part of many schools’ curricula. We design Google Colab labware covering the basic concepts of QC, ML, and QML and their applications in science and engineering fields, applying a three-stage learning strategy for efficient and effective student learning.
Free, publicly-accessible full text available February 18, 2026.
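For the QC-basics stage, a Colab lab might start from something as small as the Bell-state circuit below. This is a generic Qiskit example, not the authors' labware, and it assumes the `qiskit` and `qiskit-aer` packages are installed in the Colab runtime.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0 (Bell state)
qc.measure_all()

result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())   # roughly half '00' and half '11'
```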
-
Deep learning (DL) has attracted interest in healthcare for disease diagnosis systems in medical imaging analysis (MedIA) and is especially applicable in Big Data environments such as federated learning (FL) and edge computing. However, there is little research into mitigating the vulnerabilities of such systems and improving their robustness against adversarial attacks, which can force DL models to misclassify and thus undermine diagnostic accuracy. This paper evaluates the robustness and scalability of DL models for MedIA applications against adversarial attacks while ensuring their applicability in FL settings with Big Data. We fine-tune three state-of-the-art transfer learning models, DenseNet121, MobileNet-V2, and ResNet50, on several MedIA datasets of varying sizes and show that they are effective at disease diagnosis. We then apply the Fast Gradient Sign Method (FGSM) to attack the models and use adversarial training (AT) and knowledge distillation to defend them. We compare the performance of the original transfer learning models and the defended models on clean and perturbed data. The experimental results show that the defensive techniques improve the robustness of the models to the FGSM attack and can be scaled for Big Data as well as used in edge computing environments.
Free, publicly-accessible full text available December 15, 2025.
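The FGSM attack evaluated here follows a standard form; a minimal PyTorch sketch, assuming a generic classifier `model` rather than the paper's fine-tuned networks, looks like this:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Perturb a batch of images along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Adversarial training then mixes such perturbed batches back into the training loop so the model learns to classify them correctly.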
-
Traditional Knowledge Graphs (KGs), such as Neo4j, face challenges in managing high-dimensional relationships and capturing semantic nuances due to their deterministic nature. Quantum Natural Language Processing (QNLP) introduces probabilistic reasoning into the KG context. This integration leverages quantum principles such as superposition, which allows relationships to exist in multiple states simultaneously, and entanglement, where the state of one entity dynamically influences the state of another. This quantum-based probabilistic reasoning provides a richer, more flexible representation of connections, moving beyond binary relationships to model the nuances and variability of real-world interactions. Our research demonstrates that QNLP enhances Neo4j’s ability to analyze context-rich data, improving tasks like entity extraction and knowledge inference. By modeling relationship states probabilistically, QNLP addresses limitations in traditional methods, providing nuanced insights and enabling more advanced, context-aware NLP applications.
Free, publicly-accessible full text available December 15, 2025.
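One simple way to make relationship uncertainty explicit in Neo4j is to store a probability on each edge, for example one derived from repeated quantum measurements. The sketch below uses the official Python driver; the URI, credentials, node labels, and the `probability` property are illustrative assumptions, not the paper's schema.

```python
from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def write_probabilistic_relation(tx, head, tail, rel_type, probability):
    tx.run(
        "MERGE (h:Entity {name: $head}) "
        "MERGE (t:Entity {name: $tail}) "
        "MERGE (h)-[r:RELATES_TO {type: $rel_type}]->(t) "
        "SET r.probability = $probability",
        head=head, tail=tail, rel_type=rel_type, probability=probability,
    )

with driver.session() as session:
    # e.g. a relation whose weight came from measurement statistics of a circuit
    session.execute_write(write_probabilistic_relation,
                          "Paris", "France", "capital_of", 0.93)
driver.close()
```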
-
Machine learning has been successfully applied to big data analytics across various disciplines. However, as data is collected from diverse sectors, much of it is private and confidential. At the same time, one of the major challenges in machine learning is the slow training speed of large models, which often requires high-performance servers or cloud services. To protect data privacy while still allowing model training on such servers, privacy-preserving machine learning using Fully Homomorphic Encryption (FHE) has gained significant attention. However, its widespread adoption is hindered by performance degradation. This paper presents our experiments on training models over encrypted data using FHE. The results show that while FHE ensures privacy, it can significantly degrade performance and requires complex tuning to optimize.
Free, publicly-accessible full text available December 15, 2025.
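A minimal example of computing on ciphertexts, using the CKKS scheme via the TenSEAL library (one common FHE wrapper; the paper does not necessarily use this library or these parameters), is sketched below.

```python
import tenseal as ts

# CKKS context for approximate arithmetic over encrypted real numbers
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

features = [0.5, 1.2, -0.7, 3.1]   # client-side data, encrypted before upload
weights = [0.1, -0.2, 0.4, 0.05]   # plaintext model parameters on the server

enc_features = ts.ckks_vector(context, features)
enc_score = enc_features.dot(weights)   # computed entirely on ciphertext
print(enc_score.decrypt())              # only the key holder can read the result
```

Every homomorphic operation is far slower than its plaintext counterpart, which is the performance degradation the experiments quantify.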
-
Residue Number Systems (RNS) show strong potential for serving integer addition/multiplication-intensive applications. The complexity of Artificial Intelligence (AI) models has grown enormously in recent years, and from a computer-systems perspective, training these large-scale AI models within acceptable time and energy budgets has become a major concern. Matrix multiplication is a dominant subroutine in many prevailing AI models and is addition/multiplication-intensive. However, matrix multiplication in machine learning training typically operates on real numbers, so the benefits RNS offers integer applications cannot be gained directly by AI training. The state-of-the-art RNS real-number encodings, including floating-point and fixed-point, have defects and can be further enhanced. To transform the default RNS benefits into efficiency for large-scale AI training, we propose a low-cost, high-accuracy RNS fixed-point representation: the Single RNS Logical Partition (S-RNS-Logic-P) representation with Scaling Down Postprocessing Multiplication (SD-Post-Mul). Moreover, we extend the implementation details of two other RNS fixed-point methods: the Double RNS Concatenation (D-RNS-Concat) and the S-RNS-Logic-P representation with Scaling Down Preprocessing Multiplication (SD-Pre-Mul). We also design the architectures of these three fixed-point multipliers. In empirical experiments, our S-RNS-Logic-P representation with the SD-Post-Mul method achieves lower latency and energy overhead while maintaining good accuracy. Furthermore, this method extends easily to the Redundant Residue Number System (RRNS) to raise efficiency in error-tolerant domains, such as improving the error-correction efficiency of quantum computing.
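The core RNS idea, independent of the specific fixed-point encodings proposed in the paper, is that multiplication decomposes into small, independent modular multiplications with no carries between channels. A plain-Python sketch with illustrative moduli (not the paper's parameters) follows; the final lines show a generic fixed-point flavor in which reals are scaled up before encoding and the product is scaled back down afterward, which is broadly the role the scaling-down steps play in the proposed representations.

```python
from math import prod

MODULI = (251, 253, 255, 256)   # pairwise coprime, illustrative only
M = prod(MODULI)                # dynamic range of the system

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # each residue channel multiplies independently
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

# fixed-point flavor: scale reals up, multiply in RNS, scale the product back down
SCALE = 1 << 8
a, b = 3.25, 1.5
product = from_rns(rns_mul(to_rns(round(a * SCALE)), to_rns(round(b * SCALE)))) / SCALE**2
print(product)   # 4.875
```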
-
The success of ChatGPT is reshaping the landscape of the entire IT industry. The large language model (LLM) powering ChatGPT is experiencing rapid development, marked by enhanced features, improved accuracy, and reduced latency. Due to the execution overhead of LLMs, prevailing commercial LLM products typically manage user queries on remote servers. However, the escalating volume of user queries and the growing complexity of LLMs have led to servers becoming bottlenecks, compromising the quality of service (QoS). To address this challenge, a potential solution is to shift LLM inference services to edge devices, a strategy currently being explored by industry leaders such as Apple, Google, Qualcomm, Samsung, and others. Beyond alleviating the computational strain on servers and enhancing system scalability, deploying LLMs at the edge offers additional advantages. These include real-time responses even in the absence of network connectivity and improved privacy protection for customized or personal LLMs. This article delves into the challenges and potential bottlenecks currently hindering the effective deployment of LLMs on edge devices. Through deploying the LLaMa-2 7B model with INT4 quantization on diverse edge devices and systematically analyzing experimental results, we identify insufficient memory and/or computing resources on traditional edge devices as the primary obstacles. Based on our observations and empirical analysis, we further provide insights and design guidance for the next generation of edge devices and systems from both hardware and software directions.
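INT4 weight quantization, as used in the LLaMa-2 7B deployment described here, trades precision for roughly a 4x reduction in weight memory. A simple symmetric per-tensor sketch follows; it is illustrative only, since real edge deployments typically quantize per group or per channel rather than per tensor.

```python
import numpy as np

def quantize_int4(weights):
    """Symmetric per-tensor INT4 quantization: map values onto integers in [-8, 7]."""
    scale = np.max(np.abs(weights)) / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int4(w)
print("max quantization error:", np.abs(w - dequantize(q, s)).max())
```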