Search for: All records

Creators/Authors contains: "Nguyen, Tu N."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Quantum computing is gaining momentum in revolutionizing the way we approach complex problem-solving. However, the practical implementation of quantum algorithms remains a significant challenge due to the error-prone nature and hardware limitations of near-term quantum devices. For instance, physical qubit connectivity is limited, which necessitates the use of quantum SWAP gates to dynamically transform the logical topology during execution. In addition, to optimize fidelity, it is essential to ensure that 1) the allocated hardware has a low error rate and 2) the number of SWAP gates injected into the circuit is minimized. To address these challenges, we propose a suite of algorithms: the Fidelity-aware Graph Extraction Algorithm (FGEA) identifies the hardware region with the lowest probability of error, the Frequency-based Mapping Algorithm (FMA) allocates logical-physical qubit mappings that reduce the potential distance of topological transformation, and the Heuristic Routing Algorithm (HRA) searches for an optimal SWAP-injection strategy. We evaluate the proposed algorithms on an IBM Noisy Intermediate-Scale Quantum (NISQ) computer, the Toronto Falcon processor, using a dataset of 17 quantum circuits of various sizes. All three proposed algorithms outperform the existing SABRE algorithm in reducing the number of SWAP gates required. Our algorithms therefore hold significant promise for enhancing fidelity and reducing the SWAP-gate overhead of implementing quantum algorithms.
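
     The FGEA/FMA/HRA procedures themselves are detailed in the paper; as a rough, hypothetical illustration of the routing problem they target, the sketch below greedily inserts SWAPs along shortest hardware paths until each two-qubit gate acts on adjacent physical qubits. The coupling map, function names, and greedy heuristic are illustrative assumptions, not the authors' implementation (which, per the abstract, outperforms SABRE-style routing).

     ```python
     # Illustrative greedy SWAP routing, NOT the paper's FGEA/FMA/HRA.
     # Moves logical qubits along shortest hardware paths until each
     # two-qubit gate acts on adjacent physical qubits.
     from collections import deque

     def bfs_distances(coupling, start):
         """Hop distances from `start` over the hardware coupling graph."""
         dist = {start: 0}
         queue = deque([start])
         while queue:
             u = queue.popleft()
             for v in coupling[u]:
                 if v not in dist:
                     dist[v] = dist[u] + 1
                     queue.append(v)
         return dist

     def route(gates, coupling, layout):
         """gates: list of (logical_a, logical_b); layout: logical -> physical.
         Returns the SWAPs (on physical qubits) inserted ahead of each gate."""
         phys_of = dict(layout)
         swaps = []
         for a, b in gates:
             while True:
                 pa, pb = phys_of[a], phys_of[b]
                 if bfs_distances(coupling, pa)[pb] <= 1:
                     break  # already adjacent: the gate is executable
                 # Step `a` one hop along a shortest path toward `b`.
                 nxt = min(coupling[pa],
                           key=lambda n: bfs_distances(coupling, n).get(pb, float("inf")))
                 swaps.append((pa, nxt))
                 inv = {p, l} if False else {p: l for l, p in phys_of.items()}
                 phys_of[a] = nxt
                 if nxt in inv:  # whichever logical qubit sat on `nxt` moves back
                     phys_of[inv[nxt]] = pa
         return swaps

     # Toy 4-qubit line 0-1-2-3: a gate between qubits three hops apart
     coupling = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
     print(route([("q0", "q1")], coupling, {"q0": 0, "q1": 3}))  # [(0, 1), (1, 2)]
     ```

     On this toy line, two SWAPs are the minimum needed to make the pair adjacent; fidelity-aware approaches like the paper's additionally weight such choices by hardware error rates.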
  2. Quantum annealing (QA) is a promising optimization technique that finds the global optimum of a combinatorial optimization problem by leveraging quantum fluctuations. In QA, the problem being solved is mapped onto the quantum processing unit (QPU), composed of qubits, through a procedure called minor-embedding. The qubits are connected by a network of couplers, which determine the strength of the interactions between the qubits; the strength of the couplers that connect qubits within a chain is referred to as the chain strength. Balancing the chain strength is imperative: the chains must be strong enough to obtain the optimal solution, yet not so strong that they bias the original problem terms. To this end, we address the problem of identifying the optimal chain strength using the Path Integral Monte Carlo (PIMC) quantum simulation algorithm. The results indicate that a judicious choice of the chain-strength parameter improves quantum annealer performance and solution quality, paving the way for QA to compete with, or potentially outperform, classical optimization algorithms.
    Free, publicly-accessible full text available September 17, 2024
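
     To make the chain-strength trade-off concrete, here is a minimal sketch assuming the D-Wave Ocean SDK (`dimod`, `dwave-system`) is installed; the frustrated triangle, the hypothetical K2,2 target graph, and the brute-force sweep are illustrative stand-ins for the paper's PIMC-based selection. With a weak chain, the exact ground state of the embedded problem contains broken chains; a sufficiently strong chain restores the logical problem.

     ```python
     # Sketch of the chain-strength trade-off on a minor-embedded problem.
     # Too weak -> chains break in the ground state; too strong -> the chain
     # penalty swamps the original problem terms.
     import dimod
     from dwave.embedding import embed_bqm, chain_break_frequency

     # Frustrated antiferromagnetic triangle on logical variables a, b, c.
     bqm = dimod.BinaryQuadraticModel.from_ising(
         {}, {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 1.0})

     # Hypothetical K_{2,2} target hardware graph; "a" needs a 2-qubit chain.
     target_adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
     embedding = {"a": [0, 2], "b": [1], "c": [3]}

     for k in (0.25, 1.0, 4.0):
         embedded = embed_bqm(bqm, embedding, target_adj, chain_strength=k)
         # Exact enumeration stands in for the annealer on this 4-qubit toy.
         ground = dimod.ExactSolver().sample(embedded).lowest()
         breaks = chain_break_frequency(ground, embedding)
         print(f"chain_strength={k}: chain-break frequency {breaks}")
     ```

     On real hardware, the annealer replaces the exact enumeration, and chain-break frequency and solution energy are tracked across reads to judge the chosen strength.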
  3. Recently, with the advent of the Internet of Everything and 5G networks, the amount of data generated by edge scenarios such as autonomous vehicles, smart industry, 4K/8K video, virtual reality (VR), and augmented reality (AR) has exploded. These trends impose real-time, hardware-dependence, low-power, and security requirements on computing facilities and have rapidly popularized edge computing. Meanwhile, artificial intelligence (AI) workloads have dramatically shifted the computing paradigm from cloud services to mobile applications. Unlike AI on cloud and mobile platforms, which is widely deployed and well studied, the performance and resource impact of AI workloads at the edge are not yet well understood; an in-depth analysis and comparison of their advantages, limitations, performance, and resource consumption in edge environments is lacking. In this paper, we perform a comprehensive study of representative AI workloads on edge platforms. We first summarize modern edge hardware and popular AI workloads. We then quantitatively evaluate three categories of the most popular and widely used AI applications (classification, image-to-image, and segmentation) in realistic edge environments based on the Raspberry Pi, Nvidia TX2, and other devices. We find that the interaction between hardware and neural network models incurs non-negligible impact and overhead on AI workloads at the edge. Our experiments show that performance variation and differences in resource footprint limit the suitability of certain workloads and algorithms for edge platforms; users need to select the appropriate workload, model, and algorithm based on the requirements and characteristics of their edge environment.
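
     The paper's benchmarking harness is not reproduced here; as a generic sketch of the kind of per-workload measurement such an edge study involves, the snippet below times inference and reports the weight footprint of a stand-in classification model (torchvision's MobileNetV2). The model choice, input size, and iteration counts are assumptions; it runs on either a CPU-only device such as a Raspberry Pi or a CUDA device such as the Nvidia TX2.

     ```python
     # Generic latency/footprint micro-benchmark for an edge AI workload,
     # using PyTorch + torchvision (>= 0.13) as illustrative stand-ins.
     import time
     import torch
     import torchvision.models as models

     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
     model = models.mobilenet_v2(weights=None).eval().to(device)  # classification stand-in
     x = torch.randn(1, 3, 224, 224, device=device)               # one 224x224 RGB frame

     with torch.no_grad():
         for _ in range(10):                  # warm-up: allocator, cuDNN autotuning
             model(x)
         if device.type == "cuda":
             torch.cuda.synchronize()
         runs = 100
         t0 = time.perf_counter()
         for _ in range(runs):
             model(x)
         if device.type == "cuda":
             torch.cuda.synchronize()         # flush queued GPU work before stopping the clock
         latency_ms = (time.perf_counter() - t0) / runs * 1e3

     params_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 2**20
     print(f"mean latency: {latency_ms:.1f} ms/frame, weights: {params_mb:.1f} MiB")
     ```

     Repeating such a measurement across models and devices is what surfaces the performance variation and resource-footprint differences the abstract describes.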