

Title: Hamiltonian open quantum system toolkit
Abstract

We present an open-source software package called “Hamiltonian Open Quantum System Toolkit” (HOQST), a collection of tools for the investigation of open quantum system dynamics in Hamiltonian quantum computing, including both quantum annealing and the gate model of quantum computing. It features the key master equations (MEs) used in the field, suitable for describing the reduced system dynamics of an arbitrary time-dependent Hamiltonian with either weak or strong coupling to infinite-dimensional quantum baths. We present an overview of the theories behind the various MEs and provide examples to illustrate typical workflows in HOQST. We present an example showing that HOQST can provide order-of-magnitude speedups compared to the “Quantum Toolbox in Python” (QuTiP) for problems with time-dependent Hamiltonians. The package is ready to be deployed on high-performance computing (HPC) clusters and is aimed at providing reliable open-system analysis tools for noisy intermediate-scale quantum (NISQ) devices.
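
HOQST itself is written in Julia; as a point of reference for the QuTiP comparison mentioned above, here is a minimal Python sketch of the kind of benchmark problem involved: a time-dependent (annealing-style) Hamiltonian evolved under a Lindblad master equation with QuTiP's mesolve. The schedule, coupling rate, and observable are illustrative assumptions, not taken from the paper.

```python
# Minimal QuTiP sketch: single-qubit annealing Hamiltonian
# H(t) = -(1 - t/tf) * sigma_x - (t/tf) * sigma_z under weak dephasing.
# All parameter values are illustrative, not from the HOQST paper.
import numpy as np
from qutip import basis, mesolve, sigmax, sigmaz

tf_anneal = 10.0  # total anneal time (arbitrary units)

# Time-dependent Hamiltonian in QuTiP's [operator, coefficient] list format.
H = [[-sigmax(), lambda t, args: 1.0 - t / tf_anneal],
     [-sigmaz(), lambda t, args: t / tf_anneal]]

psi0 = (basis(2, 0) + basis(2, 1)).unit()   # ground state of -sigma_x
gamma = 0.05                                # assumed dephasing rate
c_ops = [np.sqrt(gamma) * sigmaz()]         # Lindblad collapse operator

tlist = np.linspace(0.0, tf_anneal, 201)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmaz()])
print("final <sigma_z>:", result.expect[0][-1])
```

Timing this kind of solve against its HOQST equivalent is, roughly, the shape of the benchmark the abstract refers to.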

 
Award ID(s): 1936388
NSF-PAR ID: 10366773
Author(s) / Creator(s): ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Communications Physics
Volume: 5
Issue: 1
ISSN: 2399-3650
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Unlike their fermionic counterparts, the dynamics of Hermitian quadratic bosonic Hamiltonians are governed by a generally non-Hermitian Bogoliubov-de Gennes effective Hamiltonian. This underlying non-Hermiticity gives rise to a dynamically stable regime, whereby all observables undergo bounded evolution in time, and a dynamically unstable one, whereby evolution is unbounded for at least some observables. We show that stability-to-instability transitions may be classified in terms of a suitably generalized PT symmetry, which can be broken when diagonalizability is lost at exceptional points in parameter space, but also when degenerate real eigenvalues split off the real axis while the system remains diagonalizable. By leveraging tools from Krein stability theory in indefinite inner-product spaces, we introduce an indicator of stability phase transitions, which naturally extends the notion of phase rigidity from non-Hermitian quantum mechanics to the bosonic setting. As a paradigmatic example, we fully characterize the stability phase diagram of a bosonic analogue to the Kitaev–Majorana chain under a wide class of boundary conditions. In particular, we establish a connection between phase-dependent transport properties and the onset of instability, and argue that stable regions in parameter space become of measure zero in the thermodynamic limit. Our analysis also reveals that boundary conditions that support Majorana zero modes in the fermionic Kitaev chain are precisely the same that support stability in the bosonic chain.
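
    As background for the non-Hermiticity statement above (a standard construction, not specific to this paper): a quadratic bosonic Hamiltonian can be written in Nambu form, and the bosonic commutation relations insert an indefinite metric into the equations of motion, which is what makes the effective Bogoliubov-de Gennes generator non-Hermitian.

    ```latex
    % Nambu array: \Phi = (a_1,\dots,a_N,a_1^\dagger,\dots,a_N^\dagger)^T
    \hat H = \tfrac{1}{2}\,\Phi^\dagger M\,\Phi, \qquad M = M^\dagger,
    \qquad i\,\frac{d\Phi}{dt} = G\,\Phi, \quad G \equiv \tau_3 M,
    \quad \tau_3 = \mathrm{diag}(\mathbb{1}_N,\,-\mathbb{1}_N).
    % G is pseudo-Hermitian, \tau_3 G^\dagger \tau_3 = G, so its eigenvalues
    % are real or come in complex-conjugate pairs; dynamical stability requires
    % all eigenvalues real with G diagonalizable.
    ```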

     
  2. Abstract

    We report the implementation of a hierarchical equations of motion (HEOM) module within the open-source Libra software. It includes the standard and scaled HEOM algorithms for computing the dynamics of open quantum systems interacting with a harmonic bath. The module allows computing the evolution of the reduced density matrix as well as spectral lineshapes. The truncation, filtering, and “update list” schemes, together with OpenMP parallelization, allow for further computational savings. The package is written in a mix of C++ and Python, delivering the best compromise between user-friendliness and efficiency. The Python layer of the package takes advantage of standard Python libraries, such as h5py, which allows efficient storage and retrieval of the generated results. The package can be used seamlessly within Jupyter notebooks; its careful design aims to provide maximal convenience and intuitiveness to its users.
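
    For orientation, here is the textbook single-exponential form of the hierarchy that modules like this implement (shown with ħ = 1 for a bath correlation function C(t) = c e^{-γt}; this is the generic structure, not Libra's specific equations):

    ```latex
    \frac{\partial \rho_n}{\partial t}
      = -\,i\,[H_s,\rho_n] \;-\; n\gamma\,\rho_n
        \;-\; i\,[V,\rho_{n+1}]
        \;-\; i\,n\bigl(c\,V\rho_{n-1} - c^{*}\,\rho_{n-1}V\bigr),
    \qquad n = 0,1,\dots,L .
    ```

    Here ρ₀ is the physical reduced density matrix, V is the system part of the system-bath coupling, and the auxiliary operators are cut off at some depth L, which is the role of the truncation (and, in practice, filtering) schemes mentioned above.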

     
  3. Abstract

    Background

    Statistical geneticists employ simulation to estimate the power of proposed studies, test new analysis tools, and evaluate properties of causal models. Although there are existing trait simulators, there is ample room for modernization. For example, most phenotype simulators are limited to Gaussian traits or traits transformable to normality, while ignoring qualitative traits and realistic, non-normal trait distributions. Also, modern computer languages, such as Julia, that accommodate parallelization and cloud-based computing are now mainstream but rarely used in older applications. To meet the challenges of contemporary big studies, it is important for geneticists to adopt new computational tools.

    Results

    We present an open-source Julia package that makes it trivial to quickly simulate phenotypes under a variety of genetic architectures. This package is integrated into our OpenMendel suite for easy downstream analyses. Julia was purpose-built for scientific programming and provides tremendous speed and memory efficiency, easy access to multi-CPU and GPU hardware, and to distributed and cloud-based parallelization. The package is designed to encourage flexible trait simulation, including via the standard devices of applied statistics: generalized linear models (GLMs) and generalized linear mixed models (GLMMs). It also accommodates many study designs: unrelateds, sibships, pedigrees, or a mixture of all three. (Of course, for data with pedigrees or cryptic relationships, the simulation process must include the genetic dependencies among the individuals.) We consider an assortment of trait models and study designs to illustrate integrated simulation and analysis pipelines. Step-by-step instructions for these analyses are available in our electronic Jupyter notebooks on GitHub. These interactive notebooks are ideal for reproducible research.
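
    The package itself is written in Julia; as a language-agnostic illustration of what GLM-based phenotype simulation amounts to (a generic NumPy sketch, not the package's API), consider a binary trait generated from a logistic model over a few causal SNPs:

    ```python
    # Generic GLM phenotype-simulation sketch (illustrative only; this is
    # NumPy, not the Julia package's API). Simulates a case/control trait
    # from a logistic model over a handful of causal SNPs plus a covariate.
    import numpy as np

    rng = np.random.default_rng(2024)
    n, p = 1_000, 5                                # individuals, causal SNPs

    maf = rng.uniform(0.05, 0.5, size=p)           # minor-allele frequencies
    G = rng.binomial(2, maf, size=(n, p))          # additive genotypes 0/1/2
    sex = rng.integers(0, 2, size=n)               # illustrative covariate

    beta = np.array([0.3, -0.2, 0.5, 0.1, -0.4])   # assumed SNP effect sizes
    eta = -1.0 + G @ beta + 0.25 * sex             # linear predictor
    prob = 1.0 / (1.0 + np.exp(-eta))              # logit link
    y = rng.binomial(1, prob)                      # simulated binary trait

    print(f"case fraction: {y.mean():.3f}")
    ```

    A GLMM version would add a random effect drawn with covariance proportional to a kinship matrix, which is how pedigree dependencies enter the simulation.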

    Conclusion

    The package has three main advantages. (1) It leverages the computational efficiency and ease of use of Julia to provide extremely fast, straightforward simulation of even the most complex genetic models, including GLMs and GLMMs. (2) It can be operated entirely within, but is not limited to, the integrated analysis pipeline of OpenMendel. And finally, (3) by allowing a wider range of more realistic phenotype models, it brings power calculations and diagnostic tools closer to what investigators might see in real-world analyses.

     
  4. Abstract

    In silico materials design is hampered by the computational complexity of Kohn–Sham DFT, which scales cubically with the system size. Owing to the development of new-generation kinetic energy density functionals (KEDFs), orbital-free DFT (OFDFT) can now be successfully applied to a large class of semiconductors and such finite systems as quantum dots and metal clusters. In this work, we present DFTpy, an open-source software implementing OFDFT written entirely in Python 3 and outsourcing the computationally expensive operations to third-party modules, such as NumPy and SciPy. When fast simulations are in order, DFTpy exploits the fast Fourier transforms from PyFFTW. New-generation, nonlocal and density-dependent-kernel KEDFs are made computationally efficient by employing linear splines and other methods for fast kernel builds. We showcase DFTpy by solving for the electronic structure of a million-atom system of aluminum metal, computed on a single CPU. The Python 3 implementation is object-oriented, opening the door to easy implementation of new features. As an example, we present a time-dependent OFDFT implementation (hydrodynamic DFT), which we use to compute the spectra of small metal clusters, recovering qualitatively the time-dependent Kohn–Sham DFT result. The Python codebase allows for easy implementation of application programming interfaces. We showcase the combination of DFTpy and ASE for molecular dynamics simulations of liquid metals. DFTpy is released under the MIT license.
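
    As a toy illustration of the orbital-free idea (the simplest KEDF, Thomas–Fermi, shown for intuition only; it is far cruder than the new-generation nonlocal functionals DFTpy actually implements, and this is not DFTpy's API), the kinetic energy becomes an explicit functional of the density alone:

    ```python
    # Toy OFDFT ingredient: Thomas-Fermi kinetic energy
    # T_TF[rho] = C_TF * integral of rho(r)^(5/3) d^3r, on a uniform grid.
    # Illustrative sketch only; not DFTpy's API or its actual KEDFs.
    import numpy as np

    C_TF = (3.0 / 10.0) * (3.0 * np.pi**2) ** (2.0 / 3.0)  # Hartree a.u.

    def tf_kinetic_energy(rho, dV):
        """Thomas-Fermi kinetic energy of a gridded density (Hartree a.u.)."""
        return C_TF * np.sum(rho ** (5.0 / 3.0)) * dV

    # Example: a Gaussian density blob normalized to 2 electrons in a box.
    L, n_pts = 10.0, 64
    x = np.linspace(-L / 2, L / 2, n_pts, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    rho = np.exp(-(X**2 + Y**2 + Z**2))
    dV = (L / n_pts) ** 3
    rho *= 2.0 / (rho.sum() * dV)      # normalize to 2 electrons
    print(f"T_TF = {tf_kinetic_energy(rho, dV):.4f} Ha")
    ```

    Because such functionals need only the density on a grid (plus FFTs for nonlocal kernels), the cost scales near-linearly with system size, which is what makes million-atom runs on a single CPU plausible.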

    This article is categorized under:

    Software > Quantum Chemistry

    Electronic Structure Theory > Density Functional Theory

    Data Science > Computer Algorithms and Programming

     
  5. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.

    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; (3) a job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster.

    Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms, since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.

    The overall impact of all of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]; large-scale experiments are simply not feasible without them. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, an element of randomness enters the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train faster, but they are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment, because it gives us a sense of how the model is performing per experiment and whether the changes we make are effective.

    In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment; other variables can affect the outcome, such as training on GPUs, allowing multi-threading on CPUs, or using certain layers. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training: we save the data ordering from the last experiment and make sure the newer experiment follows the same order. If we allow the data to be shuffled, performance can be affected by how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We try to avoid 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though it technically increases the amount of computational noise. A minimal sketch of this seeding recipe appears below.

    We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation, we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
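
    A minimal sketch of the seeding recipe described above, written against the TensorFlow 2-style API (the poster's experiments used TF 1.x, where the graph-level call is tf.set_random_seed, and the exact determinism controls vary by TF/cuDNN version):

    ```python
    # Sketch of the RNG-seeding recipe discussed above (TF 2-style API;
    # in TF 1.x the graph-level call is tf.set_random_seed). Exact
    # determinism flags vary by TensorFlow/cuDNN version.
    import os
    import random

    import numpy as np
    import tensorflow as tf

    SEED = 1337

    # Seed every RNG that influences initialization, shuffling, and dropout.
    os.environ["PYTHONHASHSEED"] = str(SEED)
    random.seed(SEED)          # Python's built-in RNG
    np.random.seed(SEED)       # NumPy (data ordering, augmentation)
    tf.random.set_seed(SEED)   # TensorFlow weight init, dropout masks

    # Request deterministic kernels where available (TF >= 2.8 helper).
    tf.config.experimental.enable_op_determinism()

    # Keep computation in 32-bit floats, as discussed above.
    tf.keras.backend.set_floatx("float32")
    ```

    Even with all of this, runs on different GPU architectures can still disagree at the level of floating-point noise, which is why average performance across an experiment, rather than epoch-by-epoch values, is the more robust comparison.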