Title: Hamiltonian and Liouvillian learning in weakly-dissipative quantum many-body systems
Abstract We discuss Hamiltonian and Liouvillian learning for analog quantum simulation from non-equilibrium quench dynamics in the limit of weakly dissipative many-body systems. We present and compare various methods and strategies to learn the operator content of the Hamiltonian and the Lindblad operators of the Liouvillian. We compare different ansätze based on an experimentally accessible ‘learning error’, which we consider as a function of the number of runs of the experiment. Initially, the learning error decreases with the inverse square root of the number of runs, as the error in the reconstructed parameters is dominated by shot noise. Eventually, the learning error saturates at a constant value, allowing us to recognize missing ansatz terms. A central aspect of our approaches is to (re-)parametrize ansätze by introducing and varying the dependencies between parameters. This allows us to identify the relevant parameters of the system, thereby reducing the complexity of the learning task. Importantly, this (re-)parametrization relies solely on classical post-processing, which is compelling given the finite amount of data available from experiments. We illustrate and compare our methods with two experimentally relevant spin models.
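The abstract describes a learning error that first falls as the inverse square root of the number of runs N (shot-noise regime) and then saturates at a floor once missing ansatz terms dominate. The short sketch below visualizes that behavior; the error model and its amplitudes a and b are illustrative assumptions, not the estimator used in the paper.

```python
import numpy as np

def learning_error(n_runs, a=1.0, b=0.0):
    """Illustrative (assumed) model: a shot-noise term a/sqrt(N) plus a constant
    floor b contributed by ansatz terms missing from the reconstruction."""
    return np.sqrt(a**2 / n_runs + b**2)

for n in [1e2, 1e4, 1e6, 1e8]:
    complete = learning_error(n, b=0.0)    # keeps falling as 1/sqrt(N)
    truncated = learning_error(n, b=3e-3)  # flattens out near 3e-3
    print(f"N = {n:>9.0e}   complete ansatz: {complete:.2e}   missing terms: {truncated:.2e}")
```

The plateau in the second column is the experimentally accessible signature used to recognize that the ansatz is missing terms.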
Award ID(s): 2016244
PAR ID: 10589709
Author(s) / Creator(s):
Publisher / Repository: Purpose-Led Publishing
Date Published:
Journal Name: Quantum Science and Technology
Volume: 10
Issue: 1
ISSN: 2058-9565
Page Range / eLocation ID: 015065
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract Geometric morphometrics is used in the biological sciences to quantify morphological traits. However, the need for manual landmark placement hampers scalability: it is time-consuming, labor-intensive, and open to human error. The selected landmarks embody a specific hypothesis regarding the critical geometry relevant to the biological question, and any adjustment to this hypothesis necessitates acquiring a new set of landmarks or revising them significantly, which can be impractical for large datasets. There is a pressing need for more efficient and flexible methods for landmark placement that can adapt to different hypotheses without requiring extensive human effort. This study investigates the precision and accuracy of landmarks derived from functional correspondences obtained through the functional map framework of geometry processing. We utilize a deep functional map network to learn shape descriptors, which enable us to compute functional-map-based and point-to-point correspondences between specimens in our dataset. We automate the landmarking process by interrogating these maps to identify corresponding landmarks, using manually placed landmarks from the entire dataset as a reference. We apply our method to a dataset of rodent mandibles and compare its performance to that of MALPACA, a standard tool for automatic landmark placement. Our model is faster than MALPACA while maintaining a competitive level of accuracy. Although MALPACA typically shows the lowest RMSE, our models perform comparably well, particularly with smaller training datasets, indicating strong generalizability. Visual assessments confirm the precision of our automated landmark placements, with deviations consistently falling within an acceptable range of the MALPACA estimates. Our results underscore the potential of unsupervised learning models in anatomical landmark placement, presenting a practical and efficient alternative to traditional methods. Our approach saves significant time and effort and provides the flexibility to adapt to different hypotheses about critical geometrical features without the need for manual re-acquisition of landmarks. This advancement can significantly enhance the scalability and applicability of geometric morphometrics, making it more feasible for large datasets and diverse biological studies.
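As a rough sketch of the landmark-transfer step, the following assumes a precomputed point-to-point correspondence array (each target-mesh vertex mapped to a source-mesh vertex, as a functional-map pipeline would produce) and reference landmarks given as source vertex indices. The function name, array layout, and nearest-point rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def transfer_landmarks(p2p_t2s, source_vertices, target_vertices, source_landmark_ids):
    """Map reference landmarks (source-mesh vertex indices) onto a target mesh.

    p2p_t2s             : (n_target,) int array; target vertex i corresponds to
                          source vertex p2p_t2s[i] (output of a functional-map pipeline)
    source_vertices     : (n_source, 3) source-mesh coordinates
    target_vertices     : (n_target, 3) target-mesh coordinates
    source_landmark_ids : (k,) landmark vertex indices on the source mesh
    """
    mapped = source_vertices[p2p_t2s]  # source position assigned to each target vertex
    placed = np.empty((len(source_landmark_ids), 3))
    for i, lid in enumerate(source_landmark_ids):
        # choose the target vertex whose corresponding source point lies closest
        # to the reference landmark on the source mesh
        j = np.argmin(np.linalg.norm(mapped - source_vertices[lid], axis=1))
        placed[i] = target_vertices[j]
    return placed
```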
  2. Abstract We construct models for Jupiter’s interior that match the gravity data obtained by the Juno and Galileo spacecraft. To generate ensembles of models, we introduce a novel quadratic Monte Carlo technique, which is more efficient in confining fitness landscapes than the affine invariant method that relies on linear stretch moves. We compare how long it takes the ensembles of walkers in both methods to travel to the most relevant parameter region. Once there, we compare the autocorrelation time and error bars of the two methods. For a ring potential and the 2D Rosenbrock function, we find that our quadratic Monte Carlo technique is significantly more efficient. Furthermore, we modified the walk moves by adding a scaling factor. We provide the source code and examples so that this method can be applied elsewhere. Here we employ our method to generate five-layer models for Jupiter’s interior that include winds and a prominent dilute core, which allows us to match the planet’s even and odd gravity harmonics. We compare predictions from the different model ensembles and analyze how much an increase in the temperature at 1 bar and an ad hoc change to the equation of state affect the inferred amount of heavy elements in the atmosphere and in the planet overall.
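The quadratic move itself is not spelled out in the abstract. For context, here is a minimal sketch of the affine-invariant stretch move (the linear-stretch baseline the new method is compared against, familiar from samplers such as emcee), written as a simple serial sweep with assumed variable names.

```python
import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One serial sweep of the affine-invariant stretch move over an ensemble.

    walkers  : (K, d) array of current walker positions
    log_prob : callable mapping a length-d position to its log probability
    a        : stretch-scale parameter (a = 2 is the common default)
    """
    rng = rng or np.random.default_rng()
    K, d = walkers.shape
    new = walkers.copy()
    for k in range(K):
        j = (k + rng.integers(1, K)) % K                 # any walker other than k
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a    # z ~ g(z) ∝ 1/sqrt(z) on [1/a, a]
        proposal = new[j] + z * (new[k] - new[j])
        log_accept = (d - 1) * np.log(z) + log_prob(proposal) - log_prob(new[k])
        if np.log(rng.random()) < log_accept:
            new[k] = proposal
    return new

# Example: sample a 2D standard normal with 32 walkers
log_prob = lambda x: -0.5 * np.sum(x**2)
walkers = np.random.default_rng(0).normal(size=(32, 2))
for _ in range(1000):
    walkers = stretch_move(walkers, log_prob)
print("ensemble mean:", walkers.mean(axis=0))
```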
  3. As the number of pre-trained machine learning (ML) models grows exponentially, data reduction tools are not catching up. Existing data reduction techniques are not specifically designed for pre-trained model (PTM) dataset files, largely because the patterns and characteristics of these datasets, especially those relevant to data reduction and compressibility, are not well understood. This paper presents the first exhaustive analysis to date of the storage compressibility of PTM datasets. Our analysis spans different types of data reduction and compression techniques, from hash-based data deduplication and data similarity detection to dictionary-coding compression, and explores them at three levels of data granularity: model layers, model chunks, and model parameters. Our observations indicate that modern data reduction tools are not effective when handling PTM datasets, and that new compression methods which account for PTMs' data characteristics are needed for effective storage reduction. Motivated by these findings, we design Elf, a simple yet effective, error-bounded, lossy floating-point compression method. Elf transforms floating-point parameters in such a way that the common exponent field of the transformed parameters can be completely eliminated, saving storage space. We develop Elves, a compression framework that integrates Elf along with several other data reduction methods and uses the most effective method to compress PTMs that exhibit different patterns. Evaluation shows that Elves achieves an overall compression ratio of 1.52×, which is 1.31×, 1.32×, and 1.29× higher than a general-purpose compressor (zstd), an error-bounded lossy compressor (SZ3), and uniform model quantization, respectively, with negligible model accuracy loss.
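Elf's exact transform is not given in the abstract. As a loose illustration of the underlying idea, the sketch below applies an error-bounded transform that leaves only integers, so no per-value sign/exponent/mantissa layout remains to be stored; the scheme and names are assumptions, not Elf's algorithm.

```python
import numpy as np

def encode(params, error_bound):
    """Round parameters to integer multiples of 2*error_bound (assumed scheme).

    The result is a plain integer array: no floating-point exponent field remains,
    and the small integers compress well with a general-purpose coder.
    Reconstruction error is bounded by `error_bound` per parameter.
    """
    return np.round(params / (2.0 * error_bound)).astype(np.int32)

def decode(codes, error_bound):
    return codes.astype(np.float32) * np.float32(2.0 * error_bound)

weights = np.random.default_rng(0).normal(size=1_000_000).astype(np.float32)
restored = decode(encode(weights, 1e-3), 1e-3)
print("max reconstruction error:", np.max(np.abs(weights - restored)))  # ~1e-3
```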
  4. We build upon recent work on the use of machine-learning models to estimate Hamiltonian parameters from continuous weak measurement of qubits. We consider two settings for training our model: (1) supervised learning, where the weak-measurement training record can be labeled with known Hamiltonian parameters, and (2) unsupervised learning, where no labels are available. The first has the advantage of not requiring an explicit representation of the quantum state, and thus potentially scales very favorably to larger numbers of qubits. The second requires implementing a physical model that maps the Hamiltonian parameters to a measurement record; we combine an integrator of the physical model with a recurrent neural network that provides a model-free correction at every time step, accounting for small effects not captured by the physical model. We test our construction on a system of two qubits and demonstrate accurate prediction of multiple physical parameters in both the supervised and unsupervised contexts. We show that the model benefits from larger training sets, establishing that it is “learning,” and we demonstrate robustness to errors in the assumed physical model by achieving accurate parameter estimation in the presence of unanticipated single-particle relaxation.
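A minimal sketch of the unsupervised setting's hybrid integrator, assuming PyTorch and an externally supplied physics_step callable for the nominal model; the architecture and names are illustrative, not the authors' network.

```python
import torch
import torch.nn as nn

class HybridIntegrator(nn.Module):
    """Nominal physics step plus a learned per-step correction (illustrative sketch).

    `physics_step(state, theta, dt)` is an assumed callable performing one
    integration step of the nominal model for Hamiltonian parameters `theta`.
    `state` is a real-valued summary of the qubit state, e.g. Bloch-vector and
    correlation components, with shape (batch, state_dim).
    """

    def __init__(self, state_dim, hidden_dim=32):
        super().__init__()
        self.cell = nn.GRUCell(state_dim, hidden_dim)  # recurrent memory of unmodeled effects
        self.readout = nn.Linear(hidden_dim, state_dim)

    def forward(self, state, hidden, theta, physics_step, dt):
        predicted = physics_step(state, theta, dt)      # physical-model prediction
        hidden = self.cell(state, hidden)               # update recurrent memory
        corrected = predicted + self.readout(hidden)    # model-free correction
        return corrected, hidden
```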
  5. Abstract Quantum chemistry is a key application area for noisy intermediate-scale quantum (NISQ) devices, and therefore serves as an important benchmark for current and future quantum computer performance. Previous benchmarks in this field have focused on variational methods for computing ground and excited states of various molecules, including a benchmarking suite focused on the performance of computing ground states of alkali hydrides under an array of error mitigation methods. State-of-the-art methods to reach chemical accuracy in hybrid quantum-classical electronic structure calculations of alkali hydride molecules on NISQ devices from IBM are outlined here. It is demonstrated how to extend the reach of variational eigensolvers with symmetry-preserving ansätze. Next, it is outlined how to use quantum imaginary time evolution and Lanczos as complementary methods to variational techniques, highlighting the advantages of each approach. Finally, a new error mitigation method is demonstrated that uses systematic error cancellation via hidden inverse gate constructions, improving the performance of typical variational algorithms. These results show that electronic structure calculations have advanced rapidly, to routine chemical accuracy for simple molecules, from their inception on quantum computers a few short years ago, and they point to further rapid progress toward larger molecules as the power of NISQ devices grows.
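As a toy illustration of the variational principle behind these eigensolvers, the following minimizes the energy of an assumed single-qubit Hamiltonian over a one-parameter ansatz in plain NumPy/SciPy; it is a classical stand-in, not the hardware experiments or symmetry-preserving ansätze described above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy single-qubit "molecular" Hamiltonian H = 0.5*Z + 0.3*X (coefficients assumed)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = 0.5 * Z + 0.3 * X

def energy(theta):
    # Variational ansatz |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

result = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
exact = np.min(np.linalg.eigvalsh(H))
print(f"variational estimate: {result.fun:.6f}, exact ground energy: {exact:.6f}")
```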