This content will become publicly available on March 1, 2026

Title: Complexity-Constrained Quantum Thermodynamics
Quantum complexity measures the difficulty of realizing a quantum process, such as preparing a state or implementing a unitary. We present an approach to quantifying the thermodynamic resources required to implement a process if the process's complexity is restricted. We focus on the prototypical task of information erasure, or Landauer erasure, wherein an n-qubit memory is reset to the all-zero state. We show that the minimum thermodynamic work required to reset an arbitrary state in our model, via a complexity-constrained process, is quantified by the state's complexity entropy. The complexity entropy therefore quantifies a trade-off between the work cost and complexity cost of resetting a state. If the qubits have a nontrivial (but product) Hamiltonian, the optimal work cost is determined by the complexity relative entropy. The complexity entropy quantifies the amount of randomness a system appears to have to a computationally limited observer. Similarly, the complexity relative entropy quantifies such an observer's ability to distinguish two states. We prove elementary properties of the complexity (relative) entropy. In a random circuit (a simple model for quantum chaotic dynamics), the complexity entropy transitions from zero to its maximal value around the time corresponding to the observer's computational-power limit. Also, we identify information-theoretic applications of the complexity entropy. The complexity entropy quantifies the resources required for data compression if the compression algorithm must use a restricted number of gates. We further introduce a complexity conditional entropy, which arises naturally in a complexity-constrained variant of information-theoretic decoupling. Assuming that this entropy obeys a conjectured chain rule, we show that the entropy bounds the number of qubits that one can decouple from a reference system, as judged by a computationally bounded referee. Overall, our framework extends the resource-theoretic approach to thermodynamics to integrate a notion of time, as quantified by complexity.
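The trade-off described in the abstract can be placed against the textbook Landauer baseline. The following is a minimal sketch, not the paper's own formulation: the notation H_c^{(r)} for the complexity entropy of a state under a budget of r gates is ours, chosen for illustration.

```latex
% Unconstrained baseline (standard Landauer/Bennett result, trivial Hamiltonian):
% resetting an n-qubit memory in state \rho to |0\rangle^{\otimes n} requires work
W_{\mathrm{unc}} \;\gtrsim\; k_B T \ln 2 \cdot S(\rho),
\qquad S(\rho) \;=\; -\operatorname{Tr}\!\left[\rho \log_2 \rho\right].

% Per the abstract, if the erasure protocol is complexity-constrained
% (say, to at most r gates; notation assumed), the cost is instead set by
% the complexity entropy:
W_{\mathrm{constr}} \;\sim\; k_B T \ln 2 \cdot H_c^{(r)}(\rho),
\qquad H_c^{(r)}(\rho) \;\ge\; S(\rho).
```

The gap H_c^{(r)}(ρ) − S(ρ) is then the extra work a computationally bounded agent pays: a state that looks more random to a limited observer is correspondingly more expensive for that observer to erase.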
Published by the American Physical Society, 2025
Award ID(s):
2120757
PAR ID:
10592933
Author(s) / Creator(s):
; ; ; ; ;
Publisher / Repository:
PRX Quantum
Date Published:
Journal Name:
PRX Quantum
Volume:
6
Issue:
1
ISSN:
2691-3399
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We develop a constructive approach to generate quantum neural networks capable of representing the exact thermal states of all many-body qubit Hamiltonians. The Trotter expansion of the imaginary-time propagator is implemented through an exact block encoding by means of a unitary, restricted Boltzmann machine architecture. Marginalization over the hidden-layer neurons (auxiliary qubits) creates the nonunitary action on the visible layer. Then, we introduce a unitary deep Boltzmann machine architecture in which the hidden-layer qubits are allowed to couple laterally to other hidden qubits. We prove that this wave function is closed under the action of the imaginary-time propagator and, more generally, can represent the action of a universal set of quantum gate operations. We provide analytic expressions for the coefficients for both architectures, thus enabling exact network representations of thermal states without stochastic optimization of the network parameters. In the limit of large imaginary time, the wave function yields the ground state of the system. The number of qubits grows linearly with the number of interactions and total imaginary time for a fixed interaction order. Both networks can be readily implemented on quantum hardware via midcircuit measurements of auxiliary qubits. If only one auxiliary qubit is measured and reset, the circuit depth scales linearly with imaginary time and number of interactions, while the width is constant. Alternatively, one can employ a number of auxiliary qubits linearly proportional to the number of interactions, and the circuit depth grows linearly with imaginary time only. Every midcircuit measurement has a postselection success probability, and the overall success probability equals the product of the probabilities of the midcircuit measurements. Published by the American Physical Society, 2025
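The large-imaginary-time claim above is easy to verify numerically. The sketch below is a toy transverse-field Ising model, not the paper's Boltzmann-machine construction: it applies exp(-τH) by exact diagonalization and checks that the normalized result converges to the ground state.

```python
import numpy as np

# Toy 2-qubit transverse-field Ising Hamiltonian: H = -Z1 Z2 - g (X1 + X2).
# This is only an illustration of the large-tau limit, not the paper's network.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)
g = 0.5
H = -np.kron(Z, Z) - g * (np.kron(X, I) + np.kron(I, X))

# Spectral decomposition lets us apply the imaginary-time propagator exactly.
evals, evecs = np.linalg.eigh(H)  # eigenvalues sorted ascending

psi0 = np.ones(4) / 2.0  # generic initial state with nonzero ground-state overlap
tau = 20.0
psi_tau = evecs @ (np.exp(-tau * evals) * (evecs.T @ psi0))
psi_tau /= np.linalg.norm(psi_tau)  # normalization restores a valid state

ground = evecs[:, 0]  # ground state = eigenvector of the smallest eigenvalue
overlap = abs(ground @ psi_tau)
print(f"|<ground|psi(tau)>| = {overlap:.6f}")  # approaches 1 as tau grows
```

Because exp(-τH) exponentially suppresses every excited component relative to the ground state, the overlap approaches 1 at a rate set by the spectral gap, which is the mechanism behind the statement in the abstract.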
  2. High-coherence qubits, which can store and manipulate quantum states for long times with low error rates, are necessary building blocks for quantum computers. Here we propose a driven superconducting erasure qubit, the Floquet fluxonium molecule, which minimizes bit-flip rates through disjoint support of its qubit states and suppresses phase flips by a novel second-order insensitivity to flux-noise dephasing. We estimate the bit-flip, phase-flip, and erasure rates through numerical simulations, with predicted coherence times of approximately 50 ms in the computational subspace and erasure lifetimes of about 500 μs. We also present a protocol for performing high-fidelity single-qubit rotation gates via additional flux modulation, on timescales of roughly 500 ns, and propose a scheme for erasure detection and logical readout. Our results demonstrate the utility of drives for building new qubits that can outperform their static counterparts. Published by the American Physical Society, 2024
  3. Recent experimental advances have made it possible to implement logical multiqubit transversal gates on surface codes in a multitude of platforms. A transversal controlled-NOT (tCNOT) gate on two surface codes introduces correlated errors across the code blocks and thus requires modified decoding compared to established methods of decoding surface-code quantum memory (SCQM) or lattice-surgery operations. In this work, we examine and benchmark the performance of three different decoding strategies for the tCNOT for scalable fault-tolerant quantum computation. In particular, we present a low-complexity decoder based on minimum-weight perfect matching (MWPM) that achieves the same threshold as the SCQM MWPM decoder. We extend our analysis with a study of tailored decoding of a transversal-teleportation circuit, along with a comparison between the performance of lattice-surgery and transversal operations under Pauli- and erasure-noise models. Our investigation builds toward systematic estimation of the cost of implementing large-scale quantum algorithms based on transversal gates in the surface code. Published by the American Physical Society, 2025
  4. Establishing limits of entanglement in open quantum systems is a problem of fundamental interest, with strong implications for applications in quantum information science. Here, we study the limits of entanglement stabilization between remote qubits. We theoretically investigate the loss resilience of driven-dissipative entanglement between remote qubits coupled to a chiral waveguide. We find that by coupling a pair of storage qubits to the two driven qubits, the steady state can be tailored such that the storage qubits show a degree of entanglement that is higher than what can be achieved with only two driven qubits coupled to the waveguide. By reducing the degree of entanglement of the driven qubits, we show that the entanglement between the storage qubits becomes more resilient to waveguide loss. Our analytical and numerical results offer insights into how waveguide loss limits the degree of entanglement in this driven-dissipative system, and they offer important guidance for remote entanglement stabilization in the laboratory, for example using superconducting circuits. Published by the American Physical Society, 2024
  5. Geometric locality is an important theoretical and practical factor for quantum low-density parity-check (qLDPC) codes that affects code performance and ease of physical realization. For device architectures restricted to two-dimensional (2D) local gates, naively implementing the high-rate codes suitable for low-overhead fault-tolerant quantum computing incurs prohibitive overhead. In this work, we present an error-correction protocol built on a bilayer architecture that aims to reduce operational overheads when restricted to 2D local gates by measuring some generators less frequently than others. We investigate the family of bivariate-bicycle qLDPC codes and show that they are well suited for a parallel syndrome-measurement scheme using fast routing with local operations and classical communication (LOCC). Through circuit-level simulations, we find that in some parameter regimes, bivariate-bicycle codes implemented with this protocol have logical error rates comparable to the surface code while using fewer physical qubits. Published by the American Physical Society, 2025