Title: Recoverability for Holevo's Just-as-Good Fidelity
Holevo's just-as-good fidelity is a similarity measure for quantum states that has found several applications. One of its critical properties is that it obeys a data processing inequality: the measure does not decrease under the action of a quantum channel on the underlying states. In this paper, I prove a refinement of this data processing inequality that includes an additional term related to recoverability. That is, if the increase in the measure is small after the action of a partial trace, then one of the states can be nearly recovered by the Petz recovery channel, while the other state is perfectly recovered by the same channel. The refinement is given in terms of the trace distance of one of the states to its recovered version and also depends on the minimum eigenvalue of the other state. As such, the refinement is universal, in the sense that the recovery channel depends only on one of the states, and it is explicit, given by the Petz recovery channel. The appendix contains a generalization of the aforementioned result to arbitrary quantum channels.
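As an illustration of the data processing inequality at the heart of this result (a minimal numerical sketch, not code from the paper; the helper names are my own), Holevo's just-as-good fidelity F_H(ρ,σ) = (Tr[√ρ √σ])² can be checked to be non-decreasing under a partial trace:

```python
import numpy as np

def psd_sqrt(A):
    # square root of a positive semidefinite Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def holevo_fidelity(rho, sigma):
    # F_H(rho, sigma) = (Tr[sqrt(rho) sqrt(sigma)])^2
    return np.real(np.trace(psd_sqrt(rho) @ psd_sqrt(sigma))) ** 2

def random_state(d, rng):
    # random full-rank density matrix from a Ginibre matrix
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def partial_trace_B(rho, dA, dB):
    # trace out the second tensor factor of a bipartite state
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

rng = np.random.default_rng(7)
rho, sigma = random_state(4, rng), random_state(4, rng)
f_in = holevo_fidelity(rho, sigma)
f_out = holevo_fidelity(partial_trace_B(rho, 2, 2), partial_trace_B(sigma, 2, 2))
assert f_out >= f_in - 1e-9  # data processing: the measure does not decrease
```

The refinement proved in the paper bounds how well the Petz recovery channel undoes the partial trace whenever the increase f_out − f_in is small.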
Award ID(s):
1714215
PAR ID:
10104449
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2018 International Symposium on Information Theory
Page Range / eLocation ID:
2331 to 2335
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. It is well-known that any quantum channel E satisfies the data processing inequality (DPI) with respect to various divergences, e.g., the quantum χ²_κ divergences and the quantum relative entropy. More specifically, the data processing inequality states that the divergence between two arbitrary quantum states ρ and σ does not increase under the action of any quantum channel E. For a fixed channel E and a state σ, the divergence between the output states E(ρ) and E(σ) might be strictly smaller than the divergence between the input states ρ and σ, which is characterized by the strong data processing inequality (SDPI). Over all input states ρ, the largest rate of contraction is known as the SDPI constant. An important and widely studied property of classical channels is that SDPI constants tensorize. In this paper, we extend the tensorization property to the quantum regime: we establish the tensorization of SDPIs for the quantum χ²_{κ_{1/2}} divergence for arbitrary quantum channels, and also for a family of χ²_κ divergences (with κ ≥ κ_{1/2}) for arbitrary quantum-classical channels.
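For intuition, the classical analogue of this tensorization can be checked numerically (a sketch under my own notation, not from the paper; the quantum case replaces distributions with density matrices). The χ² SDPI constant of a classical channel W at reference input q equals the squared second singular value of M = diag(√q) · W · diag(1/√(qW)):

```python
import numpy as np

def chi2_sdpi_constant(W, q):
    # eta = s_2(M)^2 with M = diag(sqrt(q)) W diag(1/sqrt(qW));
    # the top singular value of M is always 1 (vector sqrt(q)).
    qW = q @ W
    M = np.sqrt(q)[:, None] * W / np.sqrt(qW)[None, :]
    s = np.linalg.svd(M, compute_uv=False)  # sorted in descending order
    return s[1] ** 2

W = np.array([[0.8, 0.2],
              [0.3, 0.7]])   # row-stochastic channel
q = np.array([0.4, 0.6])     # reference input distribution
eta1 = chi2_sdpi_constant(W, q)
eta2 = chi2_sdpi_constant(np.kron(W, W), np.kron(q, q))
assert abs(eta1 - eta2) < 1e-9  # tensorization: eta(W x W) = eta(W)
```

The Kronecker structure makes the classical case transparent: the singular values of M⊗M are products of those of M, so the second-largest is unchanged. The paper's contribution is establishing the analogous statement in the quantum regime, where no such simple product structure is available.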
  2. The quantum relative entropy is a measure of the distinguishability of two quantum states, and it is a unifying concept in quantum information theory: many information measures, such as entropy, conditional entropy, mutual information, and entanglement measures, can be realized from it. As such, there has been broad interest in generalizing the notion to further understand its most basic properties, one of which is the data processing inequality. The quantum f-divergence of Petz is one generalization of the quantum relative entropy, and it also leads to other relative entropies, such as the Petz–Rényi relative entropies. In this contribution, I introduce the optimized quantum f-divergence as a related generalization of the quantum relative entropy. I prove that it satisfies the data processing inequality, and the method of proof relies upon the operator Jensen inequality, similar to Petz's original approach. Interestingly, the sandwiched Rényi relative entropies are particular examples of the optimized f-divergence. Thus, one benefit of this approach is that there is now a single, unified method for establishing the data processing inequality for both the Petz–Rényi and sandwiched Rényi relative entropies, for the full range of parameters for which it is known to hold.
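As a concrete instance (a numerical sketch with my own helper names, not the paper's proof machinery), the sandwiched Rényi relative entropy D̃_α(ρ‖σ) = (1/(α−1)) log Tr[(σ^{(1−α)/2α} ρ σ^{(1−α)/2α})^α] can be checked to obey data processing under a partial trace:

```python
import numpy as np

def mpow(A, p):
    # real matrix power of a positive definite Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 1e-15, None) ** p) @ V.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    # D~_a(rho||sigma) = log Tr[(sigma^{(1-a)/2a} rho sigma^{(1-a)/2a})^a] / (a - 1)
    s = mpow(sigma, (1.0 - alpha) / (2.0 * alpha))
    return np.real(np.log(np.trace(mpow(s @ rho @ s, alpha)))) / (alpha - 1.0)

def random_state(d, rng):
    # random full-rank density matrix from a Ginibre matrix
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def partial_trace_B(rho, dA, dB):
    # trace out the second tensor factor of a bipartite state
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

rng = np.random.default_rng(1)
rho, sigma = random_state(4, rng), random_state(4, rng)
d_in = sandwiched_renyi(rho, sigma, 2.0)
d_out = sandwiched_renyi(partial_trace_B(rho, 2, 2),
                         partial_trace_B(sigma, 2, 2), 2.0)
assert d_out <= d_in + 1e-9  # data processing inequality for alpha >= 1/2
```

The point of the optimized f-divergence framework is that one operator-Jensen argument yields this inequality for the full known parameter range, rather than separate proofs for the Petz–Rényi and sandwiched families.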
  3. Trace inequalities are general techniques with many applications in quantum information theory, often replacing the classical functional calculus in noncommutative settings. The physics of quantum field theory and holography, however, motivates entropy inequalities in type III von Neumann algebras, which lack a semifinite trace. The Haagerup and Kosaki L_p spaces enable re-expressing trace inequalities in non-tracial von Neumann algebras. In particular, we show this for the generalized Araki–Lieb–Thirring and Golden–Thompson inequalities from the work of Sutter et al. [Commun. Math. Phys. 352(1), 37 (2017)]. Then, using the Haagerup approximation method, we prove a general von Neumann algebra version of the universal recovery map correction to the data processing inequality for relative entropy. We also show subharmonicity of a logarithmic p-fidelity of recovery. Furthermore, we prove that non-decrease of the relative entropy is equivalent to the existence of an L_1-isometry implementing the channel on both input states.
  4. We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence on m and n and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m}+poly(r))*poly(log m, log n, B, epsilon^{-1}), with B an upper bound on the trace-norm of all input matrices. In particular, the complexity depends only polylogarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: Given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find in time sqrt{m}*poly(log m, log n, r, epsilon^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on the dimension, which is based on ideas developed in quantum principal component analysis.
We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest. 
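The matrix multiplicative weights core that these solvers "quantize" is easy to state classically (a hedged sketch with illustrative names, not the quantum algorithm itself): each round maintains a Gibbs-like density matrix built from the accumulated loss matrices:

```python
import numpy as np

def gibbs_state(H):
    # rho = exp(-H) / Tr[exp(-H)] for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    e = np.exp(-(w - w.min()))  # shift eigenvalues for numerical stability
    return (V * e) @ V.conj().T / e.sum()

def mmw_states(loss_matrices, eta):
    # matrix multiplicative weights: rho_t is the Gibbs state of
    # eta times the sum of all losses observed so far
    H = np.zeros_like(loss_matrices[0])
    states = []
    for M in loss_matrices:
        H = H + eta * M
        states.append(gibbs_state(H))
    return states

rng = np.random.default_rng(3)
losses = [(A + A.conj().T) / 2 for A in rng.standard_normal((5, 3, 3))]
states = mmw_states(losses, eta=0.1)
assert all(abs(np.trace(r) - 1.0) < 1e-8 for r in states)  # valid density matrices
```

The quantum speed-up in the paper comes from preparing these Gibbs states as actual quantum states (via the low-rank Gibbs sampler) instead of diagonalizing the Hamiltonian explicitly as above.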
  5. Quantum entanglement is a fundamental property of quantum mechanics. Recently, studies have explored entanglement in the tt̄ system at the Large Hadron Collider (LHC) when both the top quark and anti-top quark decay leptonically. Entanglement is detected via correlations between the polarizations of the top and anti-top, and these polarizations are measured through the angles of the decay products of the top and anti-top. In this work, we propose searching for evidence of quantum entanglement in the semi-leptonic decay channel, where the final state includes one lepton, one neutrino, two b-flavor-tagged jets, and two light jets from the W decay. We find that this channel is both easier to reconstruct and has a larger effective quantity of data than the fully leptonic channel. As a result, the semi-leptonic channel is 60% more sensitive to quantum entanglement and a factor of 3 more sensitive to Bell inequality violation, compared to the leptonic channel. In 139 fb⁻¹ (3 ab⁻¹) of data at the LHC (HL-LHC), it should be feasible to measure entanglement at a precision of ≲ 3% (0.7%). Detecting Bell inequality violation, on the other hand, is more challenging. With 300 fb⁻¹ (3 ab⁻¹) of integrated luminosity at the LHC Run-3 (HL-LHC), we expect a sensitivity of 1.3σ (4.1σ). In our study, we utilize a realistic parametric fitting procedure to optimally recover the true angular distributions from detector effects. Compared to unfolding, this procedure yields more stable results.