We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text {noisy}}$$
We address a system of equations modeling a compressible fluid interacting with an elastic body in dimension three. We prove the local existence and uniqueness of a strong solution when the initial velocity belongs to the space
- NSF-PAR ID: 10495611
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Journal of Mathematical Fluid Mechanics
- Volume: 26
- Issue: 2
- ISSN: 1422-6928
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract and the corresponding noiseless output distribution $$p_{\text {ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $$F$$ that measures this correlation behaves as $$F=\exp (-2s\epsilon \pm O(s\epsilon ^2))$$, where $$\epsilon $$ is the probability of error per circuit location and $$s$$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text {noisy}}$$ and the uniform distribution $$p_{\text {unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon \sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon \sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega (n\log (n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon ^{-1} \ge {\tilde{\Omega }}(n)$$, which is needed to ensure errors are scrambled faster than $$F$$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low.
Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
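The white-noise approximation above amounts to simple arithmetic on distributions, which a short sketch can make concrete. This is an illustrative toy (not the paper's method); the qubit count, gate count, and error rate below are hypothetical, and the "ideal" distribution is a stand-in generated from exponential random variates.

```python
import math
import random

# Illustrative sketch (not from the paper): the white-noise approximation
# p_noisy ≈ F * p_ideal + (1 - F) * p_unif, with leading-order fidelity
# F = exp(-2 * s * eps). All parameters below are hypothetical.

def fidelity(eps: float, s: int) -> float:
    """Leading-order linear cross-entropy benchmark F = exp(-2*s*eps)."""
    return math.exp(-2.0 * s * eps)

def white_noise_mix(p_ideal, F):
    """Mix an ideal output distribution with the uniform distribution."""
    n = len(p_ideal)
    return [F * p + (1.0 - F) / n for p in p_ideal]

# Toy "ideal" distribution over 2^10 bitstrings (Porter-Thomas-like shape).
random.seed(0)
raw = [random.expovariate(1.0) for _ in range(2 ** 10)]
total = sum(raw)
p_ideal = [x / total for x in raw]

# Hypothetical circuit: s = 200 two-qubit gates, per-location error rate
# eps = 1e-3, so the expected number of errors is s * eps = 0.2.
F = fidelity(1e-3, 200)
p_noisy = white_noise_mix(p_ideal, F)

assert abs(sum(p_noisy) - 1.0) < 1e-9  # still a valid distribution
```

As $$s\epsilon $$ grows, F shrinks exponentially and the mixture flattens toward uniform, which is exactly why signal extraction requires the quadratically weaker condition $$\epsilon \sqrt{s} \ll 1$$ rather than $$\epsilon s\ll 1$$.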
Abstract This paper presents a search for dark matter, $$\chi $$, using events with a single top quark and an energetic $$W$$ boson. The analysis is based on proton–proton collision data collected with the ATLAS experiment at $$\sqrt{s}=13$$ TeV during LHC Run 2 (2015–2018), corresponding to an integrated luminosity of 139 fb$$^{-1}$$. The search considers final states with zero or one charged lepton (electron or muon), at least one $$b$$-jet and large missing transverse momentum. In addition, a result from a previous search considering two-charged-lepton final states is included in the interpretation of the results. The data are found to be in good agreement with the Standard Model predictions and the results are interpreted in terms of 95% confidence-level exclusion limits in the context of a class of dark matter models involving an extended two-Higgs-doublet sector together with a pseudoscalar mediator particle. The search is particularly sensitive to on-shell production of the charged Higgs boson state, $$H^{\pm }$$, arising from the two-Higgs-doublet mixing, and its semi-invisible decays via the mediator particle, $$a$$: $$H^{\pm } \rightarrow W^\pm a (\rightarrow \chi \chi )$$. Signal models with $$H^{\pm }$$ masses up to 1.5 TeV and $$a$$ masses up to 350 GeV are excluded assuming a $$\tan \beta $$ value of 1. For $$a$$ masses of 150 (250) GeV, $$\tan \beta $$ values up to 2 are excluded for $$H^{\pm }$$ masses between 200 (400) GeV and 1.5 TeV. Signals with $$\tan \beta $$ values between 20 and 30 are excluded for $$H^{\pm }$$ masses between 500 and 800 GeV.
Abstract Given a suitable solution $$V(t,x)$$ to the Korteweg–de Vries equation on the real line, we prove global well-posedness for initial data $$u(0,x) \in V(0,x) + H^{-1}(\mathbb {R})$$. Our conditions on $$V$$ do include regularity but do not impose any assumptions on spatial asymptotics. We show that periodic profiles $$V(0,x)\in H^5(\mathbb {R}/\mathbb {Z})$$ satisfy our hypotheses. In particular, we can treat localized perturbations of the much-studied periodic traveling wave solutions (cnoidal waves) of KdV. In the companion paper Laurens (Nonlinearity. 35(1):343–387, 2022. https://doi.org/10.1088/1361-6544/ac37f5) we show that smooth step-like initial data also satisfy our hypotheses. We employ the method of commuting flows introduced in Killip and Vişan (Ann. Math. (2) 190(1):249–305, 2019. https://doi.org/10.4007/annals.2019.190.1.4), where $$V\equiv 0$$. In that setting, it is known that $$H^{-1}(\mathbb {R})$$ is sharp in the class of $$H^s(\mathbb {R})$$ spaces.
Abstract In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
- Certifying that a list of $$n$$ integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time).
- Counting the number of $$k$$-cliques with total edge weight equal to zero in an $$n$$-node graph can be done in Merlin–Arthur time $${\tilde{O}}(n^{\lceil k/2\rceil })$$ (where $$k\ge 3$$). For odd $$k$$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $$m$$-edge graph can be done in Merlin–Arthur time $${\tilde{O}}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count $$k$$-cliques in unweighted graphs, and had worse running times for small $$k$$.
- Computing the All-Pairs Shortest Distances matrix for an $$n$$-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega (n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm.
- Certifying that an $$n$$-variable $$k$$-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $$\#$$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $$k$$-UNSAT running in $$2^{n/2}/n^{\omega (1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
- Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time.
- Counting the number of solutions to a Subset Sum instance on $$n$$ integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017] which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
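To make the Merlin–Arthur pattern concrete, here is a classic textbook example that is not one of the protocols above: Freivalds' check. Merlin supplies a large claimed answer (a matrix product), and Arthur verifies it with randomness faster than recomputing it, with one-sided error.

```python
import random

# Toy illustration (not one of the paper's protocols): Freivalds' check.
# Merlin sends a claimed product C = A*B; Arthur verifies in O(n^2) per
# trial instead of recomputing the product in O(n^3).

def freivalds_verify(A, B, C, trials=20):
    """Accepts a correct C always; rejects a wrong C with prob >= 1 - 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) with C*r using only matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # a random vector r exposed the error
    return True

random.seed(1)  # fixed seed for reproducibility
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert freivalds_verify(A, B, [[19, 22], [43, 50]])       # correct product
assert not freivalds_verify(A, B, [[19, 22], [43, 51]])   # off by one entry
```

The one-sided error (valid proofs always accepted, invalid ones rejected with high probability) mirrors the acceptance criterion of the Merlin–Arthur systems described above, though the protocols in the paper certify much harder claims.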
Abstract We consider the problem of covering multiple submodular constraints. Given a finite ground set $$N$$, a weight function $$w: N \rightarrow \mathbb {R}_+$$, $$r$$ monotone submodular functions $$f_1,f_2,\ldots ,f_r$$ over $$N$$ and requirements $$k_1,k_2,\ldots ,k_r$$, the goal is to find a minimum weight subset $$S \subseteq N$$ such that $$f_i(S) \ge k_i$$ for $$1 \le i \le r$$. We refer to this problem as Multi-Submod-Cover and it was recently considered by Har-Peled and Jones (Few cuts meet many point sets, CoRR arXiv:1808.03260, 2018) who were motivated by an application in geometry. Even with $$r=1$$, Multi-Submod-Cover generalizes the well-known Submodular Set Cover problem (Submod-SC), and it can also be easily reduced to Submod-SC. A simple greedy algorithm gives an $$O(\log (kr))$$ approximation where $$k = \sum _i k_i$$, and this ratio cannot be improved in the general case. In this paper, motivated by several concrete applications, we consider two ways to improve upon the approximation given by the greedy algorithm. First, we give a bicriteria approximation algorithm for Multi-Submod-Cover that covers each constraint to within a factor of $$(1-1/e-\varepsilon )$$ while incurring an approximation of $$O(\frac{1}{\epsilon }\log r)$$ in the cost. Second, we consider the special case when each $$f_i$$ is obtained from a truncated coverage function and obtain an algorithm that generalizes previous work on partial set cover (Partial-SC), covering integer programs (CIPs) and multiple vertex cover constraints (Bera et al., Theoret Comput Sci 555:2–8, 2014). Both these algorithms are based on mathematical programming relaxations that avoid the limitations of the greedy algorithm. We demonstrate the implications of our algorithms and related ideas to several applications ranging from geometric covering problems to clustering with outliers. Our work highlights the utility of the high-level model and the lens of submodularity in addressing this class of covering problems.
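The greedy baseline that the abstract improves upon is easy to sketch for the special case $$r=1$$ with a coverage function $$f(S)$$ counting the points covered by the chosen sets. This is a simplified illustration, not the paper's algorithm; the sets and weights below are hypothetical.

```python
# Illustrative sketch (assumed, simplified): the greedy rule behind the
# O(log k) guarantee for Submodular Set Cover, specialized to a coverage
# function f(S) = |union of chosen sets| with requirement k.
# The sets and weights below are hypothetical examples.

def greedy_submod_cover(universe_sets, weights, k):
    """Greedily pick the set maximizing marginal coverage per unit weight
    until at least k points are covered. Returns (chosen indices, covered)."""
    covered = set()
    chosen = []
    remaining = set(range(len(universe_sets)))
    while len(covered) < k and remaining:
        # Greedy rule: maximize marginal gain divided by weight.
        best = max(
            remaining,
            key=lambda i: len(universe_sets[i] - covered) / weights[i],
        )
        if not universe_sets[best] - covered:
            break  # no set adds coverage; requirement unreachable
        chosen.append(best)
        covered |= universe_sets[best]
        remaining.remove(best)
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
weights = [1.0, 1.0, 2.0, 1.0]
chosen, covered = greedy_submod_cover(sets, weights, k=5)
assert len(covered) >= 5
```

For multiple constraints ($$r>1$$) this greedy ratio degrades to $$O(\log (kr))$$, which is what motivates the LP-based bicriteria algorithm described in the abstract.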