For a smooth projective variety
We consider a conjecture that identifies two types of base point free divisors on
- NSF-PAR ID:
- 10369886
- Publisher / Repository:
- Springer Science + Business Media
- Date Published:
- Journal Name:
- Transformation Groups
- ISSN:
- 1083-4362
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Abstract: For a smooth projective variety $$X$$ over an algebraic number field $$k$$, a conjecture of Bloch and Beilinson predicts that the kernel of the Albanese map of $$X$$ is a torsion group. In this article we consider a product $$X=C_1\times \cdots \times C_d$$ of smooth projective curves and show that if the conjecture is true for any subproduct of two curves, then it is true for $$X$$. For a product $$X=C_1\times C_2$$ of two curves over $$\mathbb {Q}$$ with positive genus, we construct many nontrivial examples that satisfy the weaker property that the image of the natural map $$J_1(\mathbb {Q})\otimes J_2(\mathbb {Q})\xrightarrow {\varepsilon }{{\,\textrm{CH}\,}}_0(C_1\times C_2)$$ is finite, where $$J_i$$ is the Jacobian variety of $$C_i$$. Our constructions include many new examples of non-isogenous pairs of elliptic curves $$E_1, E_2$$ with positive rank, including the first known examples of rank greater than 1. Combining these constructions with our previous result, we obtain infinitely many nontrivial products $$X=C_1\times \cdots \times C_d$$ for which the analogous map $$\varepsilon $$ has finite image.
Abstract: We continue the program of proving circuit lower bounds via circuit satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in $\mathsf {Quasi}\text {-}\mathsf {NP} = \mathsf {NTIME}[n^{(\log n)^{O(1)}}]$ and other complexity classes do not have small circuits (in the worst case and/or on average) from various circuit classes $\mathcal {C}$, by showing that $\mathcal {C}$ admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount. In this paper, we present a new strong lower bound consequence of having a non-trivial #SAT algorithm for a circuit class $\mathcal {C}$. Say that a symmetric Boolean function $f(x_1,\dots ,x_n)$ is sparse if it outputs 1 on $O(1)$ values of ${\sum }_{i} x_{i}$. We show that for every sparse $f$, and for all “typical” $\mathcal {C}$, faster #SAT algorithms for $\mathcal {C}$ circuits imply lower bounds against the circuit class $f \circ \mathcal {C}$, which may be stronger than $\mathcal {C}$ itself. In particular:
- #SAT algorithms for $n^k$-size $\mathcal {C}$-circuits running in $2^n/n^k$ time (for all $k$) imply $\mathsf {NEXP}$ does not have $(f \circ \mathcal {C})$-circuits of polynomial size.
- #SAT algorithms for $2^{n^{\varepsilon }}$-size $\mathcal {C}$-circuits running in $2^{n-n^{\varepsilon }}$ time (for some $\varepsilon > 0$) imply $\mathsf {Quasi}\text {-}\mathsf {NP}$ does not have $(f \circ \mathcal {C})$-circuits of polynomial size.
Applying #SAT algorithms from the literature, one immediate corollary of our results is that $\mathsf {Quasi}\text {-}\mathsf {NP}$ does not have $\mathsf {EMAJ} \circ \mathsf {ACC}^0 \circ \mathsf {THR}$ circuits of polynomial size, where $\mathsf {EMAJ}$ is the “exact majority” function, improving previous lower bounds against $\mathsf {ACC}^0$ [Williams JACM’14] and $\mathsf {ACC}^0 \circ \mathsf {THR}$ [Williams STOC’14], [Murray–Williams STOC’18]. This is the first nontrivial lower bound against such a circuit class.
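The notion of a sparse symmetric function in the abstract above is easy to make concrete. The sketch below (illustrative only, not from the paper) checks that EMAJ, the “exact majority” function, accepts on exactly one Hamming weight, so it outputs 1 on $O(1)$ values of the bit sum; the function name and the choice $n=4$ are this example's own.

```python
# Toy illustration of a "sparse" symmetric Boolean function: f depends only on
# the Hamming weight sum(x) and outputs 1 on O(1) of the n+1 possible weights.
# EMAJ ("exact majority") is one such function.
from itertools import product

def emaj(x):
    """Exact majority: 1 iff exactly half (rounded up) of the bits are 1."""
    n = len(x)
    return int(sum(x) == (n + 1) // 2)

n = 4
# Collect the Hamming weights on which EMAJ outputs 1 -- a single value,
# so EMAJ is sparse in the abstract's sense.
accepting_weights = {sum(x) for x in product([0, 1], repeat=n) if emaj(x)}
print(accepting_weights)  # {2}: only weight ceil(4/2) = 2 accepts
```

By contrast, ordinary majority accepts on about half of the $n+1$ weights, so it is not sparse; this is why EMAJ-composed classes are the ones the theorem reaches.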
Abstract: We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text {noisy}}$$ and the corresponding noiseless output distribution $$p_{\text {ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $$F$$ that measures this correlation behaves as $$F=\text {exp}(-2s\epsilon \pm O(s\epsilon ^2))$$, where $$\epsilon $$ is the probability of error per circuit location and $$s$$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text {noisy}}$$ and the uniform distribution $$p_{\text {unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon \sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon \sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega (n\log (n))$$, which corresponds to only logarithmic depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon ^{-1} \ge {\tilde{\Omega }}(n)$$, which is needed to ensure errors are scrambled faster than $$F$$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
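The white-noise approximation above is easy to probe numerically. The toy sketch below is a made-up illustration, not the paper's model: it uses a synthetic exponentially distributed "ideal" distribution, mixes it with the uniform distribution at fidelity $$F$$ as in $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+(1-F)p_{\text {unif}}$$, and checks that the linear cross-entropy benchmark recovers $$F$$.

```python
# Numerical sketch of the white-noise model: mix a synthetic "ideal" output
# distribution with the uniform one and recover F via the linear XEB.
import numpy as np

rng = np.random.default_rng(0)
n_outcomes = 2 ** 10                         # e.g. 10 qubits
p_ideal = rng.exponential(size=n_outcomes)   # Porter-Thomas-like weights (toy)
p_ideal /= p_ideal.sum()
p_unif = np.full(n_outcomes, 1.0 / n_outcomes)

F = 0.3                                      # assumed surviving fidelity
p_noisy = F * p_ideal + (1 - F) * p_unif     # white-noise mixture

def linear_xeb(q):
    # Linear XEB of distribution q against p_ideal, normalized so that
    # linear_xeb(p_unif) = 0 and mixing with p_unif rescales the score.
    return n_outcomes * np.dot(q, p_ideal) - 1

# Under the mixture model the XEB scales linearly in F, so the ratio
# of noisy to ideal scores returns the fidelity.
F_est = linear_xeb(p_noisy) / linear_xeb(p_ideal)
print(F_est)  # ≈ 0.3
```

The recovery is exact here because mixing with the uniform distribution multiplies the linear XEB by exactly $$F$$; in an experiment $$F$$ is instead estimated from finitely many samples and carries statistical error.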
Abstract: The free multiplicative Brownian motion $$b_{t}$$ is the large-$$N$$ limit of the Brownian motion on $$\mathsf {GL}(N;\mathbb {C})$$, in the sense of $$*$$-distributions. The natural candidate for the large-$$N$$ limit of the empirical distribution of eigenvalues is thus the Brown measure of $$b_{t}$$. In previous work, the second and third authors showed that this Brown measure is supported in the closure of a region $$\Sigma _{t}$$ that appeared in the work of Biane. In the present paper, we compute the Brown measure completely. It has a continuous density $$W_{t}$$ on $$\overline{\Sigma }_{t},$$ which is strictly positive and real analytic on $$\Sigma _{t}$$. This density has a simple form in polar coordinates: $$\begin{aligned} W_{t}(r,\theta )=\frac{1}{r^{2}}w_{t}(\theta ), \end{aligned}$$ where $$w_{t}$$ is an analytic function determined by the geometry of the region $$\Sigma _{t}$$. We show also that the spectral measure of free unitary Brownian motion $$u_{t}$$ is a “shadow” of the Brown measure of $$b_{t}$$, precisely mirroring the relationship between the circular and semicircular laws. We develop several new methods, based on stochastic differential equations and PDE, to prove these results.
Abstract: We present the first unquenched lattice-QCD calculation of the form factors for the decay $$B\rightarrow D^*\ell \nu $$ at nonzero recoil. Our analysis includes 15 MILC ensembles with $$N_f=2+1$$ flavors of asqtad sea quarks, with a strange quark mass close to its physical mass. The lattice spacings range from $$a\approx 0.15$$ fm down to 0.045 fm, while the ratio between the light- and the strange-quark masses ranges from 0.05 to 0.4. The valence $$b$$ and $$c$$ quarks are treated using the Wilson-clover action with the Fermilab interpretation, whereas the light sector employs asqtad staggered fermions. We extrapolate our results to the physical point in the continuum limit using rooted staggered heavy-light meson chiral perturbation theory. Then we apply a model-independent parametrization to extend the form factors to the full kinematic range. With this parametrization we perform a joint lattice-QCD/experiment fit using several experimental datasets to determine the CKM matrix element $$|V_{cb}|$$. We obtain $$\left| V_{cb}\right| = (38.40 \pm 0.68_{\text {th}} \pm 0.34_{\text {exp}} \pm 0.18_{\text {EM}})\times 10^{-3}$$. The first error is theoretical, the second comes from experiment, and the last one includes electromagnetic and electroweak uncertainties, with an overall $$\chi ^2\text {/dof} = 126/84$$, which illustrates the tensions between the experimental data sets, and between theory and experiment. This result is in agreement with previous exclusive determinations, but the tension with the inclusive determination remains. Finally, we integrate the differential decay rate obtained solely from lattice data to predict $$R(D^*) = 0.265 \pm 0.013$$, which confirms the current tension between theory and experiment.