
Title: Sustainability of soil organic carbon in consolidated gully land in China’s Loess Plateau

Massive gully land consolidation projects, launched in China’s Loess Plateau, aim to restore a total of 2667 $$\mathrm{km}^2$$ of agricultural land by consolidating 2026 highly eroded gullies. This effort represents a social engineering project in which the economic development and livelihood of farming families are closely tied to the ability of these emergent landscapes to provide agricultural services. Whether these ‘time zero’ landscapes have the resilience to sustain soil conditions such as soil organic carbon (SOC) content remains unknown. By studying two watersheds, one of which is a control site, we show that the consolidated gully serves as an enhanced carbon sink, where the magnitude of the SOC increase rate (1.0 $$\mathrm{g\,C}/\mathrm{m}^2/\mathrm{year}$$) is about twice that of the SOC decrease rate ($$-0.5\,\mathrm{g\,C}/\mathrm{m}^2/\mathrm{year}$$) in the surrounding natural watershed. Over a 50-year co-evolution of landscape and SOC turnover, we find that the dominant mechanisms determining carbon cycling differ between the consolidated gully and natural watersheds. In natural watersheds, the flux of SOC transformation is mainly driven by the flux of SOC transport; in the consolidated gully, transport has little impact on transformation. Furthermore, we find that extending the surface carbon residence time has the potential to efficiently enhance carbon sequestration from the atmosphere at a rate as high as 8 $$\mathrm{g\,C}/\mathrm{m}^2/\mathrm{year}$$, compared to the current 0.4 $$\mathrm{g\,C}/\mathrm{m}^2/\mathrm{year}$$. Successful completion of all gully consolidation would lead to as much as 26.67 $$\mathrm{Gg\,C}/\mathrm{year}$$ being sequestered into soils. This work therefore provides not only an assessment of, and guidance for, the long-term sustainability of the ‘time zero’ landscapes but also a solution for sequestering $$\hbox {CO}_2$$ into soils.
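The regional total at the end of the abstract follows from simple unit arithmetic: a per-area sequestration rate multiplied by the consolidated area. A minimal sketch of that scaling, in Python (the function name and structure are ours, not from the paper):

```python
# Back-of-envelope scaling: annual carbon uptake = per-area rate x total area.
KM2_TO_M2 = 1e6   # 1 km^2 = 1e6 m^2
G_TO_GG = 1e-9    # 1 Gg = 1e9 g

def annual_uptake_gg(rate_g_per_m2_yr, area_km2):
    """Total uptake in Gg C/year for a given per-area rate and area."""
    return rate_g_per_m2_yr * area_km2 * KM2_TO_M2 * G_TO_GG

area = 2667.0  # km^2 of gully land targeted for consolidation
for rate in (0.4, 1.0, 8.0):  # g C/m^2/year, rates quoted in the abstract
    print(f"{rate:>4} g C/m2/yr -> {annual_uptake_gg(rate, area):.3f} Gg C/yr")
```

Note that the quoted ceiling of 26.67 Gg C/year over 2667 km² corresponds to a per-area rate of 10 g C/m²/year under this arithmetic.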

Journal Name:
Scientific Reports
Nature Publishing Group
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    We prove that $${{\,\textrm{poly}\,}}(t) \cdot n^{1/D}$$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a D-dimensional lattice with n qudits are approximate t-designs in various measures. These include the “monomial” measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $${{\,\textrm{poly}\,}}(t)\cdot n$$, due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016) for $$D=1$$. We also improve the “scrambling” and “decoupling” bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($${{\,\mathrm{\textsf{PH}}\,}}$$) is infinite and that certain counting problems are $$\#{\textsf{P}}$$-hard “on average”, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called “anti-concentration”, meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $$O(\sqrt{n})$$ depth suffices for anti-concentration. The proof is based on a previous construction of t-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $$O(n\ln ^2 n)$$, corresponding to depth $$O(\ln ^3 n)$$. We also show a lower bound of $$\Omega (n \ln n)$$ on the size of such circuits in this case. We also prove that anti-concentration is possible in depth $$O(\ln n \ln \ln n)$$ (size $$O(n \ln n \ln \ln n)$$) using a different model.
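The depth improvement from roughly n to roughly $$n^{1/D}$$ can be made concrete for the 53-qubit, two-dimensional case cited above. The sketch below ignores the $${{\,\textrm{poly}\,}}(t)$$ prefactor and all constants, so the numbers only illustrate the scaling, not actual circuit depths:

```python
import math

# Depth scaling comparison, ignoring poly(t) prefactors and constants:
#   old bound   ~ n          (linear depth)
#   new bound   ~ n^(1/D)    for a D-dimensional lattice
#   long-range  ~ ln^3 n     (different circuit model)
n = 53  # qubits, as in the cited 2D sampling experiment

depth_linear = n
depth_2d = n ** (1 / 2)              # D = 2
depth_long_range = math.log(n) ** 3  # asymptotic only: with constant 1 this
                                     # is not yet below n at n = 53

print(f"linear: {depth_linear}, 2D: {depth_2d:.2f}, "
      f"long-range (ln^3 n): {depth_long_range:.2f}")
```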

  2. Abstract

    We present a proof of concept for a spectrally selective thermal mid-IR source based on nanopatterned graphene (NPG) with a typical mobility of CVD-grown graphene (up to 3000 $$\hbox {cm}^2\,\hbox {V}^{-1}\,\hbox {s}^{-1}$$), ensuring scalability to large areas. For that, we solve the electrostatic problem of a conducting hyperboloid with an elliptical wormhole in the presence of an in-plane electric field. The localized surface plasmons (LSPs) on the NPG sheet, partially hybridized with graphene phonons and surface phonons of the neighboring materials, allow for the control and tuning of the thermal emission spectrum in the wavelength regime from $$\lambda =3$$ to $$12\,\upmu \hbox {m}$$ by adjusting the size of and distance between the circular holes in a hexagonal or square lattice structure. Most importantly, the LSPs along with an optical cavity increase the emittance of graphene from about 2.3% for pristine graphene to 80% for NPG, thereby outperforming state-of-the-art pristine graphene light sources operating in the near-infrared by at least a factor of 100. According to our COMSOL calculations, a maximum emission power per area of $$11\times 10^3$$ W/$$\hbox {m}^2$$ at $$T=2000$$ K for a bias voltage of $$V=23$$ V is achieved by controlling the temperature of the hot electrons through Joule heating. By generalizing Planck’s theory to any grey body and deriving the completely general nonlocal fluctuation-dissipation theorem with nonlocal response of surface plasmons in the random phase approximation, we show that the coherence length of the graphene plasmons and the thermally emitted photons can be as large as 13 $$\upmu \hbox {m}$$ and 150 $$\upmu \hbox {m}$$, respectively, providing the opportunity to create phased arrays made of nanoantennas represented by the holes in NPG. The spatial phase variation of the coherence allows for beamsteering of the thermal emission in the range between $$12^\circ$$ and $$80^\circ$$ by tuning the Fermi energy between $$E_F=1.0$$ eV and $$E_F=0.25$$ eV through the gate voltage. Our analysis of the nonlocal hydrodynamic response leads to the conjecture that the diffusion length and viscosity in graphene are frequency-dependent. Using finite-difference time-domain calculations, coupled mode theory, and RPA, we develop the model of a mid-IR light source based on NPG, which will pave the way to graphene-based optical mid-IR communication, mid-IR color displays, mid-IR spectroscopy, and virus detection.
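To see why spectral selectivity in the 3–12 μm window matters, one can compute what fraction of an ideal blackbody’s emission at 2000 K falls in that band. This is textbook Planck-radiation arithmetic, not a calculation from the paper:

```python
import numpy as np

# Fraction of ideal blackbody exitance in the 3-12 um band at T = 2000 K.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
kB = 1.380649e-23       # Boltzmann constant, J/K
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_exitance(lam, T):
    """Spectral exitance M(lambda, T) in W m^-3 (per metre of wavelength)."""
    return 2 * np.pi * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

T = 2000.0
lam = np.linspace(3e-6, 12e-6, 20000)
M = planck_exitance(lam, T)
band_power = np.sum(0.5 * (M[1:] + M[:-1]) * np.diff(lam))  # trapezoid rule
fraction = band_power / (sigma * T**4)
print(f"fraction of total exitance in 3-12 um at {T:.0f} K: {fraction:.3f}")
```

Only about a quarter of the blackbody power lies in this band at 2000 K, which is why shaping the emission spectrum (rather than simply heating a grey body) is the point of the NPG design.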

  3. Abstract

    Let $$\phi $$ be a positive map from the $$n\times n$$ matrices $$\mathcal {M}_n$$ to the $$m\times m$$ matrices $$\mathcal {M}_m$$. It is known that $$\phi $$ is 2-positive if and only if for all $$K\in \mathcal {M}_n$$ and all strictly positive $$X\in \mathcal {M}_n$$, $$\phi (K^*X^{-1}K) \geqslant \phi (K)^*\phi (X)^{-1}\phi (K)$$. This inequality is not generally true if $$\phi $$ is merely a Schwarz map. We show that the corresponding tracial inequality $${{\,\textrm{Tr}\,}}[\phi (K^*X^{-1}K)] \geqslant {{\,\textrm{Tr}\,}}[\phi (K)^*\phi (X)^{-1}\phi (K)]$$ holds for a wider class of positive maps that is specified here. We also comment on the connections of this inequality with various monotonicity statements that have found wide use in mathematical physics, and apply it, and a close relative, to obtain some new, definitive results.
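The operator inequality stated above can be spot-checked numerically for a concrete 2-positive map. The sketch below uses the completely positive (hence 2-positive) map $$\phi(A) = V^*AV$$ on random instances; it is a sanity check of the stated inequality, not part of the paper’s proof:

```python
import numpy as np

# Spot-check of the tracial inequality
#   Tr[phi(K* X^-1 K)] >= Tr[phi(K)* phi(X)^-1 phi(K)]
# for the completely positive map phi(A) = V* A V, phi: M_n -> M_m.
rng = np.random.default_rng(0)
n, m = 4, 3

def rand_complex(shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

V = rand_complex((n, m))                  # generic full-column-rank V
phi = lambda A: V.conj().T @ A @ V

B = rand_complex((n, n))
X = B @ B.conj().T + np.eye(n)            # strictly positive X
K = rand_complex((n, n))

lhs = np.trace(phi(K.conj().T @ np.linalg.inv(X) @ K)).real
pK, pX = phi(K), phi(X)                   # pX is positive definite here
rhs = np.trace(pK.conj().T @ np.linalg.inv(pX) @ pK).real
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}, lhs - rhs = {lhs - rhs:.6f}")
assert lhs >= rhs - 1e-9
```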

  4. Abstract

    Consider a quantum cat map M associated with a matrix $$A\in {{\,\textrm{Sp}\,}}(2n,{\mathbb {Z}})$$, which is a common toy model in quantum chaos. We show that the mass of eigenfunctions of M on any nonempty open set in the position–frequency space satisfies a lower bound which is uniform in the semiclassical limit, under two assumptions: (1) there is a unique simple eigenvalue of A of largest absolute value and (2) the characteristic polynomial of A is irreducible over the rationals. This is similar to previous work (Dyatlov and Jin in Acta Math 220(2):297–339, 2018; Dyatlov et al. in J Am Math Soc 35(2):361–465, 2022) on negatively curved surfaces and (Schwartz in The full delocalization of eigenstates for the quantized cat map, 2021) on quantum cat maps with $$n=1$$, but this paper gives the first results of this type which apply in any dimension. When condition (2) fails, we provide a weaker version of the result and discuss relations to existing counterexamples. We also obtain corresponding statements regarding semiclassical measures and damped quantum cat maps.
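Conditions (1) and (2) are easy to verify for a concrete example. The sketch below checks them for the classical Arnold cat map $$A=\begin{pmatrix}2&1\\1&1\end{pmatrix}\in {{\,\textrm{Sp}\,}}(2,{\mathbb {Z}})$$, the standard $$n=1$$ instance; the choice of matrix is ours, not taken from this paper:

```python
import numpy as np

# Verify conditions (1) and (2) for the Arnold cat map A = [[2,1],[1,1]].
A = np.array([[2, 1], [1, 1]])

# (1) unique simple eigenvalue of largest absolute value
eigvals = np.linalg.eigvals(A)
mags = np.sort(np.abs(eigvals))
assert mags[-1] > mags[-2] + 1e-12

# (2) characteristic polynomial x^2 - 3x + 1 irreducible over Q:
# a monic integer quadratic is irreducible over the rationals iff its
# discriminant (here 3^2 - 4*1 = 5) is not a perfect square.
tr, det = int(np.trace(A)), round(np.linalg.det(A))
disc = tr**2 - 4 * det
assert int(disc**0.5) ** 2 != disc

print(f"largest eigenvalue: {mags[-1]:.6f}")  # (3 + sqrt(5))/2 ~ 2.618034
```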

  5. Abstract

    In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

    Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time).

    Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $${\tilde{O}}(n^{\lceil k/2\rceil })$$ (where $$k\ge 3$$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $${\tilde{O}}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count k-cliques in unweighted graphs, and had worse running times for small k.

    Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega (n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm.

    Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $$\#$$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $$2^{n/2}/n^{\omega (1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

    Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time.

    Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017], which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
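For readers unfamiliar with the first problem above, the property being certified is simply the absence of a zero-sum triple. A plain $$O(n^2)$$ hash-set check of that property, in Python, is shown below; this is only the problem definition, not the Merlin–Arthur protocol itself (which verifies a short proof faster than solving the problem):

```python
# 3-SUM property check in O(n^2) time using a hash set.
# The Merlin-Arthur protocol certifies the *absence* of such a triple
# in near-linear verifier time; this brute-force check just defines the task.
def has_3sum(nums):
    """Return True iff some three distinct positions i < k < j sum to zero."""
    n = len(nums)
    for i in range(n):
        seen = set()  # values at positions strictly between i and j
        for j in range(i + 1, n):
            if -(nums[i] + nums[j]) in seen:
                return True
            seen.add(nums[j])
    return False

print(has_3sum([1, 2, -3]))  # a zero-sum triple exists
print(has_3sum([1, 2, 4]))   # no zero-sum triple
```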
