Himalayan lakes represent critical water resources, culturally important waterbodies, and potential hazards. Some of these lakes experience dramatic water-level changes, responding to seasonal monsoon rains and post-monsoonal draining. To address the paucity of direct observations of hydrology in retreating mountain glacial systems, we describe a field program in a series of high-altitude lakes in Sagarmatha National Park, adjacent to Ngozumba, the largest glacier in Nepal. In situ observations find extreme (>12 m) seasonal water-level changes in a 60-m-deep lateral-moraine-dammed lake (lacking surface outflow) during a 16-month period, equivalent to a 5$$\times 10^{6}$$ m$$^{3}$$ volume change annually.
Atmospheric rivers (ARs) reach High Mountain Asia (HMA) about 10 days per month during the winter and spring, resulting in about 20 mm day$$^{-1}$$ of precipitation.
- NSF-PAR ID:
- 10448171
- Publisher / Repository:
- Springer Science + Business Media
- Date Published:
- Journal Name:
- Climate Dynamics
- Volume:
- 62
- Issue:
- 1
- ISSN:
- 0930-7575
- Format(s):
- Medium: X; Size: p. 589-607
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract The water column thermal structure was also monitored over the same 16-month period. A hydraulic model is constructed, validated against observed water levels, and used to estimate the hydraulic conductivities of the moraine soils damming the lake, improving our understanding of this complex hydrological system. Our findings indicate that lake level relative to the damming glacier's surface height is the key criterion for large lake fluctuations, while lakes lying below the glacier surface, regulated by surface outflow, possess only minor seasonal water-level fluctuations. Thus, lakes adjacent to glaciers may exhibit very different filling/draining dynamics based on the presence or absence of surface outflows and on elevation relative to retreating glaciers, and consequently may have very different fates in the next few decades as the climate warms. -
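As a quick plausibility check on the figures in the lake abstract (a >12 m seasonal water-level swing and an annual volume change of about 5$$\times 10^{6}$$ m$$^{3}$$), the implied mean lake surface area can be sketched as follows; the vertical-walled prism geometry and the exact stage change used here are illustrative assumptions, not values from the study:

```python
# Rough consistency check: spreading a 5e6 m^3 annual volume change over
# a ~12 m seasonal water-level swing implies a mean lake surface area of
# roughly 4e5 m^2 (~0.4 km^2), assuming (simplistically) vertical lake
# walls over the fluctuation range.

seasonal_level_change_m = 12.0      # observed seasonal water-level swing (>12 m)
annual_volume_change_m3 = 5.0e6     # quoted annual volume change

# Prism approximation: volume = area * height change
implied_area_m2 = annual_volume_change_m3 / seasonal_level_change_m
implied_area_km2 = implied_area_m2 / 1.0e6

print(f"implied mean surface area: {implied_area_m2:.2e} m^2 "
      f"({implied_area_km2:.2f} km^2)")
```

Dividing volume change by stage change is only a consistency sketch; real lake hypsometry is not vertical-walled, so the true mean area could differ.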
Abstract We develop a Newtonian model of a deep tidal disruption event (TDE), for which the pericenter distance of the star, $$r_p$$, is well within the tidal radius of the black hole, $$r_t$$, i.e., when $$\beta \equiv r_t/r_p \gg 1$$. We find that shocks form for $$\beta \gtrsim 3$$, but they are weak (with Mach numbers ∼1) for all $$\beta $$, and that they reach the center of the star prior to the time of maximum adiabatic compression for $$\beta \gtrsim 10$$. The maximum density and temperature reached during the TDE follow much shallower relations with $$\beta $$ than previously predicted. Below $$\beta \simeq 10$$, this shallower dependence occurs because the pressure gradient is dynamically significant before the pressure is comparable to the ram pressure of the free-falling gas, while above $$\beta \simeq 10$$, we find that shocks prematurely halt the compression and yield shallower scalings. We find excellent agreement between our results and high-resolution simulations. Our results demonstrate that, in the Newtonian limit, the compression experienced by the star is completely independent of the mass of the black hole. We discuss our results in the context of existing (affine) models, polytropic versus non-polytropic stars, and general relativistic effects, which become important when the pericenter of the star nears the direct capture radius, at $$\beta \sim 12.5$$ (2.7) for a solar-like star disrupted by a $$10^6 M_\odot $$ ($$10^7 M_\odot $$) supermassive black hole. -
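To make the impact parameter $$\beta \equiv r_t/r_p$$ concrete, the standard order-of-magnitude tidal radius $$r_t \approx R_*(M_{\rm BH}/M_*)^{1/3}$$ can be evaluated for a solar-like star; the constants and the comparison to the Schwarzschild radius below are textbook estimates for illustration, not values computed in the paper:

```python
# Illustrative evaluation of the tidal radius r_t ~ R_*(M_bh/M_*)^(1/3)
# for a solar-like star around supermassive black holes of 1e6 and 1e7
# solar masses, compared with the Schwarzschild radius r_s = 2GM/c^2.

R_SUN_M = 6.957e8       # solar radius [m]
G = 6.674e-11           # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8             # speed of light [m/s]
M_SUN_KG = 1.989e30     # solar mass [kg]

def tidal_radius(m_bh_solar, r_star=R_SUN_M, m_star_solar=1.0):
    """Order-of-magnitude tidal radius in meters."""
    return r_star * (m_bh_solar / m_star_solar) ** (1.0 / 3.0)

for m_bh in (1e6, 1e7):
    r_t = tidal_radius(m_bh)
    r_s = 2 * G * m_bh * M_SUN_KG / C**2   # Schwarzschild radius [m]
    print(f"M_bh = {m_bh:.0e} Msun: r_t = {r_t:.2e} m, "
          f"r_s = {r_s:.2e} m, r_t/r_s = {r_t / r_s:.1f}")
```

With a direct-capture scale of roughly two Schwarzschild radii for near-parabolic orbits, these ratios land close to the quoted onset of strong relativistic effects near $$\beta \sim 12.5$$ (2.7) for a $$10^6 M_\odot $$ ($$10^7 M_\odot $$) black hole.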
Abstract The Dushnik–Miller dimension of a poset P is the least d for which P can be embedded into a product of d chains. Lewis and Souza showed that the dimension of the divisibility order on the interval of integers $$[N/\kappa , N]$$ is bounded above by $$\kappa (\log \kappa )^{1+o(1)}$$ and below by $$\Omega ((\log \kappa /\log \log \kappa )^2)$$. We improve the upper bound to $$O((\log \kappa )^3/(\log \log \kappa )^2)$$. We deduce this bound from a more general result on posets of multisets ordered by inclusion. We also consider other divisibility orders and give a bound for polynomials ordered by divisibility. -
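The product-of-chains definition of dimension can be made concrete for divisibility orders: mapping each integer to its vector of prime exponents embeds divisibility into a product of chains, one chain per prime, so the dimension is at most the number of primes involved. A minimal self-contained sketch (not the construction from the paper):

```python
# Divisibility on a set of integers embeds into a product of chains via
# prime-exponent vectors: a | b iff the exponent vector of a is <= that
# of b componentwise. This bounds the Dushnik-Miller dimension by the
# number of distinct primes dividing the elements.

def exponent_vector(n, primes):
    """Vector of prime exponents of n over the given prime list."""
    vec = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        vec.append(e)
    return tuple(vec)

primes = [2, 3, 5]
nums = [2, 3, 4, 5, 6, 10, 12, 15, 30, 60]  # all factor over {2, 3, 5}
vecs = {n: exponent_vector(n, primes) for n in nums}

# Check: divisibility agrees with the componentwise (chain-product) order
for a in nums:
    for b in nums:
        divides = (b % a == 0)
        dominated = all(x <= y for x, y in zip(vecs[a], vecs[b]))
        assert divides == dominated, (a, b)

print("divisibility == componentwise order on exponent vectors: OK")
```

On intervals $$[N/\kappa , N]$$ many primes appear, so this naive embedding is far from optimal; the point of the bounds above is that far fewer chains suffice.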
Abstract We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text {noisy}}$$ and the corresponding noiseless output distribution $$p_{\text {ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark F that measures this correlation behaves as $$F=\text {exp}(-2s\epsilon \pm O(s\epsilon ^2))$$, where $$\epsilon $$ is the probability of error per circuit location and s is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text {noisy}}$$ and the uniform distribution $$p_{\text {unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon \sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon \sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega (n\log (n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon ^{-1} \ge {\tilde{\Omega }}(n)$$, which is needed to ensure errors are scrambled faster than F decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds. -
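The white-noise decomposition quoted above can be sketched numerically: under the model $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$, the total variation distance to the uniform distribution shrinks by exactly the fidelity factor F. The toy four-outcome distribution and the parameter values below are arbitrary illustrative choices, not data from the paper:

```python
import math

def fidelity(s, eps):
    """Leading-order cross-entropy fidelity F = exp(-2*s*eps)."""
    return math.exp(-2 * s * eps)

def white_noise_mix(p_ideal, F):
    """White-noise model: p_noisy = F * p_ideal + (1 - F) * p_unif."""
    n = len(p_ideal)
    return [F * p + (1 - F) / n for p in p_ideal]

def total_variation(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Toy 'ideal' output distribution over 4 outcomes (arbitrary example)
p_ideal = [0.4, 0.3, 0.2, 0.1]
s, eps = 100, 0.005              # 100 two-qubit gates, 0.5% error rate
F = fidelity(s, eps)             # exp(-1), about 0.368

p_noisy = white_noise_mix(p_ideal, F)
p_unif = [0.25] * 4

# Under this model TV(p_noisy, p_unif) = F * TV(p_ideal, p_unif):
# the distance to uniform decays at exactly the fidelity's rate.
tv_ideal = total_variation(p_ideal, p_unif)
tv_noisy = total_variation(p_noisy, p_unif)
print(f"F = {F:.3f}, TV(ideal, unif) = {tv_ideal:.3f}, "
      f"TV(noisy, unif) = {tv_noisy:.3f}")
```

The exact factor-of-F contraction is a property of the idealized mixture; the paper's contribution is bounding how far real noisy circuits deviate from it.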
Abstract We propose a new measurement of the ratio of positron-proton to electron-proton elastic scattering at DESY. The purpose is to determine the contributions beyond single-photon exchange, which are essential for the Quantum Electrodynamic (QED) description of the most fundamental process in hadronic physics. By utilizing a 20 cm long liquid hydrogen target in conjunction with the extracted beam from the DESY synchrotron, we can achieve an average luminosity of $$2.12\times 10^{35}$$ cm$$^{-2}\cdot $$s$$^{-1}$$ ($$\approx 200$$ times the luminosity achieved by OLYMPUS). The proposed two-photon exchange experiment (TPEX) entails a commissioning run at a beam energy of 2 GeV, followed by measurements at 3 GeV, thereby providing new data up to $$Q^2=4.6$$ (GeV/c)$$^2$$ (twice the range of current measurements). We present and discuss the proposed experimental setup, run plan, and expectations.
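For a sense of scale, a luminosity quoted in cm$$^{-2}\cdot $$s$$^{-1}$$ converts directly into an event rate via rate = L · σ; the elastic cross-section value below is a placeholder assumption for illustration, not a number from the proposal:

```python
# Order-of-magnitude event rate at the quoted average luminosity.
# rate [Hz] = luminosity [cm^-2 s^-1] * cross section [cm^2]

lumi = 2.12e35          # cm^-2 s^-1 (quoted average luminosity)
sigma_cm2 = 1.0e-32     # assumed elastic cross section, ~10 nb (placeholder)

rate_hz = lumi * sigma_cm2
print(f"event rate ~ {rate_hz:.1e} Hz under these assumptions")
```

Any realistic rate depends on the actual differential cross section integrated over the detector acceptance, so this product only fixes the order of magnitude.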