We perform path-integral molecular dynamics (PIMD), ring-polymer MD (RPMD), and classical MD simulations of H$_2$O and D$_2$O.
Continuous monitoring of blood glucose (BG) levels is a key aspect of diabetes management. Patients with Type 1 diabetes (T1D) require an effective tool to monitor these levels in order to make appropriate decisions regarding insulin administration and food intake to keep BG levels in the target range. Effectively and accurately predicting future BG levels at multiple time steps ahead benefits a patient with diabetes by helping them decrease the risks of extremes in BG, including hypo- and hyperglycemia. In this study, we present a novel multi-component deep learning model that predicts BG levels in a multi-step look-ahead fashion. The model is evaluated both quantitatively and qualitatively on actual blood glucose data for 97 patients. For the prediction horizon (PH) of 30 minutes, the average values for
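The multi-step look-ahead scheme described above can be sketched as recursive one-step forecasting, where each prediction is fed back as input for the next step. The `trend_model` below is a hypothetical stand-in for the paper's deep model, and the sample readings are made up; only the recursion pattern is the point.

```python
def predict_multistep(one_step_model, history, n_steps, window=6):
    """Multi-step look-ahead by recursion: feed each one-step
    prediction back in as if it were an observation."""
    buf = list(history[-window:])
    preds = []
    for _ in range(n_steps):
        nxt = one_step_model(buf)
        preds.append(nxt)
        buf = buf[1:] + [nxt]
    return preds

# Hypothetical one-step predictor: linear extrapolation of the recent trend.
# (A stand-in for the paper's model; any one-step predictor plugs in here.)
def trend_model(buf):
    return buf[-1] + (buf[-1] - buf[0]) / (len(buf) - 1)

# CGM-style readings every 5 minutes; a PH of 30 minutes means 6 steps ahead.
history = [112, 115, 119, 124, 130, 137]
forecast = predict_multistep(trend_model, history, n_steps=6)
```

Any one-step model (including a trained network) can be rolled forward this way; the trade-off is that prediction errors compound with each recursive step.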
 Award ID(s):
 1910539
 NSF-PAR ID:
 10360998
 Publisher / Repository:
 Nature Publishing Group
 Date Published:
 Journal Name:
 Scientific Reports
 Volume:
 11
 Issue:
 1
 ISSN:
 2045-2322
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

Abstract We perform path-integral molecular dynamics (PIMD), ring-polymer MD (RPMD), and classical MD simulations of H$_2$O and D$_2$O using the qTIP4P/F water model over a wide range of temperatures and pressures. The density $\rho(T)$, isothermal compressibility $\kappa_T(T)$, and self-diffusion coefficients $D(T)$ of H$_2$O and D$_2$O are in excellent agreement with the available experimental data; the isobaric heat capacities $C_P(T)$ obtained from PIMD and MD simulations agree qualitatively well with the experiments. Some of these thermodynamic properties exhibit anomalous maxima upon isobaric cooling, consistent with recent experiments and with the possibility that H$_2$O and D$_2$O exhibit a liquid-liquid critical point (LLCP) at low temperatures and positive pressures. The data from PIMD/MD for H$_2$O and D$_2$O can be fitted remarkably well using the Two-State Equation of State (TSEOS). Using the TSEOS, we estimate that the LLCP for qTIP4P/F H$_2$O, from PIMD simulations, is located at $P_c = 167 \pm 9$ MPa, $T_c = 159 \pm 6$ K, and $\rho_c = 1.02 \pm 0.01$ g/cm$^3$. Isotope substitution effects are important; the LLCP location in qTIP4P/F D$_2$O is estimated to be $P_c = 176 \pm 4$ MPa, $T_c = 177 \pm 2$ K, and $\rho_c = 1.13 \pm 0.01$ g/cm$^3$. Interestingly, for the water model studied, differences in the LLCP location from PIMD and MD simulations suggest that nuclear quantum effects (i.e., atomic delocalization) play an important role in the thermodynamics of water around the LLCP (from the MD simulations of qTIP4P/F water, $P_c = 203 \pm 4$ MPa, $T_c = 175 \pm 2$ K, and $\rho_c = 1.03 \pm 0.01$ g/cm$^3$).
Overall, our results strongly support the LLPT scenario to explain the anomalous behavior of water, independently of the fundamental differences between classical MD and PIMD techniques. The reported values of $T_c$ for D$_2$O and, particularly, H$_2$O suggest that improved water models are needed for the study of supercooled water.
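As a quick arithmetic check on the critical temperatures quoted above, the isotope shift (D$_2$O vs. H$_2$O, both from PIMD) and the nuclear-quantum shift (classical MD vs. PIMD for H$_2$O) can be computed with uncertainties combined in quadrature, under the assumption that the quoted errors are uncorrelated:

```python
import math

# LLCP temperatures (K) quoted in the abstract.
Tc_H2O_pimd, dT_H2O_pimd = 159.0, 6.0   # PIMD, H2O
Tc_D2O_pimd, dT_D2O_pimd = 177.0, 2.0   # PIMD, D2O
Tc_H2O_md,   dT_H2O_md   = 175.0, 2.0   # classical MD, H2O

def shift(a, da, b, db):
    """Difference a - b with quadrature-combined uncertainty."""
    return a - b, math.hypot(da, db)

iso_shift = shift(Tc_D2O_pimd, dT_D2O_pimd, Tc_H2O_pimd, dT_H2O_pimd)  # isotope effect
nqe_shift = shift(Tc_H2O_md, dT_H2O_md, Tc_H2O_pimd, dT_H2O_pimd)      # nuclear quantum effect
```

Both shifts come out near 17 K with comparable uncertainties, which is what makes the isotope and nuclear-quantum effects on the LLCP location significant.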
Abstract Results are presented from a search for the Higgs boson decay H → Zγ, where Z → $\ell^+\ell^-$ with $\ell$ = e or μ. The search is performed using a sample of proton-proton (pp) collision data at a center-of-mass energy of 13 TeV, recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of 138 fb$^{-1}$. Events are assigned to mutually exclusive categories, which exploit differences in both event topology and kinematics of distinct Higgs production mechanisms to enhance signal sensitivity. The signal strength $\mu$, defined as the product of the cross section and the branching fraction $[\sigma(\mathrm{pp}\to\mathrm{H})\,\mathcal{B}(\mathrm{H}\to\mathrm{Z}\gamma)]$ relative to the standard model prediction, is extracted from a simultaneous fit to the $\ell^+\ell^-\gamma$ invariant mass distributions in all categories and is measured to be $\mu = 2.4 \pm 0.9$ for a Higgs boson mass of 125.38 GeV. The statistical significance of the observed excess of events is 2.7 standard deviations. This measurement corresponds to $[\sigma(\mathrm{pp}\to\mathrm{H})\,\mathcal{B}(\mathrm{H}\to\mathrm{Z}\gamma)] = 0.21 \pm 0.08$ pb. The observed (expected) upper limit at 95% confidence level on $\mu$ is 4.1 (1.8), where the expected limit is calculated under the background-only hypothesis. The ratio of branching fractions $\mathcal{B}(\mathrm{H}\to\mathrm{Z}\gamma)/\mathcal{B}(\mathrm{H}\to\gamma\gamma)$ is measured to be $1.5^{+0.7}_{-0.6}$, which agrees with the standard model prediction of $0.69 \pm 0.04$ at the 1.5 standard deviation level.
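The quoted 1.5σ agreement comes from the full likelihood fit; a crude symmetric-error pull, using the downward uncertainty (since the measurement lies above the SM value) combined in quadrature with the SM uncertainty, lands in the same ballpark. This is only an illustration of the arithmetic, not the paper's procedure:

```python
import math

ratio_meas, err_lo, err_hi = 1.5, 0.6, 0.7  # B(H→Zγ)/B(H→γγ), measured
ratio_sm, err_sm = 0.69, 0.04               # standard model prediction

# Symmetric-error pull; ignores the asymmetric uncertainty and correlations.
pull = (ratio_meas - ratio_sm) / math.hypot(err_lo, err_sm)
```

This gives roughly 1.3σ, consistent with the 1.5σ quoted from the full treatment.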
Abstract We present the first unquenched lattice-QCD calculation of the form factors for the decay $B\rightarrow D^*\ell\nu$ at nonzero recoil. Our analysis includes 15 MILC ensembles with $N_f = 2+1$ flavors of asqtad sea quarks, with a strange quark mass close to its physical mass. The lattice spacings range from $a\approx 0.15$ fm down to 0.045 fm, while the ratio between the light- and the strange-quark masses ranges from 0.05 to 0.4. The valence b and c quarks are treated using the Wilson-clover action with the Fermilab interpretation, whereas the light sector employs asqtad staggered fermions. We extrapolate our results to the physical point in the continuum limit using rooted staggered heavy-light meson chiral perturbation theory. Then we apply a model-independent parametrization to extend the form factors to the full kinematic range. With this parametrization we perform a joint lattice-QCD/experiment fit using several experimental datasets to determine the CKM matrix element $|V_{cb}|$. We obtain $|V_{cb}| = (38.40 \pm 0.68_{\text{th}} \pm 0.34_{\text{exp}} \pm 0.18_{\text{EM}})\times 10^{-3}$. The first error is theoretical, the second comes from experiment and the last one includes electromagnetic and electroweak uncertainties, with an overall $\chi^2/\text{dof} = 126/84$, which illustrates the tensions between the experimental data sets, and between theory and experiment. This result is in agreement with previous exclusive determinations, but the tension with the inclusive determination remains. Finally, we integrate the differential decay rate obtained solely from lattice data to predict $R(D^*) = 0.265 \pm 0.013$, which confirms the current tension between theory and experiment.
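For a single headline uncertainty, the three quoted errors on $|V_{cb}|$ can be combined in quadrature, assuming the sources are uncorrelated (the paper's full error budget may refine this):

```python
import math

# |Vcb| x 10^3 and its three quoted uncertainties from the abstract.
vcb, err_th, err_exp, err_em = 38.40, 0.68, 0.34, 0.18

# Quadrature combination, assuming uncorrelated error sources.
err_total = math.sqrt(err_th**2 + err_exp**2 + err_em**2)
```

The total comes out near 0.78, i.e. the theoretical error dominates the budget.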
Abstract We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F\,p_{\text{ideal}} + (1-F)\,p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log n)$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays.
The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
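The white-noise decomposition $p_{\text{noisy}} \approx F\,p_{\text{ideal}} + (1-F)\,p_{\text{unif}}$ is easy to illustrate numerically. The sketch below uses a made-up Porter-Thomas-like "ideal" distribution rather than an actual circuit simulation; it shows that mixing with the uniform distribution at weight $F = e^{-2s\epsilon}$ shrinks the total variation distance to uniform by exactly the factor $F$:

```python
import math
import random

n = 10                       # qubits -> 2^n outcomes
s, eps = 200, 0.001          # two-qubit gates, error rate per location
F = math.exp(-2 * s * eps)   # leading-order linear cross-entropy fidelity

dim = 2 ** n
rng = random.Random(0)
# Stand-in "ideal" output distribution with exponentially distributed
# weights (a Porter-Thomas-like shape); not from a real circuit.
w = [rng.expovariate(1.0) for _ in range(dim)]
tot = sum(w)
p_ideal = [x / tot for x in w]
p_unif = 1.0 / dim

# White-noise model: p_noisy = F * p_ideal + (1 - F) * p_unif
p_noisy = [F * p + (1 - F) * p_unif for p in p_ideal]

# Total variation distance to uniform shrinks by exactly the factor F,
# since p_noisy - p_unif = F * (p_ideal - p_unif) pointwise.
tv_ideal = 0.5 * sum(abs(p - p_unif) for p in p_ideal)
tv_noisy = 0.5 * sum(abs(p - p_unif) for p in p_noisy)
```

The pointwise identity makes the linear relation exact for the mixture model itself; the content of the paper is that real noisy circuits are close to this mixture.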
Abstract Negative correlations in the sequential evolution of interspike intervals (ISIs) are a signature of memory in neuronal spike trains. They provide coding benefits including firing-rate stabilization, improved detectability of weak sensory signals, and enhanced transmission of information by improving the signal-to-noise ratio. Primary electrosensory afferent spike trains in weakly electric fish fall into two categories based on the pattern of ISI correlations: non-bursting units have negative correlations which remain negative but decay to zero with increasing lags (Type I ISI correlations), and bursting units have oscillatory (alternating-sign) correlations which damp to zero with increasing lags (Type II ISI correlations). Here, we predict and match observed ISI correlations in these afferents using a stochastic dynamic threshold model. We determine the ISI correlation function as a function of an arbitrary discrete noise correlation function $R_k$, where $k$ is a multiple of the mean ISI. The function permits forward and inverse calculations of the correlation function. Both types of correlation functions can be generated by adding colored noise to the spike threshold, with Type I correlations generated with slow noise and Type II correlations generated with fast noise. A first-order autoregressive (AR) process with a single parameter is sufficient to predict and accurately match both types of afferent ISI correlation functions, with the type being determined by the sign of the AR parameter. The predicted and experimentally observed correlations are in geometric progression. The theory predicts that the limiting sum of ISI correlations is $-0.5$, yielding a perfect DC block in the power spectrum of the spike train. Observed ISI correlations from afferents have a limiting sum that is slightly larger at $-0.475 \pm 0.04$ ($\text{mean} \pm \text{s.d.}$). We conclude that the underlying process for generating ISIs may be a simple combination of low-order AR and moving average processes and discuss the results from the perspective of optimal coding.
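A deliberately simplified caricature of threshold noise makes the sign structure concrete. Suppose, as a toy assumption of mine rather than the paper's model, that the $k$-th ISI is the mean ISI plus the difference of successive AR(1) threshold-noise samples, $I_k = \bar{I} + \eta_k - \eta_{k-1}$. Then the lag-$k$ serial correlation works out to $\rho_k = -(1-\varphi)\varphi^{k-1}/2$: a geometric progression whose sign pattern is set by the sign of the AR parameter $\varphi$ (all-negative for slow noise, $\varphi>0$; alternating for fast noise, $\varphi<0$), and whose sum is exactly $-1/2$, as in the abstract:

```python
import random

def ar1(phi, n, sigma=1.0, seed=1):
    """AR(1) colored noise: eta_t = phi * eta_{t-1} + Gaussian white noise."""
    rng = random.Random(seed)
    eta, out = 0.0, []
    for _ in range(n):
        eta = phi * eta + rng.gauss(0.0, sigma)
        out.append(eta)
    return out

def isi_serial_corr(phi, n=200_000, lag=1, mean_isi=10.0):
    """Simulate ISIs I_k = mean + eta_k - eta_{k-1}, return lag correlation."""
    eta = ar1(phi, n + 1)
    isi = [mean_isi + eta[k + 1] - eta[k] for k in range(n)]
    m = sum(isi) / n
    var = sum((x - m) ** 2 for x in isi) / n
    cov = sum((isi[k] - m) * (isi[k + lag] - m) for k in range(n - lag)) / (n - lag)
    return cov / var

# Theory for this toy model: rho_k = -(1 - phi) * phi**(k - 1) / 2.
rho1_slow = isi_serial_corr(phi=0.5)          # slow noise, Type I:  rho_1 = -0.25
rho1_fast = isi_serial_corr(phi=-0.5)         # fast noise, Type II: rho_1 = -0.75
rho2_fast = isi_serial_corr(phi=-0.5, lag=2)  # alternates sign:     rho_2 = +0.375
```

The $-1/2$ sum follows from the geometric series: $\sum_{k\ge 1}\rho_k = -\tfrac{1-\varphi}{2}\cdot\tfrac{1}{1-\varphi} = -\tfrac{1}{2}$, independent of $\varphi$, matching the predicted perfect DC block.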