

Title: Multi-step ahead predictive model for blood glucose concentrations of type-1 diabetic patients
Abstract

Continuous monitoring of blood glucose (BG) levels is a key aspect of diabetes management. Patients with Type-1 diabetes (T1D) require an effective tool to monitor these levels in order to make appropriate decisions regarding insulin administration and food intake to keep BG levels in the target range. Effectively and accurately predicting future BG levels multiple time steps ahead benefits a patient with diabetes by helping them decrease the risks of extremes in BG, including hypo- and hyperglycemia. In this study, we present a novel multi-component deep learning model that predicts BG levels in a multi-step look-ahead fashion. The model is evaluated both quantitatively and qualitatively on actual blood glucose data from 97 patients. For a prediction horizon (PH) of 30 min, the average values of the root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and normalized root mean squared error (NRMSE) are 23.22 ± 6.39 mg/dL, 16.77 ± 4.87 mg/dL, 12.84 ± 3.68%, and 0.08 ± 0.01, respectively. When Clarke and Parkes error grid analyses were performed comparing predicted BG with actual BG, the results showed average percentages of points in Zone A of 80.17 ± 9.20% and 84.81 ± 6.11%, respectively. We offer this tool as a mechanism to enhance the predictive capabilities of algorithms for patients with T1D.
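
To make the four reported metrics concrete, here is a minimal sketch in Python of how they are commonly computed. The function and the toy BG traces are hypothetical (not from the paper), and the NRMSE normalization used here (RMSE divided by the range of the reference values) is an assumption, since several conventions exist.

```python
import numpy as np

def bg_metrics(y_true, y_pred):
    """Compute RMSE, MAE, MAPE, and NRMSE for BG predictions (mg/dL).

    NRMSE convention is an assumption: RMSE divided by the range of the
    reference values (other works divide by the mean or the std instead).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))              # mg/dL
    mae = np.mean(np.abs(err))                     # mg/dL
    mape = 100.0 * np.mean(np.abs(err / y_true))   # percent (BG is always > 0)
    nrmse = rmse / (y_true.max() - y_true.min())   # dimensionless
    return rmse, mae, mape, nrmse

# Toy example: actual vs. 30-min-ahead predicted BG traces (hypothetical values)
actual = [110, 135, 160, 180, 170, 150]
predicted = [115, 128, 150, 190, 175, 140]
print(bg_metrics(actual, predicted))
```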

 
Award ID(s):
1910539
NSF-PAR ID:
10360998
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Reports
Volume:
11
Issue:
1
ISSN:
2045-2322
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    We perform path-integral molecular dynamics (PIMD), ring-polymer MD (RPMD), and classical MD simulations of H$$_2$$O and D$$_2$$O using the q-TIP4P/F water model over a wide range of temperatures and pressures. The density $$\rho(T)$$, isothermal compressibility $$\kappa_T(T)$$, and self-diffusion coefficients $$D(T)$$ of H$$_2$$O and D$$_2$$O are in excellent agreement with available experimental data; the isobaric heat capacity $$C_P(T)$$ obtained from PIMD and MD simulations agrees qualitatively well with the experiments. Some of these thermodynamic properties exhibit anomalous maxima upon isobaric cooling, consistent with recent experiments and with the possibility that H$$_2$$O and D$$_2$$O exhibit a liquid-liquid critical point (LLCP) at low temperatures and positive pressures. The data from PIMD/MD for H$$_2$$O and D$$_2$$O can be fitted remarkably well using the two-state equation of state (TSEOS). Using the TSEOS, we estimate that the LLCP for q-TIP4P/F H$$_2$$O, from PIMD simulations, is located at $$P_c = 167 \pm 9$$ MPa, $$T_c = 159 \pm 6$$ K, and $$\rho_c = 1.02 \pm 0.01$$ g/cm$$^3$$. Isotope substitution effects are important; the LLCP location in q-TIP4P/F D$$_2$$O is estimated to be $$P_c = 176 \pm 4$$ MPa, $$T_c = 177 \pm 2$$ K, and $$\rho_c = 1.13 \pm 0.01$$ g/cm$$^3$$. Interestingly, for the water model studied, differences in the LLCP location from PIMD and MD simulations suggest that nuclear quantum effects (i.e., atomic delocalization) play an important role in the thermodynamics of water around the LLCP (from the MD simulations of q-TIP4P/F water, $$P_c = 203 \pm 4$$ MPa, $$T_c = 175 \pm 2$$ K, and $$\rho_c = 1.03 \pm 0.01$$ g/cm$$^3$$). Overall, our results strongly support the LLPT scenario to explain water's anomalous behavior, independently of the fundamental differences between classical MD and PIMD techniques. The reported values of $$T_c$$ for D$$_2$$O and, particularly, H$$_2$$O suggest that improved water models are needed for the study of supercooled water.
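
    For context, the two-state ansatz underlying a TSEOS fit writes the liquid's Gibbs energy as a nonideal mixture of a high-density structure A and a low-density structure B. A minimal sketch of the standard form (the coefficients and their expansions in $$T$$ and $$P$$ are fitting choices, not given in this abstract): $$G = G_A + x\,\Delta G_{BA} + RT\left[x\ln x + (1-x)\ln(1-x) + \omega\,x(1-x)\right],$$ where the low-density fraction $$x$$ is fixed by the equilibrium condition $$(\partial G/\partial x)_{T,P} = 0$$, and a sufficiently strong nonideality $$\omega$$ drives a liquid-liquid transition terminating at the LLCP.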

     
  2. Abstract

    Results are presented from a search for the Higgs boson decay H → Zγ, where Z → ℓ⁺ℓ⁻ with ℓ = e or μ. The search is performed using a sample of proton-proton (pp) collision data at a center-of-mass energy of 13 TeV, recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of 138 fb⁻¹. Events are assigned to mutually exclusive categories, which exploit differences in both event topology and kinematics of distinct Higgs production mechanisms to enhance signal sensitivity. The signal strength μ, defined as the product of the cross section and the branching fraction $$\left[\sigma(\textrm{pp}\to \textrm{H})\,\mathcal{B}(\textrm{H}\to \textrm{Z}\upgamma)\right]$$ relative to the standard model prediction, is extracted from a simultaneous fit to the ℓ⁺ℓ⁻γ invariant mass distributions in all categories and is measured to be μ = 2.4 ± 0.9 for a Higgs boson mass of 125.38 GeV. The statistical significance of the observed excess of events is 2.7 standard deviations. This measurement corresponds to $$\left[\sigma(\textrm{pp}\to \textrm{H})\,\mathcal{B}(\textrm{H}\to \textrm{Z}\upgamma)\right] = 0.21 \pm 0.08$$ pb. The observed (expected) upper limit at 95% confidence level on μ is 4.1 (1.8), where the expected limit is calculated under the background-only hypothesis. The ratio of branching fractions $$\mathcal{B}(\textrm{H}\to \textrm{Z}\upgamma)/\mathcal{B}(\textrm{H}\to \upgamma\upgamma)$$ is measured to be $${1.5}_{-0.6}^{+0.7}$$, which agrees with the standard model prediction of 0.69 ± 0.04 at the 1.5 standard deviation level.
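
    As a quick consistency check using only the numbers quoted above: since the signal strength is the measured product $$\sigma\mathcal{B}$$ divided by its standard model prediction, the implied SM value is $$\left[\sigma(\textrm{pp}\to\textrm{H})\,\mathcal{B}(\textrm{H}\to\textrm{Z}\upgamma)\right]_{\textrm{SM}} \approx 0.21\ \textrm{pb}/2.4 \approx 0.09$$ pb, so the quoted μ = 2.4 ± 0.9 and 0.21 ± 0.08 pb are mutually consistent.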

     
  3. Abstract

    We present the first unquenched lattice-QCD calculation of the form factors for the decay $$B\rightarrow D^*\ell\nu$$ at nonzero recoil. Our analysis includes 15 MILC ensembles with $$N_f = 2+1$$ flavors of asqtad sea quarks, with a strange quark mass close to its physical mass. The lattice spacings range from $$a \approx 0.15$$ fm down to 0.045 fm, while the ratio between the light- and the strange-quark masses ranges from 0.05 to 0.4. The valence b and c quarks are treated using the Wilson-clover action with the Fermilab interpretation, whereas the light sector employs asqtad staggered fermions. We extrapolate our results to the physical point in the continuum limit using rooted staggered heavy-light meson chiral perturbation theory. Then we apply a model-independent parametrization to extend the form factors to the full kinematic range. With this parametrization we perform a joint lattice-QCD/experiment fit using several experimental datasets to determine the CKM matrix element $$|V_{cb}|$$. We obtain $$\left|V_{cb}\right| = (38.40 \pm 0.68_{\text{th}} \pm 0.34_{\text{exp}} \pm 0.18_{\text{EM}})\times 10^{-3}$$. The first error is theoretical, the second comes from experiment, and the last one includes electromagnetic and electroweak uncertainties, with an overall $$\chi^2/\text{dof} = 126/84$$, which illustrates the tensions between the experimental data sets, and between theory and experiment. This result is in agreement with previous exclusive determinations, but the tension with the inclusive determination remains. Finally, we integrate the differential decay rate obtained solely from lattice data to predict $$R(D^*) = 0.265 \pm 0.013$$, which confirms the current tension between theory and experiment.
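
    For readers outside the subfield, the ratio quoted at the end is the standard lepton-flavor-universality observable (a textbook definition, not spelled out in the abstract itself): $$R(D^*) = \mathcal{B}(B\to D^*\tau\nu_\tau)/\mathcal{B}(B\to D^*\ell\nu_\ell)$$ with $$\ell = e, \mu$$.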

     
  4. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text{noisy}}$$ and the corresponding noiseless output distribution $$p_{\text{ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $$F$$ that measures this correlation behaves as $$F = \exp(-2s\epsilon \pm O(s\epsilon^2))$$, where $$\epsilon$$ is the probability of error per circuit location and $$s$$ is the number of two-qubit gates. Furthermore, if the noise is incoherent—for example, depolarizing or dephasing noise—the total variation distance between the noisy output distribution $$p_{\text{noisy}}$$ and the uniform distribution $$p_{\text{unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon\sqrt{s})$$. Thus, the "white-noise approximation" is meaningful when $$\epsilon\sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s \ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega(n\log(n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon^{-1} \ge \tilde{\Omega}(n)$$, which is needed to ensure errors are scrambled faster than $$F$$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
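
    A minimal numerical sketch of the white-noise approximation stated above. The exponential form of $$F$$ and the mixture formula come directly from the abstract; the sampled "ideal" distribution below is a stand-in with Porter-Thomas-like weights, not an actual random-circuit simulation, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                 # qubits -> 2**n outcome bitstrings
s = 200                # number of two-qubit gates
eps = 0.002            # error probability per circuit location

# Fidelity proxy from the abstract, to leading order: F = exp(-2 * s * eps)
F = np.exp(-2 * s * eps)

# Stand-in for the ideal output distribution; a real p_ideal would come
# from simulating the random circuit itself.
w = rng.exponential(size=2**n)
p_ideal = w / w.sum()

p_unif = np.full(2**n, 1 / 2**n)

# White-noise approximation: p_noisy ~ F * p_ideal + (1 - F) * p_unif
p_noisy = F * p_ideal + (1 - F) * p_unif

print(f"F = {F:.3f}, sum(p_noisy) = {p_noisy.sum():.6f}")
```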

     
  5. Abstract

    Negative correlations in the sequential evolution of interspike intervals (ISIs) are a signature of memory in neuronal spike-trains. They provide coding benefits including firing-rate stabilization, improved detectability of weak sensory signals, and enhanced transmission of information by improving signal-to-noise ratio. Primary electrosensory afferent spike-trains in weakly electric fish fall into two categories based on the pattern of ISI correlations: non-bursting units have negative correlations which remain negative but decay to zero with increasing lags (Type I ISI correlations), and bursting units have oscillatory (alternating-sign) correlations which damp to zero with increasing lags (Type II ISI correlations). Here, we predict and match observed ISI correlations in these afferents using a stochastic dynamic threshold model. We determine the ISI correlation function as a function of an arbitrary discrete noise correlation function $$\mathbf{R}_k$$, where $$k$$ is a multiple of the mean ISI. The function permits forward and inverse calculations of the correlation function. Both types of correlation functions can be generated by adding colored noise to the spike threshold, with Type I correlations generated with slow noise and Type II correlations generated with fast noise. A first-order autoregressive (AR) process with a single parameter is sufficient to predict and accurately match both types of afferent ISI correlation functions, with the type being determined by the sign of the AR parameter. The predicted and experimentally observed correlations are in geometric progression. The theory predicts that the limiting sum of ISI correlations is $$-0.5$$, yielding a perfect DC block in the power spectrum of the spike train. Observed ISI correlations from afferents have a limiting sum that is slightly larger at $$-0.475 \pm 0.04$$ (mean ± s.d.). We conclude that the underlying process for generating ISIs may be a simple combination of low-order AR and moving-average processes and discuss the results from the perspective of optimal coding.
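
    A toy Python sketch of the geometric-progression property mentioned above: for a sequence whose fluctuations follow a first-order AR process with parameter $$\phi$$, the serial correlation coefficient at lag $$j$$ is $$\phi^j$$, monotonic for $$\phi > 0$$ and alternating in sign for $$\phi < 0$$. This is only an illustration of how the sign of a single AR parameter flips the correlation pattern; it is not the authors' stochastic dynamic threshold model, and the ISI construction here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def scc(isis, max_lag=8):
    """Serial correlation coefficients of an ISI sequence at lags 1..max_lag."""
    x = np.asarray(isis) - np.mean(isis)
    denom = np.dot(x, x)
    return [np.dot(x[:-j], x[j:]) / denom for j in range(1, max_lag + 1)]

def ar1_isis(phi, n=200_000, mean_isi=10.0, sigma=0.5):
    """Toy ISI sequence whose fluctuations follow a first-order AR process.

    Illustrates geometric-progression correlations (rho_j = phi**j); this is
    not the paper's stochastic dynamic threshold model.
    """
    eta = np.zeros(n)
    noise = rng.normal(scale=sigma, size=n)
    for k in range(1, n):
        eta[k] = phi * eta[k - 1] + noise[k]
    return mean_isi + eta

for phi in (0.6, -0.6):   # sign of the AR parameter sets the correlation type
    rho = scc(ar1_isis(phi), max_lag=4)
    print(f"phi={phi:+.1f}:", [f"{r:+.3f}" for r in rho],
          "vs phi**j:", [f"{phi**j:+.3f}" for j in range(1, 5)])
```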

     