Title: Stacked LSTM based deep recurrent neural network with Kalman smoothing for blood glucose prediction
Abstract

Background: Blood glucose (BG) management is crucial for type-1 diabetes (T1D) patients, creating the need for reliable artificial pancreas and insulin infusion systems. In recent years, deep learning techniques have been utilized for more accurate BG level prediction. However, continuous glucose monitoring (CGM) readings are susceptible to sensor errors. As a result, inaccurate CGM readings degrade BG prediction and make it unreliable, even when the most optimal machine learning model is used.

Methods: In this work, we propose a novel approach to predicting blood glucose level with a stacked long short-term memory (LSTM) based deep recurrent neural network (RNN) model that accounts for sensor faults. We use the Kalman smoothing technique to correct inaccurate CGM readings caused by sensor error.

Results: For the OhioT1DM (2018) dataset, containing eight weeks' data from six different patients, we achieve an average RMSE of 6.45 and 17.24 mg/dL for 30 min and 60 min of prediction horizon (PH), respectively.

Conclusions: To the best of our knowledge, this is the leading average prediction accuracy for the OhioT1DM dataset. Different physiological information, e.g., Kalman-smoothed CGM data, carbohydrates from meals, bolus insulin, and cumulative step counts in a fixed time interval, is crafted into meaningful features used as input to the model. The goal of our approach is to lower the difference between the predicted CGM values and the fingerstick blood glucose readings (the ground truth). Our results indicate that the proposed approach is feasible for more reliable BG forecasting and might improve the performance of artificial pancreas and insulin infusion systems for T1D management.
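The sensor-correction step can be illustrated with a minimal sketch: a random-walk Kalman filter run forward over the CGM series, followed by a Rauch-Tung-Striebel (RTS) backward smoothing pass. The state model and the noise variances `q` and `r` are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def kalman_rts_smooth(z, q=0.1, r=4.0):
    """Smooth a 1-D CGM series (mg/dL) with a random-walk Kalman
    filter plus an RTS backward pass.
    q: process noise variance, r: measurement noise variance
    (illustrative values, not from the cited study)."""
    n = len(z)
    x_f = np.zeros(n)  # filtered state means
    p_f = np.zeros(n)  # filtered state variances
    x_p = np.zeros(n)  # one-step predicted means
    p_p = np.zeros(n)  # one-step predicted variances
    x_f[0], p_f[0] = z[0], r
    x_p[0], p_p[0] = z[0], r
    for k in range(1, n):
        # predict: state transition is identity for a random walk
        x_p[k] = x_f[k - 1]
        p_p[k] = p_f[k - 1] + q
        # update with the noisy CGM measurement
        gain = p_p[k] / (p_p[k] + r)
        x_f[k] = x_p[k] + gain * (z[k] - x_p[k])
        p_f[k] = (1 - gain) * p_p[k]
    # RTS backward smoothing pass
    x_s = x_f.copy()
    for k in range(n - 2, -1, -1):
        c = p_f[k] / p_p[k + 1]
        x_s[k] = x_f[k] + c * (x_s[k + 1] - x_f[k])
    return x_s
```

In practice, the smoothed series (rather than the raw CGM readings) would then be fed to the LSTM model as one of its input features.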
Award ID(s):
1946231
NSF-PAR ID:
10319318
Journal Name:
BMC Medical Informatics and Decision Making
Volume:
21
Issue:
1
ISSN:
1472-6947
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we provide an approach to data-driven control for artificial pancreas systems by learning neural network models of human insulin-glucose physiology from available patient data and using a mixed-integer optimization approach to control blood glucose levels in real time using the inferred models. First, our approach learns neural networks to predict future blood glucose values from given data on insulin infusion and its resulting effects on blood glucose levels. However, to provide guarantees on the resulting model, we use quantile regression to fit multiple neural networks that predict upper and lower quantiles of the future blood glucose levels, in addition to the mean. Using the inferred set of neural networks, we formulate a model-predictive control scheme that adjusts both basal and bolus insulin delivery to ensure that the risks of harmful hypoglycemia and hyperglycemia are bounded using the quantile models, while the mean prediction stays as close as possible to the desired target. We discuss how this scheme can handle disturbances from large unannounced meals, as well as infeasibilities that result from situations where the uncertainties in future glucose predictions are too high. We experimentally evaluate this approach on data obtained from a set of 17 patients over a course of 40 nights per patient. Furthermore, we also test our approach using neural networks obtained from virtual patient models available through the UVA-Padova simulator for type-1 diabetes.
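The quantile networks described above are typically trained with the pinball (quantile) loss; a minimal sketch, with `tau` as the target quantile (the exact loss and network architecture used in that work are not specified here):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: under-prediction is penalized with
    weight tau and over-prediction with weight 1 - tau, so a model
    minimizing it estimates the tau-quantile of the target."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))
```

For an upper quantile (e.g. tau = 0.9), under-predicting glucose is penalized nine times more heavily than over-predicting it, which pushes the fitted curve toward the upper envelope of the data.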
  2. The management of blood glucose levels is critical in the care of Type-1 diabetes subjects. At the extremes, high or low blood glucose levels can be fatal. To avoid such adverse events, wearable technologies have been developed and adopted that continuously monitor blood glucose and administer insulin. This technology allows subjects to easily track their blood glucose levels and intervene early without the need for hospital visits. The data collected from these sensors is an excellent candidate for the application of machine learning algorithms to learn patterns and predict future values of blood glucose levels. In this study, we developed artificial neural network algorithms based on the OhioT1DM training dataset, which contains data on 12 subjects. The dataset contains features such as subject identifiers, continuous glucose monitoring data obtained at 5-minute intervals, insulin infusion rate, etc. We developed individual models, including LSTM, BiLSTM, convolutional LSTM, TCN, and sequence-to-sequence models. We also developed transfer learning models based on the most important features of the data, as identified by a gradient boosting algorithm. These models were evaluated on the OhioT1DM test dataset, which contains data from six unique subjects. The model with the lowest RMSE values for the 30- and 60-minute horizons was selected as the best-performing model. Our results show that the sequence-to-sequence BiLSTM performed better than the other models. This work demonstrates the potential of artificial neural network algorithms in the management of Type-1 diabetes.
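Before any of these sequence models can be trained, the 5-minute CGM stream must be cut into supervised (window, target) pairs. A minimal sketch, with the window length and horizon chosen purely for illustration (12 past readings = 1 hour of history; 6 steps ahead = a 30-minute horizon), not taken from the cited study:

```python
import numpy as np

def make_windows(series, n_in=12, n_ahead=6):
    """Turn a 1-D CGM series sampled every 5 minutes into
    (input window, target) pairs: n_in past readings predicting
    the reading n_ahead steps after the window ends."""
    X, y = [], []
    for t in range(len(series) - n_in - n_ahead + 1):
        X.append(series[t:t + n_in])            # 1 hour of history
        y.append(series[t + n_in + n_ahead - 1])  # 30 min ahead
    return np.asarray(X), np.asarray(y)
```

The resulting `X` array (samples x timesteps) only needs a feature axis appended to match the 3-D input shape that recurrent layers expect.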
  3. The problem of real-time prediction of blood glucose (BG) levels based on readings from a continuous glucose monitoring (CGM) device is of great importance in diabetes care and has therefore attracted a lot of research in recent years, especially based on machine learning. An accurate prediction with a 30-, 60-, or 90-min prediction horizon has the potential of saving millions of dollars in emergency care costs. In this paper, we treat the problem as one of function approximation, where the value of the BG level at time t + h (where h is the prediction horizon) is considered to be an unknown function of d readings prior to time t. This unknown function may be supported in particular on some unknown submanifold of the d-dimensional Euclidean space. While manifold learning is classically done in a semi-supervised setting, where the entire data has to be known in advance, we use recent ideas to achieve an accurate function approximation in a supervised setting; i.e., we construct a model for the target function. We use the state-of-the-art clinically relevant PRED-EGA grid to evaluate our results, and demonstrate that for a real-life dataset, our method performs better than a standard deep network, especially in hypoglycemic and hyperglycemic regimes. One noteworthy aspect of this work is that the training data and test data may come from different distributions.
  4. Neural networks present a useful framework for learning complex dynamics, and are increasingly being considered as components to closed loop predictive control algorithms. However, if they are to be utilized in such safety-critical advisory settings, they must be provably "conformant" to the governing scientific (biological, chemical, physical) laws which underlie the modeled process. Unfortunately, this is not easily guaranteed as neural network models are prone to learn patterns which are artifacts of the conditions under which the training data is collected, which may not necessarily conform to underlying physiological laws. In this work, we utilize a formal range-propagation based approach for checking whether neural network models for predicting future blood glucose levels of individuals with type-1 diabetes are monotonic in terms of their insulin inputs. These networks are increasingly part of closed loop predictive control algorithms for "artificial pancreas" devices which automate control of insulin delivery for individuals with type-1 diabetes. Our approach considers a key property that blood glucose levels must be monotonically decreasing with increasing insulin inputs to the model. Multiple representative neural network models for blood glucose prediction are trained and tested on real patient data, and conformance is tested through our verification approach. We observe that standard approaches to training networks result in models which violate the core relationship between insulin inputs and glucose levels, despite having high prediction accuracy. We propose an approach that can learn conformant models without much loss in accuracy. 
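A lightweight, purely empirical counterpart to the formal range-propagation check described above is to sweep the insulin input over increasing doses and count non-monotone steps in the model's predictions. A sketch under stated assumptions (the function and argument names are hypothetical, and sampling points is strictly weaker than proving the property over a whole input range):

```python
import numpy as np

def insulin_monotonicity_violations(predict, base_features, insulin_idx, doses):
    """Probe whether predicted glucose is non-increasing in the
    insulin feature: hold all other features fixed, sweep the
    insulin dose upward, and count adjacent prediction pairs
    where the predicted glucose rises instead of falling."""
    preds = []
    for d in sorted(doses):
        x = base_features.copy()
        x[insulin_idx] = d
        preds.append(predict(x))
    return sum(1 for a, b in zip(preds, preds[1:]) if b > a)
```

A count of zero on a dense dose grid is only evidence of conformance, not a proof; the formal verification approach in the cited work bounds the behavior over entire input ranges.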
  5. Continuous monitoring of blood glucose (BG) levels is a key aspect of diabetes management. Patients with Type-1 diabetes (T1D) require an effective tool to monitor these levels in order to make appropriate decisions regarding insulin administration and food intake to keep BG levels in the target range. Effectively and accurately predicting future BG levels multiple time steps ahead benefits a patient with diabetes by helping them decrease the risks of extremes in BG, including hypo- and hyperglycemia. In this study, we present a novel multi-component deep learning model that predicts BG levels in a multi-step look-ahead fashion. The model is evaluated both quantitatively and qualitatively on actual blood glucose data for 97 patients. For a prediction horizon (PH) of 30 min, the average values for root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and normalized root mean squared error (NRMSE) are 23.22 ± 6.39 mg/dL, 16.77 ± 4.87 mg/dL, 12.84 ± 3.68, and 0.08 ± 0.01, respectively. When Clarke and Parkes error grid analyses were performed comparing predicted BG with actual BG, the results showed average percentages of points in Zone A of 80.17 ± 9.20 and 84.81 ± 6.11, respectively. We offer this tool as a mechanism to enhance the predictive capabilities of algorithms for patients with T1D.

     
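The four error metrics reported in these abstracts can be computed directly from predicted and true BG series. A minimal sketch; note that NRMSE is normalized here by the range of the true values, which is one common convention, and the cited studies may normalize differently:

```python
import numpy as np

def bg_metrics(y_true, y_pred):
    """Compute RMSE, MAE, MAPE (%), and range-normalized RMSE
    for paired true/predicted BG series in mg/dL."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err / y_true)) * 100.0)
    nrmse = rmse / float(y_true.max() - y_true.min())
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "NRMSE": nrmse}
```

These scalar metrics are usually complemented by Clarke or Parkes error grid analysis, since clinically dangerous errors (e.g. missing hypoglycemia) are not captured by averages alone.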