Title: Forecasting Gas Usage for Big Buildings Using Generalized Additive Models and Deep Learning
The time series behavior of gas consumption is highly irregular, non-stationary, and volatile because it depends on the weather and on users' habits and lifestyles. This complicates modeling and forecasting gas consumption with most existing time series techniques, especially when missing values and outliers are present. To demonstrate and overcome these problems, we investigate two approaches to modeling gas consumption: Generalized Additive Models (GAM) and Long Short-Term Memory (LSTM) networks. We perform our evaluations on two building datasets from two different continents. We present the influence of each selected feature, the tuning parameters, and the characteristics of the gas consumption on forecasting ability. We compare the performance of GAM and LSTM with other state-of-the-art forecasting approaches. We show that LSTM outperforms GAM and the other existing approaches; however, GAM provides more interpretable results for building management systems (BMS).
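The additive-model side of the comparison can be illustrated with a toy example. The sketch below is hypothetical and not the paper's code: it fits synthetic gas usage as a sum of smooth functions of temperature and hour of day, with a small polynomial basis standing in for GAM spline smoothers; all variable names and the generated data are assumptions.

```python
import numpy as np

# Hypothetical illustration, not the paper's implementation: an additive
# model y = f_temp(temperature) + f_hour(hour), with each smooth term
# approximated by a small polynomial basis and fit jointly by ordinary
# least squares -- a crude stand-in for GAM spline smoothers.

rng = np.random.default_rng(0)
n = 500
temp = rng.uniform(-5, 25, n)                  # outdoor temperature (C)
hour = rng.integers(0, 24, n).astype(float)    # hour of day
# Synthetic gas usage: more gas when cold, plus a daily cycle and noise.
y = 50 - 1.5 * temp + 10 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 2, n)

def basis(x, degree=5):
    """Polynomial basis columns x, x^2, ..., x^degree (no intercept)."""
    x = (x - x.min()) / (x.max() - x.min())    # scale to [0, 1] for conditioning
    return np.column_stack([x ** d for d in range(1, degree + 1)])

# Design matrix: intercept plus one basis block per additive term.
X = np.column_stack([np.ones(n), basis(temp), basis(hour)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = float(np.sqrt(np.mean((y - X @ coef) ** 2)))
print(f"in-sample RMSE: {rmse:.2f}")           # roughly the noise level (2)
```

Because each term enters the prediction additively, the fitted contribution of temperature or hour can be plotted separately, which is the interpretability advantage the abstract attributes to GAM.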
Award ID(s):
1544687
PAR ID:
10073257
Journal Name:
2018 IEEE International Conference on Smart Computing (SMARTCOMP)
Page Range / eLocation ID:
203 to 210
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deep neural network models, especially Long Short-Term Memory (LSTM), have shown great success in analyzing Electronic Health Records (EHRs) due to their ability to capture temporal dependencies in time series data. When applying deep learning models to EHRs, we are generally confronted with two major challenges: a high rate of missingness and time irregularity. Motivated by the original PACIFIER framework, which utilized matrix decomposition for data imputation, we applied and further extended it with three components: forecasting future events, a time-aware mechanism, and a subgroup basis approach. We evaluated the proposed framework on real-world EHRs, consisting of 52,919 visits and 4,224,567 events, on the task of early prediction of septic shock. We compared our work against multiple baselines, including the original PACIFIER using both LSTM and Time-aware LSTM (T-LSTM). Experimental results showed that our proposed framework significantly outperformed all competitive baseline approaches. More importantly, the interpretable latent patterns extracted from subgroups could shed some light for clinicians on the progression of septic shock patients. 
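The matrix-decomposition imputation idea can be sketched in a few lines. This is a hypothetical illustration in the spirit of low-rank completion, not PACIFIER's actual algorithm: missing entries are filled by alternating between a rank-r truncated SVD of the current estimate and restoring the observed entries.

```python
import numpy as np

# Hypothetical sketch of matrix-decomposition imputation (not PACIFIER's
# actual method): iterative low-rank SVD completion of a matrix with
# missing entries, on synthetic rank-2 data.

rng = np.random.default_rng(5)
true = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))  # rank-2 data
mask = rng.random(true.shape) < 0.3                         # ~30% missing
obs = np.where(mask, np.nan, true)

est = np.where(mask, 0.0, obs)                   # initialize missing as 0
for _ in range(50):
    U, s, Vt = np.linalg.svd(est, full_matrices=False)
    low_rank = (U[:, :2] * s[:2]) @ Vt[:2]       # rank-2 projection
    est = np.where(mask, low_rank, obs)          # keep observed values fixed

err = float(np.abs((est - true)[mask]).mean())
print(f"mean abs imputation error on missing entries: {err:.3f}")
```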
  2. Forecasting time series data is an important subject in economics, business, and finance. Traditionally, several techniques exist to forecast the next lag of a time series, such as the univariate Autoregressive (AR) model, the univariate Moving Average (MA) model, Simple Exponential Smoothing (SES), and, most notably, the Autoregressive Integrated Moving Average (ARIMA) model with its many variations. In particular, the ARIMA model has demonstrated strong precision and accuracy in predicting the next lags of a time series. With the recent advancement in computational power and, more importantly, the development of more advanced machine learning approaches such as deep learning, new algorithms have been developed to analyze and forecast time series data. The research question investigated in this article is whether, and how, newly developed deep learning-based algorithms for forecasting time series data, such as Long Short-Term Memory (LSTM), are superior to the traditional algorithms. The empirical studies conducted and reported in this article show that deep learning-based algorithms such as LSTM outperform traditional algorithms such as the ARIMA model. More specifically, the average reduction in error rates obtained by LSTM was between 84 and 87 percent compared to ARIMA, indicating the superiority of LSTM. Furthermore, the number of training passes, known as "epochs" in deep learning, had no effect on the performance of the trained forecast model and exhibited truly random behavior. 
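The "next lag" forecasting task the abstract describes can be made concrete with a minimal example. The sketch below is hypothetical, not the article's experiments: it fits a plain AR(2) model by least squares, the autoregressive core that ARIMA builds on, and produces one-step-ahead forecasts on held-out synthetic data.

```python
import numpy as np

# Hypothetical sketch, not the article's experiments: one-step-ahead
# forecasting with an AR(2) model fit by least squares on synthetic data.

rng = np.random.default_rng(1)
T = 300
x = np.zeros(T)
for t in range(2, T):                      # true process: AR(2) + unit noise
    x[t] = 0.6 * x[t-1] - 0.2 * x[t-2] + rng.normal(0, 1)

p = 2
# Row i holds the p lagged values [x_{t-1}, x_{t-2}] for target x_t.
X = np.column_stack([x[p - 1 - j:T - 1 - j] for j in range(p)])
y = x[p:]
split = 250
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef                    # one-step forecasts on held-out data
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(f"estimated AR coefficients: {coef}, test RMSE: {rmse:.2f}")
```

An LSTM attacks the same task by learning the lag-to-next-value mapping from windows of past observations instead of assuming a fixed linear autoregressive form.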
  3. Predicting workload behavior during execution is essential for dynamic resource optimization of processor systems. Early studies used simple prediction algorithms such as history tables. More recently, researchers have applied advanced machine learning regression techniques. Workload prediction can be cast as a time series forecasting problem, an active research area with recent advances that have not been studied in the context of workload prediction. In this paper, we first perform a comparative study of representative time series forecasting techniques for predicting the dynamic workload of applications running on a CPU. We adapt state-of-the-art matrix profile and dynamic linear models (DLMs), not previously applied to workload prediction, and compare them against traditional SVM and LSTM models that have been popular for handling non-stationary data. We find that all time series forecasting models struggle to predict abrupt workload changes. These changes occur because workloads go through phases, and prior work has studied workload phase detection, classification, and prediction. We propose a novel approach that combines time series forecasting with phase prediction. We process each phase as a separate time series and train one forecasting model per phase. At runtime, forecasts from phase-specific models are selected and combined based on the predicted phase behavior. We apply our approach to forecasting SPEC workloads running on a state-of-the-art Intel machine. Our results show that an LSTM-based phase-aware predictor can forecast workload CPI with less than 8% mean absolute error while reducing CPI error by more than 12% on average compared to a non-phase-aware approach. 
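The one-model-per-phase idea can be sketched with simple stand-ins. The example below is illustrative only: each phase of a synthetic CPI-like trace gets its own AR(1) predictor (standing in for the paper's per-phase LSTM forecasters), and at "runtime" the forecast comes from the model matching the phase label, which is assumed known here rather than predicted.

```python
import numpy as np

# Illustrative sketch of phase-aware forecasting (not the paper's models):
# one simple one-step predictor per phase, selected by phase at runtime.
# Per-phase AR(1) stands in for an LSTM; phase labels are assumed known.

rng = np.random.default_rng(2)
# Synthetic CPI-like trace with two alternating phases of different levels.
phase = np.repeat([0, 1, 0, 1], 100)              # phase label per step
level = np.where(phase == 0, 1.0, 3.0)
cpi = level + rng.normal(0, 0.05, phase.size)

def fit_ar1(series):
    """Least-squares AR(1): returns (a, b) with x_t ~ a*x_{t-1} + b."""
    X = np.column_stack([series[:-1], np.ones(series.size - 1)])
    coef, *_ = np.linalg.lstsq(X, series[1:], rcond=None)
    return coef

models = {p: fit_ar1(cpi[phase == p]) for p in (0, 1)}

# One-step forecasts: pick the model for the phase of step t.
preds = np.array([models[phase[t]] @ [cpi[t-1], 1.0]
                  for t in range(1, cpi.size)])
mae = float(np.mean(np.abs(preds - cpi[1:])))
print(f"phase-aware one-step MAE: {mae:.3f}")
```

A single global model would have to average across the two levels; selecting a phase-specific model avoids exactly the abrupt-change errors the paper identifies.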
  4. Deep learning models have achieved remarkable accuracy for structural response modeling. However, these models depend heavily on having a sufficient amount of training data, which can be challenging and time-consuming to collect, and data-driven models sometimes struggle to adhere to physics constraints. Therefore, in this study, a physics-informed long short-term memory (PI-LSTM) network was applied to structural response modeling by incorporating physics constraints into deep learning. The physics constraints were modified to accommodate the characteristics of both linear and nonlinear cases. The PI-LSTM network, inspired by and compared with existing physics-informed deep learning models (PhyCNN and PhyLSTM), was validated using numerical simulation results for a single-degree-of-freedom (SDOF) system and experimental results from a six-story building. Additionally, the PI-LSTM network was thoroughly investigated and validated across four cases of the SDOF system and the numerical simulation results of the six-story building, in comparison with a regular LSTM. The results indicate that the PI-LSTM network outperformed the regular LSTM models in terms of accuracy. Furthermore, the PI-LSTM network exhibited a more concentrated and higher accuracy range when analyzing the results of both the SDOF system and the six-story building. These findings demonstrate that the PI-LSTM network is a reliable and efficient approach for structural response modeling. 
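The "physics constraint" ingredient can be made concrete for the SDOF case. The sketch below is hypothetical and is not the paper's PI-LSTM (nor PhyCNN/PhyLSTM): for an SDOF system m x'' + c x' + k x = -m a_g, a predicted displacement history is penalized both for mismatch with measured data and for violating the equation of motion, with derivatives estimated by finite differences; all parameter values are assumptions.

```python
import numpy as np

# Hypothetical sketch of a physics-informed loss for an SDOF system
# m*x'' + c*x' + k*x = -m*a_g (not the paper's PI-LSTM implementation).

m, c, k = 1.0, 0.4, 16.0          # assumed SDOF parameters
dt = 0.01
t = np.arange(0, 5, dt)
a_g = np.sin(2 * np.pi * t)       # toy ground acceleration

# "Measured" response via simple semi-implicit Euler integration.
x = np.zeros(t.size); v = np.zeros(t.size)
for i in range(t.size - 1):
    acc = (-m * a_g[i] - c * v[i] - k * x[i]) / m
    v[i+1] = v[i] + dt * acc
    x[i+1] = x[i] + dt * v[i+1]

def physics_residual(x_pred):
    """Finite-difference residual of m*x'' + c*x' + k*x + m*a_g."""
    xd = np.gradient(x_pred, dt)
    xdd = np.gradient(xd, dt)
    return m * xdd + c * xd + k * x_pred + m * a_g

def pi_loss(x_pred, x_data, lam=0.1):
    """Data-mismatch term plus weighted physics-violation term."""
    data_term = np.mean((x_pred - x_data) ** 2)
    phys_term = np.mean(physics_residual(x_pred) ** 2)
    return data_term + lam * phys_term

# The consistent response has small loss; a corrupted one is penalized
# heavily by the physics term even though its data mismatch is small.
good = pi_loss(x, x)
bad = pi_loss(x + 0.05 * rng_noise if (rng_noise := np.random.default_rng(3)
              .normal(size=x.size)) is not None else x, x)
print(good, bad)
```

In a trained network this loss would be minimized over the model's weights; the physics term acts as a regularizer when training data are scarce, which is the motivation the abstract gives.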
  5. Many coastal cities face frequent flooding from storm events, made worse by sea level rise and climate change. The groundwater table level in these low-relief coastal cities is an important, but often overlooked, factor in the recurrent flooding these locations face. Infiltration of stormwater and water intrusion due to tidal forcing can cause already shallow groundwater tables to rise quickly toward the land surface. This decreases available storage, which increases runoff, stormwater system loads, and flooding. Groundwater table forecasts, which could help inform the modeling and management of coastal flooding, are generally unavailable. This study explores two machine learning models, Long Short-Term Memory (LSTM) networks and Recurrent Neural Networks (RNNs), to model and forecast groundwater table response to storm events in the flood-prone coastal city of Norfolk, Virginia. To determine the effect of training data type on model accuracy, two types of datasets, (i) the continuous time series and (ii) a dataset of only storm events, created from observed groundwater table, rainfall, and sea level data from 2010–2018, were used to train and test the models. Additionally, a real-time groundwater table forecasting scenario was carried out to compare the models' abilities to predict groundwater table levels given forecast rainfall and sea level as input data. When modeling the groundwater table with observed data, LSTM networks were found to have more predictive skill than RNNs (root mean squared error (RMSE) of 0.09 m versus 0.14 m, respectively). The real-time forecast scenario showed that models trained only on storm event data outperformed models trained on the continuous time series data (RMSE of 0.07 m versus 0.66 m, respectively) and that LSTM outperformed RNN models.
Because models trained with the continuous time series data had much higher RMSE values, they were not suitable for predicting the groundwater table in the real-time scenario when using forecast input data. These results demonstrate the first use of LSTM networks to create hourly forecasts of the groundwater table in a coastal city and show they are well suited for creating operational forecasts in real time. As groundwater table levels increase due to sea level rise, forecasts of the groundwater table will become an increasingly valuable part of coastal flood modeling and management. 
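The data preparation step behind such a forecaster can be sketched independently of the model. The example below is hypothetical, not the study's code: it turns hourly rainfall, sea-level, and groundwater observations into the sliding-window (inputs, target) pairs an LSTM or RNN would train on, predicting the groundwater table one step ahead from the previous `lookback` hours; all series here are synthetic.

```python
import numpy as np

# Hypothetical sketch (not the study's code): building sliding-window
# training pairs for a sequence model from hourly rainfall, sea level,
# and groundwater table observations.

def make_windows(features, target, lookback=24):
    """Stack the last `lookback` hours of features to predict target[t]."""
    X, y = [], []
    for t in range(lookback, len(target)):
        X.append(features[t - lookback:t])
        y.append(target[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(4)
hours = 200
rain = rng.exponential(0.5, hours)                   # hourly rainfall
sea = np.sin(np.arange(hours) * 2 * np.pi / 12.42)   # semi-diurnal tide
gwt = 0.001 * np.cumsum(rain) + 0.1 * sea            # toy groundwater table

feats = np.column_stack([rain, sea, gwt])
X, y = make_windows(feats, gwt, lookback=24)
print(X.shape, y.shape)   # (176, 24, 3) (176,)
```

The resulting `(samples, timesteps, features)` arrays are the standard input shape for recurrent sequence models; restricting the rows to storm events yields the event-only dataset the study found more effective for real-time forecasting.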