

Title: Exploration of tensor decomposition applied to commercial building baseline estimation
Baseline estimation is a critical task for commercial buildings that participate in demand response programs and need to assess the impact of their strategies. The problem is to predict what the power profile would have been had the demand response event not taken place. This paper explores the use of tensor decomposition in baseline estimation. We apply the method to submetered fan power data from demand response experiments that were run to assess a fast demand response strategy expected to primarily impact the fans. Baselining this fan power data is critical for evaluating the results, but doing so presents new challenges not readily addressed by existing techniques designed primarily for baselining whole building electric loads. We find that tensor decomposition of the fan power data identifies components that capture both dominant daily patterns and demand response events, and that are generally more interpretable than those found by principal component analysis. We conclude by discussing how these components and related techniques can aid in developing new baseline models.
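The abstract does not spell out the factorization used; as a minimal, hypothetical sketch of the general idea, the snippet below folds synthetic submetered fan power into a weeks × weekdays × time-of-day tensor and fits a CP decomposition by alternating least squares in plain NumPy. The tensor layout, rank, and data are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=300, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])  # other modes, in increasing order
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors

# Hypothetical fan-power tensor: 8 weeks x 5 weekdays x 96 quarter-hour slots,
# built from a smooth daily HVAC profile plus an afternoon DR-like dip on some weeks.
t = np.linspace(0, 1, 96)
daily = np.exp(-((t - 0.5) ** 2) / 0.05)               # midday fan peak
dr_dip = -0.4 * ((t > 0.6) & (t < 0.7)).astype(float)  # event window
T = (np.ones((8, 5))[:, :, None] * daily[None, None, :]
     + np.outer(np.arange(8) % 2, np.ones(5))[:, :, None] * dr_dip[None, None, :])
week_f, day_f, time_f = cp_als(T, rank=2)
print([f.shape for f in (week_f, day_f, time_f)])  # [(8, 2), (5, 2), (96, 2)]
```

With real data, one could inspect the time-mode factors for the daily patterns and event signatures discussed above, and form a candidate baseline by reconstructing the tensor with the event-aligned component removed.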
Award ID(s):
1838179
NSF-PAR ID:
10205312
Journal Name:
IEEE Global Conference on Signal and Information Processing (GlobalSIP)
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    3D printing processes have enabled energy storage devices with complex structures, high energy density, and high power density. Among these processes, Freeze Nano Printing (FNP) has emerged as a promising option. However, quality problems are among the biggest barriers for FNP. In particular, the droplet solidification time in FNP governs the thermal distribution, and subsequently determines product solidification, formation, and quality. To describe the solidification time, a physics-based heat transfer model is built, but it is computationally intensive. The objective of this work is to build an efficient emulator for the physical model. Several challenges remain unaddressed: 1) the solidification time at various locations, which is a multi-dimensional array response, needs to be modeled; 2) the construction and evaluation of the emulator at new process settings need to be quick and accurate. We integrate joint tensor decomposition and the Nearest Neighbor Gaussian Process (NNGP) to construct an efficient multi-dimensional array response emulator with process settings as inputs. Specifically, structured joint tensor decomposition decomposes the multi-dimensional array responses at various process settings into setting-specific core tensors and shared low-dimensional factorization matrices. Then, each independent entry of the core tensor is modeled with an NNGP, which addresses the computationally intensive model estimation problem by sampling only the nearest neighboring samples. Finally, tensor reconstruction is performed to predict the solidification time at new process settings. The proposed framework is demonstrated by emulating the physical model of FNP and compared with alternative tensor (multi-dimensional array) regression models.
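The setting-specific-cores-with-shared-factors structure described above can be illustrated with a toy HOSVD-style sketch in plain NumPy. This is not the paper's structured joint decomposition, and the NNGP modeling of core entries is omitted; `joint_decompose`, the shapes, and the ranks are illustrative assumptions:

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def joint_decompose(tensors, ranks):
    """Shared factor matrices across settings, plus one core tensor per setting."""
    factors = []
    for mode, r in enumerate(ranks):
        # shared factors come from the stacked mode-m unfoldings of all settings
        stacked = np.concatenate([unfold(T, mode) for T in tensors], axis=1)
        U, _, _ = np.linalg.svd(stacked, full_matrices=False)
        factors.append(U[:, :r])
    cores = []
    for T in tensors:
        G = T
        for mode, U in enumerate(factors):
            G = mode_product(G, U.T, mode)  # project onto the shared subspaces
        cores.append(G)
    return cores, factors

def reconstruct(core, factors):
    """Rebuild a full tensor from its core and the shared factors."""
    T = core
    for mode, U in enumerate(factors):
        T = mode_product(T, U, mode)
    return T

# Four hypothetical "process settings" sharing the same low-dimensional subspaces.
rng = np.random.default_rng(0)
shared_true = [np.linalg.qr(rng.standard_normal((n, r)))[0]
               for n, r in [(10, 3), (8, 2), (24, 4)]]
tensors = [reconstruct(rng.standard_normal((3, 2, 4)), shared_true) for _ in range(4)]
cores, factors = joint_decompose(tensors, (3, 2, 4))
print(cores[0].shape)  # (3, 2, 4)
```

In the paper's framework, each core entry would then be modeled across settings by an NNGP; here the decomposition step alone is shown.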
  2. Abstract

    Damping estimation from laboratory, full-scale, or computational simulation data is critical for predicting the response of structures under wind, wave, or earthquake effects. A virtual dynamic shaker (VDS)-based scheme was recently developed for system identification (SI) of structures from (weakly) stationary responses, extracting frequency and damping features; it offers the added advantage of basic simplicity over other schemes. While the VDS has shown performance equivalent to other popular SI schemes, it is based on the assumption of global flatness of the load spectrum (i.e., a white noise assumption), as used in most other SI schemes, which may not always be appropriate in practical applications. In addition, it is restricted to data from a single-degree-of-freedom (SDOF) response (or a unimodal response) to obtain accurate modal characteristics. To address these potential shortcomings, this study revisits the VDS scheme and offers an enhancement that invokes a local flatness assumption (EVDS) to improve the damping estimate, assuming the load spectrum is flat only around the natural frequencies of the desired modes. A new formulation involving the effect of ground-motion-induced vertical vibrations of a building is also introduced for both the VDS and the EVDS. Extensive examples using numerical simulation and full-scale data, including a comparison with other popular SI schemes, demonstrate the efficacy of the proposed EVDS scheme. To facilitate expeditious and convenient use of the proposed EVDS as well as the VDS, this study has implemented a web-enabled framework, named VDS-Damping, for on-demand and on-the-fly applications through user-friendly input and result interfaces. A recently developed mode decomposition scheme, state space-based mode decomposition (SSBMD), is implemented in the framework to assist in analyzing output from multiple modes, eliminating the restriction to SDOF systems. Accordingly, the SSBMD can also serve as a stand-alone mode decomposition tool to separate the response in each mode. This framework enables users to estimate damping on the fly by simply uploading their data.
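The VDS/EVDS formulation itself is not given in this summary; as a generic, hypothetical illustration of the kind of spectral damping estimation such schemes refine, the sketch below applies the classical half-power bandwidth method to the magnitude-squared frequency response of an SDOF system with a known 2% damping ratio:

```python
import numpy as np

def half_power_damping(freqs, psd):
    """Estimate the damping ratio of a resonant peak via the half-power bandwidth."""
    peak_idx = np.argmax(psd)
    fn = freqs[peak_idx]                            # estimated natural frequency
    above = np.where(psd >= psd[peak_idx] / 2.0)[0]
    f1, f2 = freqs[above[0]], freqs[above[-1]]      # half-power frequencies
    return (f2 - f1) / (2.0 * fn)

# SDOF magnitude-squared frequency response, natural frequency fn = 1, zeta = 2%
zeta_true = 0.02
f = np.linspace(0.5, 1.5, 200_001)
psd = 1.0 / ((1.0 - f**2) ** 2 + (2.0 * zeta_true * f) ** 2)
print(round(half_power_damping(f, psd), 4))  # → 0.02
```

This assumes the excitation spectrum is flat near the peak, which is exactly the local flatness assumption the EVDS invokes; with a non-flat load spectrum the raw half-power estimate would be biased.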

     
  3. This paper studies how to integrate rider mode preferences into the design of on-demand multimodal transit systems (ODMTSs). It is motivated by a common worry in transit agencies that an ODMTS may be poorly designed if the latent demand, that is, new riders adopting the system, is not captured. This paper proposes a bilevel optimization model to address this challenge, in which the leader problem determines the ODMTS design and the follower problems identify the most cost-efficient and convenient route for riders under the chosen design. The leader model contains a choice model for every potential rider that determines whether the rider adopts the ODMTS given her proposed route. To solve the bilevel optimization model, the paper proposes an exact decomposition method that includes Benders optimal cuts and no-good cuts to ensure the consistency of the rider choices in the leader and follower problems. Moreover, to improve computational efficiency, the paper proposes upper and lower bounds on trip durations for the follower problems, valid inequalities that strengthen the no-good cuts, and problem-specific preprocessing techniques that reduce the problem size. The proposed method is validated with an extensive computational study on a real data set from the Ann Arbor Area Transportation Authority, the transit agency for the broader Ann Arbor and Ypsilanti region in Michigan. The study considers the impact of a number of factors, including the price of on-demand shuttles, the number of hubs, and transit accessibility criteria. The designed ODMTSs feature high adoption rates and significantly shorter trip durations compared with the existing transit system, and highlight the benefits of ensuring access for low-income riders. Finally, the computational study demonstrates the efficiency of the decomposition method for the case study and the benefits of the computational enhancements, which improve on the baseline method by several orders of magnitude.
Funding: This research was partly supported by National Science Foundation [Leap HI Proposal NSF-1854684] and the Department of Energy [Research Award 7F-30154]. 
  4. The record-breaking performance of deep neural networks (DNNs) comes with heavy parameter budgets, which require external dynamic random access memory (DRAM) for storage. The prohibitive energy cost of DRAM accesses makes DNN deployment nontrivial on resource-constrained devices, calling for minimizing the movement of weights and data in order to improve energy efficiency. Driven by this critical bottleneck, we present SmartDeal, a hardware-friendly algorithm framework that trades higher-cost memory storage/access for lower-cost computation, in order to aggressively boost storage and energy efficiency for both DNN inference and training. The core technique of SmartDeal is a novel DNN weight matrix decomposition framework with structural constraints on each matrix factor, carefully crafted to unleash the hardware-aware efficiency potential. Specifically, we decompose each weight tensor as the product of a small basis matrix and a large, structurally sparse coefficient matrix whose nonzero elements are quantized to powers of two. The resulting sparse and readily quantized DNNs enjoy greatly reduced energy consumption in data movement as well as weight storage, while incurring minimal overhead to recover the original weights, thanks to the sparse bit-operations and cost-favorable computations required. Beyond inference, we take another leap and embrace energy-efficient training by introducing several customized techniques to address the unique roadblocks that arise in training while preserving the SmartDeal structures. We also design a dedicated hardware accelerator to fully utilize the new weight structure and improve real energy efficiency and latency. We conduct experiments on both vision and language tasks, with nine models, four datasets, and three settings (inference-only, adaptation, and fine-tuning).
Our extensive results show that 1) when applied to inference, SmartDeal achieves up to a 2.44x improvement in energy efficiency as evaluated on real hardware implementations, and 2) when applied to training, SmartDeal leads to 10.56x and 4.48x reductions in storage and training energy cost, respectively, with usually negligible accuracy loss, compared to state-of-the-art training baselines. Our source code is available at: https://github.com/VITA-Group/SmartDeal.
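The SmartDeal algorithm itself is not reproduced in this summary; purely to illustrate the target structure it describes (a small dense basis matrix times a sparse coefficient matrix whose nonzeros are powers of two), here is a naive SVD-plus-quantization sketch, where `smartdeal_like` and its parameters are hypothetical names:

```python
import numpy as np

def power_of_2_quantize(x):
    """Round nonzero magnitudes to the nearest power of two, keeping signs."""
    out = np.zeros_like(x)
    nz = x != 0
    exp = np.round(np.log2(np.abs(x[nz])))
    out[nz] = np.sign(x[nz]) * 2.0 ** exp
    return out

def smartdeal_like(W, basis_size, sparsity):
    """Decompose W ~ B @ C with B a small dense basis and C sparse, power-of-2."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    B = U[:, :basis_size]                      # small dense basis (orthonormal cols)
    C = B.T @ W                                # coefficient matrix
    thresh = np.quantile(np.abs(C), sparsity)  # prune the smallest entries
    C[np.abs(C) < thresh] = 0.0
    return B, power_of_2_quantize(C)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))              # a hypothetical weight matrix
B, Cq = smartdeal_like(W, basis_size=8, sparsity=0.5)
print(np.mean(Cq == 0))                        # → 0.5 (half the coefficients pruned)
```

Restricting nonzero coefficients to powers of two lets a hardware multiplier be replaced by a bit shift, which is the source of the energy savings the abstract describes; the paper's actual decomposition, training techniques, and accelerator are far more elaborate than this sketch.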
  5.
    We give new and efficient black-box reconstruction algorithms for some classes of depth-3 arithmetic circuits. As a consequence, we obtain the first efficient algorithm for computing the tensor rank and for finding the optimal tensor decomposition as a sum of rank-one tensors when the input is a constant-rank tensor. More specifically, we provide efficient learning algorithms that run in randomized polynomial time over general fields, and in deterministic polynomial time over certain fields, for the following classes: 1) Set-multilinear depth-3 circuits of constant top fan-in (set-multilinear ΣΠΣ(k) circuits). As a consequence of our algorithm, we obtain the first polynomial-time algorithm for tensor-rank computation and optimal tensor decomposition of constant-rank tensors. This result holds for d-dimensional tensors for any d, but is interesting even for d = 3. 2) Sums of powers of constantly many linear forms (Σ∧Σ(k) circuits). As a consequence, we obtain the first polynomial-time algorithm for tensor-rank computation and optimal tensor decomposition of constant-rank symmetric tensors. 3) Multilinear depth-3 circuits of constant top fan-in (multilinear ΣΠΣ(k) circuits). Our algorithm works over all fields of characteristic 0 or large enough characteristic. Prior to our work, the only efficient algorithms known were over polynomially sized finite fields (see Karnin–Shpilka '09). Prior to our work, the only polynomial-time or even subexponential-time algorithms known (deterministic or randomized) for subclasses of ΣΠΣ(k) circuits that also work over large or infinite fields were for the setting where the top fan-in k is at most 2 (see Sinha '16 and Sinha '20).