Title: High-Dimensional Uncertainty Quantification via Tensor Regression with Rank Determination and Adaptive Sampling
Fabrication process variations can significantly influence the performance and yield of nano-scale electronic and photonic circuits. Stochastic spectral methods have achieved great success in quantifying the impact of process variations, but they suffer from the curse of dimensionality. Recently, low-rank tensor methods have been developed to mitigate this issue, but two fundamental challenges remain open: how to automatically determine the tensor rank and how to adaptively pick informative simulation samples. This paper proposes a novel tensor regression method to address these two challenges. We use an ℓq/ℓ2 group-sparsity regularization to determine the tensor rank. The resulting optimization problem can be efficiently solved via an alternating minimization solver. We also propose a two-stage adaptive sampling method to reduce the simulation cost. Our method balances exploration and exploitation via an estimated Voronoi cell volume and a nonlinearity measure, respectively. The proposed model is verified on synthetic and realistic circuit benchmarks, where it captures the uncertainty caused by 19 to 100 random variables with only 100 to 600 simulation samples.
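The rank-determination mechanism described above can be illustrated with a minimal sketch. The proximal (shrinkage) step below is a generic ℓ1/ℓ2 group-soft-thresholding update, not the paper's exact ℓq/ℓ2 regularizer or its alternating minimization solver; the function name and the toy factor matrix are illustrative assumptions.

```python
import numpy as np

def group_shrink(F, lam):
    """Proximal step for a group (l1/l2) penalty on a CP factor matrix F:
    each column corresponds to one rank-1 component, and columns whose
    l2 norm falls below lam are zeroed, pruning that rank component."""
    out = np.zeros_like(F)
    for r in range(F.shape[1]):
        nrm = np.linalg.norm(F[:, r])
        if nrm > lam:
            out[:, r] = (1.0 - lam / nrm) * F[:, r]
    return out

F = np.array([[3.0, 0.1],
              [4.0, 0.2]])       # two candidate rank components
G = group_shrink(F, 1.0)
# the weak second column is driven to zero, so the effective rank drops to 1
```

Embedded in an alternating minimization loop, this kind of whole-column shrinkage is what lets the optimizer start from an over-estimated rank and discard redundant components automatically.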
Award ID(s):
1763699
PAR ID:
10282883
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Components, Packaging and Manufacturing Technology
ISSN:
2156-3950
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Uncertainty quantification based on stochastic spectral methods suffers from the curse of dimensionality. This issue was mitigated recently by low-rank tensor methods. However, two fundamental challenges remain in low-rank tensor-based uncertainty quantification: how to automatically determine the tensor rank and how to pick the simulation samples. This paper proposes a novel tensor regression method to address these two challenges. Our method uses an ℓ2,p-norm regularization to determine the tensor rank and an estimated Voronoi diagram to pick informative samples for simulation. The proposed framework is verified on a 19-dim photonic bandpass filter and a 57-dim CMOS ring oscillator, capturing the high-dimensional uncertainty well with only 90 and 290 samples, respectively.
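The Voronoi-based sample picking mentioned here can be sketched as a Monte-Carlo volume estimate: each existing sample owns the region of input space closest to it, and the sample with the largest estimated cell marks the least-explored region. This is a generic exploration heuristic sketch, not the paper's full two-stage criterion; names and the unit-cube domain are assumptions.

```python
import numpy as np

def estimated_voronoi_volumes(samples, n_probe=10000, seed=0):
    """Monte-Carlo estimate of each sample's Voronoi cell volume in
    [0, 1]^d: the fraction of uniform probe points nearest to it."""
    rng = np.random.default_rng(seed)
    probes = rng.random((n_probe, samples.shape[1]))
    # squared distance from every probe to every existing sample
    d2 = ((probes[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    return np.bincount(nearest, minlength=len(samples)) / n_probe

pts = np.array([[0.1, 0.1],
                [0.9, 0.9]])
vols = estimated_voronoi_volumes(pts)
# by symmetry each cell covers about half of the unit square;
# a new simulation would be placed inside the largest cell
```

Estimating volumes by counting probes sidesteps constructing the exact Voronoi diagram, which becomes impractical in the high-dimensional spaces these benchmarks live in.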
  2. A major challenge in many machine learning tasks is that model expressive power depends on model size. Low-rank tensor methods are an efficient tool for handling the curse of dimensionality in many large-scale machine learning models. The major challenges in training a tensor learning model include how to process high-volume data, how to determine the tensor rank automatically, and how to estimate the uncertainty of the results. While existing tensor learning methods focus on a specific task, this paper proposes a generic Bayesian framework that can be employed to solve a broad class of tensor learning problems, such as tensor completion, tensor regression, and tensorized neural networks. We develop a low-rank tensor prior for automatic rank determination in nonlinear problems. Our method is implemented with both stochastic gradient Hamiltonian Monte Carlo (SGHMC) and Stein variational gradient descent (SVGD), and we compare the automatic rank determination and uncertainty quantification of these two solvers. We demonstrate that our proposed method can determine the tensor rank automatically and quantify the uncertainty of the obtained results. We validate our framework on tensor completion and tensorized neural network training tasks.
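The "low-rank tensor prior for automatic rank determination" can be sketched in the spirit of automatic relevance determination (ARD): each rank component gets its own prior precision, and components that carry no energy have their precision blow up, effectively pruning them. This is a minimal closed-form sketch under a Gaussian-factor/Gamma-hyperprior assumption, not the paper's actual prior or its SGHMC/SVGD inference; all names are illustrative.

```python
import numpy as np

def ard_precision_update(factors, a0=1e-6, b0=1e-6):
    """One closed-form update of per-rank precisions lambda_r for CP
    factors [A1 (I1 x R), ..., Ad (Id x R)], assuming each column entry
    has prior N(0, 1/lambda_r) and lambda_r ~ Gamma(a0, b0)."""
    rows = sum(F.shape[0] for F in factors)          # entries per rank group
    energy = sum((F ** 2).sum(axis=0) for F in factors)  # per-rank energy
    return (a0 + 0.5 * rows) / (b0 + 0.5 * energy)

factors = [np.array([[1.0, 1e-8], [1.0, 1e-8]]),
           np.array([[1.0, 1e-8]] * 3)]
prec = ard_precision_update(factors)
# the second rank component carries almost no energy, so its precision
# explodes and the corresponding rank-1 term is effectively pruned
```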
  3. Consider a multi-aspect tensor dataset which is only observed in multiple complementary aggregated versions, each one at a lower resolution than the highest available one. Recent work [2] has demonstrated that given two such tensors, aggregated at lower resolutions in complementary dimensions, we can pose and solve the disaggregation as an instance of a coupled tensor decomposition. In this work, we explore the scenario in which, in addition to the two complementary aggregated views, we also have access to a graph whose nodes correspond to samples of the tensor mode that has not been aggregated. Given this graph, we propose a graph-assisted tensor disaggregation method. In our experimental evaluation, we demonstrate that our proposed method performs on par with the state of the art when the rank of the underlying coupled tensor decomposition is low, and significantly outperforms the state of the art as the rank increases, producing more robust and higher-quality disaggregation.
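The "complementary aggregated views" setup can be made concrete in the matrix special case: the same low-rank data is observed once with coarsened rows and once with coarsened columns. This is a toy illustration of the problem setup only, not the disaggregation algorithm; the aggregation matrices and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 9))      # rank-2 ground-truth data
# two complementary aggregated views: one coarsens rows, the other columns
S_rows = np.kron(np.eye(3), np.ones((1, 2)))     # sums pairs of rows (3 x 6)
S_cols = np.kron(np.eye(3), np.ones((1, 3)))     # sums triples of cols (3 x 9)
Y1 = S_rows @ X        # low row resolution, full column resolution
Y2 = X @ S_cols.T      # full row resolution, low column resolution
# disaggregation asks for X given only (Y1, Y2): since X = A @ B.T is
# low-rank, Y1 = (S_rows @ A) @ B.T and Y2 = A @ (S_cols @ B).T share the
# same factors, which is what couples the two decompositions
```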
  4. Tensor decomposition is an effective approach to compress over-parameterized neural networks and to enable their deployment on resource-constrained hardware platforms. However, directly applying tensor compression in the training process is a challenging task due to the difficulty of choosing a proper tensor rank. In order to address this challenge, this paper proposes a low-rank Bayesian tensorized neural network. Our Bayesian method performs automatic model compression via adaptive tensor rank determination. We also present approaches for posterior density calculation and maximum a posteriori (MAP) estimation for the end-to-end training of our tensorized neural network. We provide experimental validation on a two-layer fully connected neural network, a 6-layer CNN, and a 110-layer residual neural network, where our method produces 7.4X to 137X more compact neural networks directly from training while achieving high prediction accuracy.
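The compression idea can be sketched in its simplest form: factor a dense weight matrix into two thin factors, the matrix special case of tensor-decomposition compression. Note the sketch fixes the rank by hand, whereas the paper's Bayesian method determines it adaptively during training; the function name is an assumption.

```python
import numpy as np

def compress_layer(W, rank):
    """Factor a dense weight matrix W (m x n) into A (m x rank) and
    B (rank x n) via truncated SVD, so the layer stores (m + n) * rank
    parameters instead of m * n, and W @ x becomes A @ (B @ x)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))  # rank-4 weights
A, B = compress_layer(W, 4)
# a genuinely low-rank layer is reproduced exactly with far fewer parameters
```

Higher-order formats (Tucker, tensor train) generalize this factorization to weight tensors with more than two modes, which is where the 7.4X to 137X compression figures come from.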
  5. Tensor dimensionality reduction is one of the fundamental tools of modern data science. To reduce the high computational overhead, fiber-wise sampled subtensors that preserve the original tensor rank are often used in designing efficient and scalable tensor dimensionality reduction methods. However, the theory of property inheritance for subtensors is still under development; that is, how the essential properties of the original tensor are passed on to its subtensors. This paper theoretically studies the inheritance of two key tensor properties, namely incoherence and condition number, under the tensor-train setting. We also show how the tensor-train rank is preserved through fiber-wise sampling. The key parameters introduced in the theorems are numerically evaluated under various settings. The results show that the properties of interest can be well preserved in subtensors formed via fiber-wise sampling. Overall, this paper provides several handy analytic tools for developing efficient tensor analysis methods.
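Rank preservation under fiber-wise sampling is easy to see in the matrix special case, where fibers are just rows and columns. The sketch below is only an illustration of the phenomenon for random data; the paper's tensor-train results are more general, and the sizes and seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# matrix special case of a low-rank tensor: exact rank 3
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
rows = rng.choice(50, size=10, replace=False)   # sampled row fibers
cols = rng.choice(40, size=10, replace=False)   # sampled column fibers
sub = A[np.ix_(rows, cols)]
# generically, a subtensor built from enough sampled fibers inherits the
# original rank, so downstream analysis can run on the much smaller block
```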