Abstract
In this paper, we propose a novel Local Macroscopic Conservative (LoMaC) low rank tensor method for simulating the Vlasov-Poisson (VP) system. The LoMaC property refers to the exact local conservation of macroscopic mass, momentum, and energy at the discrete level. This is a follow-up to our previous development of a conservative low rank tensor approach for Vlasov dynamics (arXiv:2201.10397). In that work, we applied a low rank tensor method with a conservative singular value decomposition to the high dimensional VP system to mitigate the curse of dimensionality, while maintaining the local conservation of mass and momentum. However, energy conservation is not guaranteed, a critical property for avoiding unphysical plasma self-heating or cooling. The new ingredient in the LoMaC low rank tensor algorithm is that we simultaneously evolve the macroscopic conservation laws of mass, momentum, and energy in a flux-difference form with kinetic flux vector splitting; the LoMaC property is then realized by a conservative orthogonal projection of the low rank kinetic solution onto a subspace that shares the same macroscopic observables. The algorithm is extended to high dimensional problems by a hierarchical Tucker decomposition of the solution tensors and a corresponding conservative projection algorithm. Extensive numerical tests on the VP system showcase the algorithm's efficacy.
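As a rough illustration of the LoMaC idea (not the paper's exact operator), the following sketch enforces matching macroscopic moments on a simple 1D1V grid. The Maxwellian-type weight, the grid layout, and all names are assumptions made purely for illustration:

```python
import numpy as np

def lomac_moment_fix(f, v, dv, rho, J, kappa):
    """Illustrative sketch: after a low rank update of f(x, v), add a
    rank-3 (in v) correction so the discrete mass, momentum, and energy
    densities match the macroscopic fields rho(x), J(x), kappa(x) that
    were evolved separately in flux-difference form."""
    w = np.exp(-0.5 * v**2)                            # assumed weight function
    basis = np.vstack([w, v * w, v**2 * w])            # correction space span{w, vw, v^2 w}
    phi = np.vstack([np.ones_like(v), v, 0.5 * v**2])  # moment test functions
    M = phi @ basis.T * dv                             # 3x3 moment matrix of the basis
    moments = f @ phi.T * dv                           # (Nx, 3) current moments of f
    target = np.column_stack([rho, J, kappa])          # (Nx, 3) macroscopic solution
    alpha = np.linalg.solve(M, (target - moments).T).T # per-x correction coefficients
    return f + alpha @ basis                           # moments now equal (rho, J, kappa)
```

In the paper the correction is instead formulated as a conservative orthogonal projection in a weighted inner product and carried out within the low rank (hierarchical Tucker) format; the dense-grid version above only conveys the moment-matching mechanism.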
A Conservative Low Rank Tensor Method for the Vlasov Dynamics
In this paper, we propose a conservative low rank tensor method to approximate nonlinear Vlasov solutions. The low rank approach is based on our earlier work [W. Guo and J.-M. Qiu, A Low Rank Tensor Representation of Linear Transport and Nonlinear Vlasov Solutions and Their Associated Flow Maps, preprint, https://arxiv.org/abs/2106.08834, 2021]. It takes advantage of the fact that the differential operators in the Vlasov equation are tensor friendly, based on which we propose to dynamically and adaptively build up a low rank solution basis by adding new basis functions from the discretization of the differential equation and removing basis functions via a singular value decomposition (SVD)-type truncation procedure. For the discretization, we adopt a high order finite difference spatial discretization together with a second order strong stability preserving multistep time discretization. While the SVD truncation removes redundancy in representing the high dimensional Vlasov solution, it destroys the conservation properties of the associated full conservative scheme. In this paper, we develop a conservative truncation procedure that conserves the mass, momentum, and kinetic energy densities. The conservative truncation is achieved by an orthogonal projection onto a subspace spanned by 1, v, and v^2 in the velocity space, associated with a weighted inner product. The algorithm then performs a weighted SVD truncation of the remainder, which involves a scaling, followed by the standard SVD truncation, and a rescaling back. The algorithm is further developed in high dimensions with a hierarchical Tucker tensor decomposition of high dimensional Vlasov solutions, overcoming the curse of dimensionality. An extensive set of nonlinear Vlasov examples is performed to show the effectiveness and conservation properties of the proposed conservative low rank approach. Comparisons are made against the nonconservative low rank tensor approach on the conservation history of mass, momentum, and energy.
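To make the truncation concrete, here is a minimal 1D1V sketch under assumed details: a positive weight w(v) defining the inner product, a dense (Nx, Nv) grid, and a tolerance-based rank cut, none of which are prescribed by the abstract:

```python
import numpy as np

def conservative_truncate(f, v, dv, w, tol=1e-6):
    """Sketch of a conservative SVD truncation of f(x, v) on a grid.
    <g, h>_w = sum_v g(v) h(v) w(v) dv is the weighted inner product."""
    # Step 1: project f/w onto span{1, v, v^2} in the w-inner product,
    # so the remainder f2 has zero discrete mass/momentum/energy moments.
    P = np.vstack([np.ones_like(v), v, v**2])   # (3, Nv) monomials
    G = (P * w) @ P.T * dv                      # weighted Gram matrix
    C = f @ P.T * dv                            # <f/w, v^m>_w = sum_v f v^m dv
    f1 = np.linalg.solve(G, C.T).T @ (P * w)    # f1 = w * proj_w(f/w)
    f2 = f - f1
    # Step 2: weighted SVD truncation of the remainder: scale by sqrt(w),
    # truncate with a standard SVD, and scale back.
    s = np.sqrt(w)
    U, S, Vt = np.linalg.svd(f2 / s, full_matrices=False)
    r = max(1, int(np.sum(S > tol * S[0])))     # assumed rank-selection rule
    f2_trunc = (U[:, :r] * S[:r]) @ Vt[:r] * s
    return f1 + f2_trunc
```

The ordering matters: every row of f2/sqrt(w) is Euclidean-orthogonal to sqrt(w), sqrt(w)*v, and sqrt(w)*v^2, and SVD truncation keeps rows inside the original row space, so the zero moments of the remainder survive the truncation exactly.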
- PAR ID: 10524242
- Publisher / Repository: SIAM
- Date Published:
- Journal Name: SIAM Journal on Scientific Computing
- Volume: 46
- Issue: 1
- ISSN: 1064-8275
- Page Range / eLocation ID: A232 to A263
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks. We decompose a tensor as the product of low-rank tensor factors, where each low-rank factor is generated by a deep network (DN) trained in a self-supervised manner to minimize the mean-square approximation error. Our key observation is that the implicit regularization inherent in DNs enables them to capture nonlinear signal structures that are out of the reach of classical linear methods such as the singular value decomposition (SVD) and principal components analysis (PCA). We demonstrate that DeepTensor is robust to a wide range of distributions and serves as a computationally efficient drop-in replacement for the SVD, PCA, nonnegative matrix factorization (NMF), and similar decompositions by exploring a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification.
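A minimal matrix-case sketch of this idea in PyTorch, with an architecture and hyperparameters chosen purely for illustration (the paper's networks and tensor factorizations are more general):

```python
import torch
import torch.nn as nn

def deep_lowrank(X, rank=10, steps=2000, lr=1e-3, width=64):
    """Illustrative sketch: approximate a matrix X (m x n) by U @ V.T,
    where each factor is the output of a small deep network trained
    self-supervised to minimize the mean-square error."""
    m, n = X.shape
    def make_net(rows):
        net = nn.Sequential(nn.Linear(rank, width), nn.ReLU(),
                            nn.Linear(width, rank))
        return net, torch.randn(rows, rank)   # fixed random network input
    net_u, z_u = make_net(m)
    net_v, z_v = make_net(n)
    opt = torch.optim.Adam(list(net_u.parameters()) + list(net_v.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        U, V = net_u(z_u), net_v(z_v)         # DN-generated low-rank factors
        loss = ((U @ V.T - X) ** 2).mean()    # self-supervised MSE objective
        loss.backward()
        opt.step()
    return (net_u(z_u) @ net_v(z_v).T).detach()
```

The networks' implicit regularization, rather than an explicit penalty, is what distinguishes this from directly optimizing the factor entries.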
This paper introduces a general framework of Semi-parametric TEnsor Factor Analysis (STEFA) that focuses on the methodology and theory of low-rank tensor decomposition with auxiliary covariates. STEFA models extend tensor factor models by incorporating auxiliary covariates in the loading matrices. We propose an algorithm of iteratively projected singular value decomposition (IP-SVD) for the semi-parametric estimation. It iteratively projects tensor data onto the linear space spanned by the basis functions of covariates and applies singular value decomposition on matricized tensors over each mode. We establish the convergence rates of the loading matrices and the core tensor factor. The theoretical results only require a sub-exponential noise distribution, which is weaker than the assumption of a sub-Gaussian noise tail in the literature. Compared with the Tucker decomposition, IP-SVD yields more accurate estimators with a faster convergence rate. Besides estimation, we propose several prediction methods with new covariates based on the STEFA model. On both synthetic and real tensor data, we demonstrate the efficacy of the STEFA model and the IP-SVD algorithm on both estimation and prediction tasks.
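A projected-SVD loop of this flavor can be sketched as follows for a d-way tensor; the initialization, contraction order, and stopping rule here are assumptions rather than the paper's specification:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def ip_svd(Y, Phis, ranks, n_iter=10):
    """Illustrative IP-SVD-style sketch. Phis[k] holds covariate basis
    functions for mode k; ranks[k] is the loading rank for that mode."""
    d = Y.ndim
    # Orthogonal projection onto the column space of each covariate basis
    P = [Phi @ np.linalg.pinv(Phi.T @ Phi) @ Phi.T for Phi in Phis]
    # Initialize loadings from SVDs of the projected unfoldings
    A = [np.linalg.svd(P[k] @ unfold(Y, k), full_matrices=False)[0][:, :ranks[k]]
         for k in range(d)]
    for _ in range(n_iter):
        for k in range(d):
            Z = Y
            for j in range(d):                # contract the other modes
                if j != k:
                    Z = np.moveaxis(np.tensordot(A[j].T, np.moveaxis(Z, j, 0), axes=1), 0, j)
            # Project onto the covariate space, then take top singular vectors
            A[k] = np.linalg.svd(P[k] @ unfold(Z, k), full_matrices=False)[0][:, :ranks[k]]
    G = Y                                     # core tensor from final loadings
    for j in range(d):
        G = np.moveaxis(np.tensordot(A[j].T, np.moveaxis(G, j, 0), axes=1), 0, j)
    return G, A
```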
Tucker decomposition is a low-rank tensor approximation that generalizes a truncated matrix singular value decomposition (SVD). Existing parallel software has shown that Tucker decomposition is particularly effective at compressing terabyte-sized multidimensional scientific simulation datasets, computing reduced representations that satisfy a specified approximation error. The general approach is to obtain a low-rank approximation of the input data by performing a sequence of matrix SVDs of tensor unfoldings, which tend to be short-fat matrices. In the existing approach, the SVD is performed by computing the eigendecomposition of the Gram matrix of the unfolding. This method sacrifices some numerical stability in exchange for lower computation costs and easier parallelization. We propose a more numerically stable, though more computationally expensive, way to compute the SVD: preprocessing with a QR decomposition step and computing an SVD of only the small triangular factor. The more numerically stable approach allows us to achieve the same accuracy with half the working precision (for example, single rather than double precision). We demonstrate that our method scales as well as the existing approach, and the use of lower precision leads to an overall reduction in running time of up to a factor of 2 when using 10s to 1000s of processors. Using the same working precision, we are also able to compute Tucker decompositions with much smaller approximation error.
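The two routes for the left singular vectors of a short-fat unfolding A can be contrasted in a few lines; function names are ours, and the truncation rank r is an input:

```python
import numpy as np

def left_singular_gram(A, r):
    """Gram-matrix route: eigendecomposition of A @ A.T. Cheap and easy
    to parallelize, but squaring A squares its condition number."""
    w, U = np.linalg.eigh(A @ A.T)        # eigenvalues in ascending order
    return U[:, ::-1][:, :r]              # top-r left singular vectors

def left_singular_qr(A, r):
    """QR route described in the text: factor A.T = Q R, then take the
    SVD of only the small triangular factor (A = R.T @ Q.T)."""
    Q, R = np.linalg.qr(A.T)              # reduced QR of the tall-skinny A.T
    U, s, Vt = np.linalg.svd(R.T, full_matrices=False)
    return U[:, :r]
```

For an ill-conditioned unfolding, the Gram route loses singular values below roughly sqrt(machine epsilon) times the largest one; that halving of usable precision is exactly what the QR preprocessing avoids.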
With the advent of machine learning and its overarching pervasiveness, it is imperative to devise ways to represent large datasets efficiently while distilling intrinsic features necessary for subsequent analysis. The primary workhorse used in data dimensionality reduction and feature extraction has been the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. A primary goal in this study is to show that high-dimensional datasets are more compressible when treated as tensors (i.e., multiway arrays) and compressed via tensor-SVDs under the tensor-tensor product constructs and its generalizations. We begin by proving Eckart-Young optimality results for families of tensor-SVDs under two different truncation strategies. Since such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is positive, as proven by showing that a tensor-tensor representation of an equal dimensional spanning space can be superior to its matrix counterpart. We then use these optimality results to investigate how the compressed representation provided by the truncated tensor SVD is related, both theoretically and empirically, to its two closest tensor-based analogs: the truncated high-order SVD and the truncated tensor-train SVD.
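For the t-product defined via the discrete Fourier transform along the third mode, a truncated tensor-SVD can be sketched as follows; the fixed-rank truncation of every frontal slice shown here is one plausible strategy, stated as an assumption rather than the paper's choice:

```python
import numpy as np

def tsvd_truncate(A, r):
    """Illustrative truncated t-SVD of a real 3-way tensor A under the
    standard t-product: FFT along mode 3, rank-r matrix SVD truncation
    of each frontal slice in the Fourier domain, inverse FFT back."""
    Ahat = np.fft.fft(A, axis=2)                      # slices in Fourier domain
    Bhat = np.empty_like(Ahat)
    for k in range(A.shape[2]):
        U, s, Vt = np.linalg.svd(Ahat[:, :, k], full_matrices=False)
        Bhat[:, :, k] = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r slice approximation
    return np.real(np.fft.ifft(Bhat, axis=2))         # back to the spatial domain
```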