Title: Taming the CFL Number for Discontinuous Galerkin Methods by Local Exponentiation
The matrix-valued exponential function can be used for time-stepping numerically stiff discretizations such as the discontinuous Galerkin method, but this approach is expensive because the matrix is dense and its application requires global communication. In this paper, we propose a local low-rank approximation to this matrix. The local low-rank construction is motivated by the nature of wave propagation and costs significantly less to apply than the full matrix exponential. The accuracy of this time-stepping method is inherited from the exponential integrator, and its local structure permits a parallel implementation. The method is expected to be useful in design and inverse problems where many solves of the PDE are required. We demonstrate the error convergence of the method for the one-dimensional (1D) Maxwell's equations on a uniform grid.
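A minimal Python sketch of the underlying idea, exponential time stepping u^{n+1} = exp(Δt A) u^n with a cheaper low-rank surrogate for the dense propagator, is given below. The toy periodic upwind advection matrix, grid size, time step, and retained rank are all illustrative assumptions, and the global truncated SVD only stands in for the paper's local low-rank construction; it is not the authors' method.

```python
# Minimal sketch of exponential time stepping, u_{n+1} = expm(dt * A) @ u_n,
# with a truncated-SVD surrogate for the dense propagator.  A is a toy
# periodic upwind advection matrix, NOT the DG operator from the paper,
# and the global SVD truncation only stands in for the paper's local
# low-rank construction.  n, dt, and the kept rank are assumed values.
import numpy as np
from scipy.linalg import expm

n, dt, rank = 200, 0.05, 40
h = 1.0 / n

# First-order periodic upwind discretization of u_t + u_x = 0.
A = (np.diag(np.ones(n - 1), -1) - np.eye(n)) / h
A[0, -1] = 1.0 / h

E = expm(dt * A)                       # exact propagator over one step (no CFL limit)

# Illustrative low-rank truncation of the propagator.
U, s, Vt = np.linalg.svd(E)
E_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

u = np.exp(-100.0 * (np.linspace(0.0, 1.0, n) - 0.5) ** 2)   # smooth initial pulse
for _ in range(20):                    # 20 steps with the approximate propagator
    u = E_lr @ u

print("propagator truncation error (spectral norm):", np.linalg.norm(E - E_lr, 2))
```

Because the exponential integrates the linear semi-discrete system exactly, the step size is not constrained by a CFL condition; the cost of forming and applying the dense propagator is what the paper's local low-rank construction is designed to reduce.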
Award ID(s):
2208164 2210286 1913076
PAR ID:
10442642
Author(s) / Creator(s):
; ;
Editor(s):
Melenk, J.M.; Perugia, I.; Schöberl, J.; Schwab, C.
Date Published:
Journal Name:
Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2020+1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Low-rank matrix approximation is an effective way to reduce or eliminate statistical redundancy among a matrix's components. Compared with traditional global low-rank methods such as the singular value decomposition (SVD), local low-rank approximation methods are better suited to uncovering interpretable data structures when a clear duality exists between the rows and columns of the matrix. Local low-rank approximation is equivalent to low-rank submatrix detection. Unfortunately, existing local low-rank approximation methods can detect only submatrices with specific mean structure, which may miss a substantial amount of true and interesting patterns. In this work, we develop a novel matrix computational framework called RPSP (Random Probing based Submatrix Propagation) that provides an effective solution to the general local low-rank representation problem. RPSP detects local low-rank patterns that grow from small submatrices with the low-rank property, which are determined by a random projection approach. RPSP is supported by the theory of random projection. Experiments on synthetic data demonstrate that RPSP outperforms state-of-the-art methods, robustly and correctly identifying low-rank submatrices when the pattern has a mean similar to the background, when the background noise is heteroscedastic, and when multiple patterns are present in the data. On real-world datasets, RPSP also demonstrates its effectiveness in identifying interpretable local low-rank matrices.
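The toy snippet below (not RPSP itself) only illustrates why local low-rank fitting differs from a global truncated SVD: a rank-1 block planted in noise is fit better by a rank-1 approximation computed on that block alone than by the restriction of a global rank-1 SVD. The block indices are assumed known here; detecting them is the problem RPSP addresses.

```python
# Toy illustration (not RPSP itself): a rank-1 submatrix planted in noise
# is fit better by a rank-1 approximation computed on that block alone
# than by the restriction of a global rank-1 SVD.  The block indices are
# assumed known here; detecting them is the problem RPSP solves.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(200, 150))                              # noisy background
rows, cols = np.arange(30), np.arange(40)                    # hypothetical planted block
M[np.ix_(rows, cols)] += 3.0 * np.outer(rng.normal(size=30), rng.normal(size=40))

# Global rank-1 approximation of the whole matrix.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
global_fit = s[0] * np.outer(U[:, 0], Vt[0, :])

# Local rank-1 approximation fitted only on the planted block.
Block = M[np.ix_(rows, cols)]
Ub, sb, Vbt = np.linalg.svd(Block, full_matrices=False)
local_fit = sb[0] * np.outer(Ub[:, 0], Vbt[0, :])

err_global = np.linalg.norm(Block - global_fit[np.ix_(rows, cols)])
err_local = np.linalg.norm(Block - local_fit)
print(f"block residual  global SVD: {err_global:.2f}   local fit: {err_local:.2f}")
```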
  2. Low-rank matrix models have been universally useful for numerous applications, from classical system identification to more modern matrix completion in signal processing and statistics. The nuclear norm has been employed as a convex surrogate for low-rankness, since it induces low-rank solutions to inverse problems. While the nuclear norm for low-rankness has an excellent analogy with the $$\ell _1$$ norm for sparsity through the singular value decomposition, other matrix norms also induce low-rankness. In particular, when one interprets a matrix as a linear operator between Banach spaces, various tensor product norms generalize the role of the nuclear norm. We provide a tensor-norm-constrained estimator for the recovery of approximately low-rank matrices from local measurements corrupted with noise. A tensor-norm regularizer is designed to adapt to the local structure. We derive a statistical analysis of the estimator for matrix completion and decentralized sketching by applying Maurey's empirical method to tensor products of Banach spaces. The estimator provides a near-optimal error bound in a minimax sense and admits a polynomial-time algorithm for these applications.
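For context, the sketch below implements the classical nuclear-norm surrogate the abstract refers to, in its simplest soft-impute form (iterative singular value soft-thresholding) for matrix completion; the problem sizes, sampling rate, and regularization weight are illustrative assumptions, and the tensor-norm-constrained estimator itself is not reproduced.

```python
# Soft-impute sketch of nuclear-norm regularized matrix completion:
# iterate "fill missing entries with the current estimate, then apply
# singular value soft-thresholding" (the prox of the nuclear norm).
# Sizes, sampling rate, and the weight lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, r, lam, iters = 100, 5, 1.0, 200
X_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # low-rank ground truth
mask = rng.random((n, n)) < 0.3                              # ~30% of entries observed

X = np.zeros((n, n))
for _ in range(iters):
    Y = np.where(mask, X_true, X)                # observed data + current estimate
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    X = (U * np.maximum(s - lam, 0.0)) @ Vt      # soft-threshold the singular values

rel_err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {rel_err:.3f}")
```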
  3. Artificial Intelligence (AI) has played an important role in data-driven decision making for complex engineering problems. However, a great deal of effort is wasted configuring AI methods (e.g., selecting preprocessing and modeling methods) for different contexts (e.g., data analytics objectives, data distributions). In current practice, data scientists manually configure AI methods by trial and error for each specific context, determining the options for the pipeline components and evaluating the advantages and limitations of each AI method. In this paper, we propose a Local Low-rank Response Imputation (Lori) method that automatically configures AI methods to specific contexts by completing a sparse context-pipeline response matrix. Unlike traditional recommendation systems, Lori performs a multivariate partition of the entire context-pipeline response matrix based on the principal Hessian directions of the low-rank imputed response matrix. The partitioned local low-rank response matrices can then be modeled closely to automatically match AI methods with data sets. A small-scale and a large-scale case study in three manufacturing processes demonstrate the merits of the proposed Lori method.
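A hypothetical, much-simplified illustration of the recommendation step in a Lori-style workflow is sketched below: a sparse context-by-pipeline response matrix is completed with a rank-k imputation loop and the best pipeline is chosen per context by an argmax over imputed responses. The partition along principal Hessian directions described in the abstract is not reproduced, and all sizes, ranks, and sampling rates are made up.

```python
# Hypothetical, simplified Lori-style recommendation step: complete a
# sparse context-by-pipeline response matrix with a rank-k imputation
# loop, then pick the best pipeline per context.  The partition along
# principal Hessian directions from the paper is not reproduced, and the
# sizes, rank, and sampling rate below are made up.
import numpy as np

rng = np.random.default_rng(2)
n_ctx, n_pipe, k = 40, 12, 3
R_true = rng.random((n_ctx, k)) @ rng.random((k, n_pipe))   # latent pipeline responses
observed = rng.random((n_ctx, n_pipe)) < 0.4                # pipelines actually evaluated

# Rank-k hard imputation: truncate, then restore the observed entries.
R = np.where(observed, R_true, R_true[observed].mean())
for _ in range(50):
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    R_low = (U[:, :k] * s[:k]) @ Vt[:k, :]
    R = np.where(observed, R_true, R_low)

best_pipeline = R.argmax(axis=1)            # one recommended pipeline per context
print("recommended pipeline for the first 5 contexts:", best_pipeline[:5])
```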
  4. Low-rank matrix recovery is a fundamental problem in machine learning with numerous applications. In practice, the problem can be solved by convex optimization, namely nuclear norm minimization, or by non-convex optimization, as it is well known that for low-rank matrix problems like matrix sensing and matrix completion, all local optima of the natural non-convex objectives are also globally optimal under certain ideal assumptions. In this paper, we study new approaches for matrix sensing in a semi-random model where an adversary can add any number of arbitrary sensing matrices. More precisely, the problem is to recover a low-rank matrix $$X^\star$$ from linear measurements $$b_i = \langle A_i, X^\star \rangle$$, where an unknown subset of the sensing matrices satisfies the Restricted Isometry Property (RIP) and the rest of the $$A_i$$'s are chosen adversarially. It is known that in the semi-random model, existing non-convex objectives can have bad local optima. To fix this, we present a descent-style algorithm that provably recovers the ground-truth matrix $$X^\star$$. For the closely related problem of semi-random matrix completion, prior work [CG18] showed that all bad local optima can be eliminated by reweighting the input data. However, the analogous approach for matrix sensing requires reweighting a set of matrices to satisfy RIP, which is a condition that is NP-hard to check. Instead, we build on the framework proposed in [KLL$^+$23] for semi-random sparse linear regression, where the algorithm in each iteration reweights the input based on the current solution, and then takes a weighted gradient step that is guaranteed to work well locally. Our analysis crucially exploits the connection between sparsity in vector problems and low-rankness in matrix problems, which may have other applications in obtaining robust algorithms for sparse and low-rank problems.
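As background for the abstract above, the sketch below runs plain factored gradient descent for ordinary (fully random) matrix sensing, b_i = ⟨A_i, X*⟩, without the reweighting step that handles adversarial sensing matrices; dimensions, step size, and iteration count are illustrative assumptions.

```python
# Plain factored gradient descent for ordinary matrix sensing,
# b_i = <A_i, X*> with i.i.d. Gaussian sensing matrices.  This is only the
# base (non-adversarial) problem; the reweighting step for the semi-random
# model is not reproduced.  Sizes, step size, and iteration count are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, r, m, lr = 30, 2, 600, 0.002
X_star = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-r ground truth
A = rng.normal(size=(m, n, n)) / np.sqrt(m)                  # Gaussian sensing matrices
b = np.einsum('mij,ij->m', A, X_star)                        # linear measurements

U = 0.1 * rng.normal(size=(n, r))                            # small random init of X = U V^T
V = 0.1 * rng.normal(size=(n, r))
for _ in range(2000):
    resid = np.einsum('mij,ij->m', A, U @ V.T) - b           # measurement residuals
    G = np.einsum('m,mij->ij', resid, A)                     # gradient w.r.t. X
    U, V = U - lr * (G @ V), V - lr * (G.T @ U)              # simultaneous factor updates

rel_err = np.linalg.norm(U @ V.T - X_star) / np.linalg.norm(X_star)
print("relative recovery error:", rel_err)
```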
  5. The rank of a bimatrix game is the matrix rank of the sum of the two payoff matrices. This paper comprehensively analyzes games of rank one and shows the following: (1) For a game of rank r, the set of its Nash equilibria is the intersection of a generically one-dimensional set of equilibria of parameterized games of rank r − 1 with a hyperplane. (2) One equilibrium of a rank-1 game can be found in polynomial time. (3) All equilibria of a rank-1 game can be found by following a piecewise linear path. In contrast, such a path-following method finds only one equilibrium of a bimatrix game. (4) The number of equilibria of a rank-1 game may be exponential. (5) There is a homeomorphism between the space of bimatrix games and their equilibrium correspondence that preserves rank. It is a variation of the homeomorphism used for the concept of strategic stability of an equilibrium component. 
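A small illustration of the game-rank notion (the rank of A + B) is given below, together with the standard zero-sum LP that handles the rank-0 case; the rank-1 path-following algorithm analyzed in the paper is not reproduced, and the payoff matrices are a made-up matching-pennies example.

```python
# Small illustration of the "rank of a bimatrix game": rank(A + B).
# Rank 0 means the game is zero-sum, so an equilibrium strategy for the
# row player comes from the standard zero-sum LP below.  The rank-1
# algorithm of the paper is not reproduced; the payoffs are made up.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs (matching pennies)
B = -A                                     # zero-sum opponent, so rank(A + B) = 0
print("game rank:", np.linalg.matrix_rank(A + B))

# Row player: maximize v subject to (A^T x)_j >= v for every column j,
# with x in the probability simplex.  Variables are [x_1, ..., x_m, v].
m, n = A.shape
c = np.zeros(m + 1)
c[-1] = -1.0                               # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum_i x_i = 1
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)])
print("row strategy:", res.x[:m], "game value:", res.x[-1])
```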