Embedding properties of network realizations of dissipative reduced order models
Jörn Zimmerling, Mikhail Zaslavsky, Rob Remis, Shari Moskow, Alexander Mamonov, Murthy Guddati,
Vladimir Druskin, and Liliana Borcea
Mathematical Sciences Department, Worcester Polytechnic Institute
https://www.wpi.edu/people/vdruskin
Abstract
Realizations of reduced order models (ROMs) of passive SISO or MIMO LTI problems can be transformed to tridiagonal and block-tridiagonal forms, respectively, via different modifications of the Lanczos algorithm. Generally, such realizations can be interpreted as ladder resistor-capacitor-inductor (RCL) networks. They gave rise to network synthesis in the first half of the 20th century, which laid the foundation of modern electronics design, and subsequently to model order reduction (MOR), which has tremendously impacted many areas of engineering (electrical, mechanical, aerospace, etc.) by enabling efficient compression of the underlying dynamical systems. In his seminal works of the 1950s, Krein realized that in addition to their compression properties, network realizations can be used to embed the data back into the state space of the underlying continuum problems.
In more recent works of the authors, Krein's ideas gave rise to so-called finite-difference Gaussian quadrature rules (FDGQR), which allow the ROM state-space representation to be approximately mapped to its full-order continuum counterpart on a judiciously chosen grid. Thus, the state variables can be accessed directly from the transfer function without solving the full problem and even without explicit knowledge of the PDE coefficients in the interior, i.e., the FDGQR directly "learns" the problem from its transfer function. This embedding property has found applications in PDE solvers, inverse problems and unsupervised machine learning.
Here we show a generalization of this approach to dissipative PDE problems, e.g., electromagnetic and acoustic wave propagation in lossy dispersive media. Potential applications include the solution of inverse scattering problems in dispersive media, such as seismic exploration, radar and sonar.
To fix ideas, we consider a passive irreducible SISO ROM
f_n(s) = \sum_{j=1}^{n} \frac{y_j}{s + \sigma_j},   (62)
assuming that all complex terms in (62) come in conjugate pairs.
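As a quick illustration of (62), here is a minimal sketch (with hypothetical poles σ_j and residues y_j, chosen only to satisfy the conjugate-pair assumption) showing that the pairing makes f_n(s) real for real s:

```python
import numpy as np

# Hypothetical illustrative data: the complex pole/residue pairs are chosen
# to come in conjugate pairs, as assumed for (62).
sigma = np.array([1.0, 2.0 + 3.0j, 2.0 - 3.0j])   # poles sigma_j
y = np.array([0.5, 0.25 + 0.1j, 0.25 - 0.1j])     # residues y_j

def f(s):
    """Evaluate the partial-fraction transfer function f_n(s) = sum_j y_j / (s + sigma_j)."""
    return np.sum(y / (s + sigma))

val = f(1.5)
print(val)  # the imaginary part vanishes (up to roundoff) because terms pair up
```

Any passive irreducible SISO ROM of this form can be fed to the ladder realization discussed next.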
We will seek a ladder realization of (62) of the form
r_j u_j + v_j − v_{j−1} = −s \hat{h}_j u_j,
u_{j+1} − u_j + \hat{r}_j v_j = −s h_j v_j,   (63)
for j = 1, …, n, with boundary conditions
u_{n+1} = 0,  v_0 = −1,
and 4n real parameters h_j, \hat{h}_j, r_j and \hat{r}_j, j = 1, …, n, which can be considered, respectively, as the equivalent discrete inductances, capacitances, and primary and dual conductances. Alternatively, they can be viewed, respectively, as the masses, spring stiffnesses, and primary and dual dampers of a mechanical string. Reordering the variables brings (63) into tridiagonal form, so from the spectral measure given by (62) the coefficients of (63) can be obtained via a non-symmetric Lanczos algorithm written in J-symmetric form, and f_n(s) can be equivalently computed as
f_n(s) = u_1.
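For concreteness, here is a sketch that assembles the ladder equations into a linear system and solves for u_1. The parameter values in the usage note are hypothetical (not produced by the Lanczos algorithm), and one consistent choice of indexing of the boundary conditions is assumed:

```python
import numpy as np

def ladder_transfer(s, h, hhat, r, rhat):
    """Solve the 2n x 2n ladder system for u_1 = f_n(s).

    h, hhat, r, rhat are length-n arrays (network parameters j = 1..n stored
    at Python indices 0..n-1). The boundary values u_{n+1} = 0 and v_0 = -1
    are eliminated into the right-hand side. Illustrative sketch only.
    """
    n = len(h)
    A = np.zeros((2 * n, 2 * n), dtype=complex)
    b = np.zeros(2 * n, dtype=complex)
    # unknowns: x = [u_1, ..., u_n, v_1, ..., v_n]
    for j in range(n):
        # primary equation: (r_j + s*hhat_j) u_j + v_j - v_{j-1} = 0
        A[j, j] = r[j] + s * hhat[j]
        A[j, n + j] = 1.0
        if j == 0:
            b[j] = -1.0            # known v_0 = -1 moves to the right-hand side
        else:
            A[j, n + j - 1] = -1.0
        # dual equation: u_{j+1} - u_j + (rhat_j + s*h_j) v_j = 0
        A[n + j, j] = -1.0
        if j < n - 1:
            A[n + j, j + 1] = 1.0  # u_{n+1} = 0 drops out for j = n
        A[n + j, n + j] = rhat[j] + s * h[j]
    return np.linalg.solve(A, b)[0]   # f_n(s) = u_1
```

For n = 1 the solution reduces to the continued fraction u_1 = −1/((r_1 + s ĥ_1) + 1/(r̂_1 + s h_1)), which provides a quick sanity check; reordering the unknowns as (u_1, v_1, u_2, v_2, …) makes the system matrix tridiagonal, consistent with the tridiagonal form mentioned above.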
The cases considered in the original FDGQR correspond to either (i) real y, σ or (ii) real y and imaginary σ. Both cases are covered by the Stieltjes theorem, which yields in case (i) real positive h, \hat{h} and trivial r, \hat{r}, and in case (ii) real positive h, r and trivial \hat{h}, \hat{r}. This result allowed a simple interpretation of (62) as a staggered finite-difference approximation of the underlying PDE problem [2]. For PDEs in more than one variable (including topologically rich data manifolds), a finite-difference interpretation is obtained via MIMO extensions in block form, e.g., [4, 3].
The main difficulty in extending this approach to general passive problems is that the Stieltjes theory is no longer applicable. Moreover, the tridiagonal realization of a passive ROM transfer function (62) via the ladder network (63) cannot always be obtained in port-Hamiltonian form, i.e., the equivalent primary and dual conductances may change sign [1].
Embedding of the Stieltjes problems, e.g., case (i), was done by mapping h and \hat{h} to values of the acoustic (or electromagnetic) impedance at grid cells, which required a special coordinate stretching (known as the travel-time coordinate transform) for continuous problems. Likewise, to circumvent the possible non-positivity of the conductances in the non-Stieltjes case, we introduce an additional complex s-dependent coordinate stretching, vanishing as s → ∞ [1]. Applied in the discrete setting, this stretching induces a diagonal factorization, removes oscillating coefficients, and leads to an accurate embedding for moderate variations of the coefficients of the continuum problems, i.e., it maps the discrete coefficients onto the values of their continuum counterparts.
Not only does this embedding yield an approximate linear algebraic algorithm for the solution of inverse problems for dissipative PDEs, it also leads to new insight into the properties of their ROM realizations. We will also discuss another approach to embedding, based on the Krein-Nudelman theory [5], which results in special data-driven adaptive grids.
References
[1] Borcea, Liliana, Druskin, Vladimir, and Zimmerling, Jörn, A reduced order model approach to inverse scattering in lossy layered media, Journal of Scientific Computing, V. 89, N. 1, pp. 1–36, 2021.
[2] Druskin, Vladimir and Knizhnerman, Leonid, Gaussian spectral rules for the three-point second differences: I. A two-point positive definite problem in a semi-infinite domain, SIAM Journal on Numerical Analysis, V. 37, N. 2, pp. 403–422, 1999.
[3] Druskin, Vladimir, Mamonov, Alexander V., and Zaslavsky, Mikhail, Distance preserving model order reduction of graph-Laplacians and cluster analysis, Journal of Scientific Computing, V. 90, N. 1, pp. 1–30, 2022.
[4] Druskin, Vladimir, Moskow, Shari, and Zaslavsky, Mikhail, Lippmann-Schwinger-Lanczos algorithm for inverse scattering problems, Inverse Problems, V. 37, N. 7, 2021.
[5] Nudelman, Mark Adolfovich, The Krein string and characteristic functions of maximal dissipative operators, Journal of Mathematical Sciences, V. 124, pp. 4918–4934, 2004.
Knowledgebra: An Algebraic Learning Framework for Knowledge Graph
Knowledge graph (KG) representation learning aims to encode entities and relations into dense continuous vector spaces such that the knowledge contained in a dataset can be consistently represented. Dense embeddings trained on KG datasets benefit a variety of downstream tasks such as KG completion and link prediction. However, existing KG embedding methods fall short of providing a systematic solution for the global consistency of knowledge representation. We developed a mathematical language for KGs based on an observation of their inherent algebraic structure, which we term Knowledgebra. By analyzing five distinct algebraic properties, we proved that the semigroup is the most reasonable algebraic structure for the relation embedding of a general knowledge graph. We implemented an instantiation model, SemE, using simple matrix semigroups, which exhibits state-of-the-art performance on standard datasets. Moreover, we proposed a regularization-based method to integrate chain-like logic rules derived from human knowledge into embedding training, which further demonstrates the power of the developed language. To the best of our knowledge, by applying abstract algebra in statistical learning, this work develops the first formal language for general knowledge graphs, and it also sheds light on the problem of neural-symbolic integration from an algebraic perspective.
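The semigroup idea in the abstract can be illustrated with a toy sketch (hypothetical random matrices, not the actual SemE parameterization): if relations act on entity embeddings as d×d matrices, relation composition becomes matrix multiplication, whose associativity is exactly the defining semigroup property:

```python
import numpy as np

# Toy illustration of the semigroup structure: relations are matrices acting
# on entity embeddings; composing relations is matrix multiplication.
# Hypothetical random data, not a trained SemE model.
rng = np.random.default_rng(0)
d = 4
e = rng.standard_normal(d)          # entity embedding
R1 = rng.standard_normal((d, d))    # relation "r1" as a matrix
R2 = rng.standard_normal((d, d))    # relation "r2" as a matrix

# Applying r1 and then r2 equals applying the composed relation R2 @ R1:
step_by_step = R2 @ (R1 @ e)
composed = (R2 @ R1) @ e
print(np.allclose(step_by_step, composed))  # True: associativity of composition
```

Note that matrices need not be invertible, so this structure is a semigroup rather than a group, which is what lets it model non-invertible (e.g., many-to-one) relations.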
NSF-PAR ID: 10342594
Date Published:
Journal Name: Machine Learning and Knowledge Extraction
Volume: 4
Issue: 2
ISSN: 2504-4990
Page Range / eLocation ID: 432 to 445
Format(s): Medium: X
Sponsoring Org: National Science Foundation