

This content will become publicly available on June 7, 2026

Title: Minimal pole representation for spectral functions
Representing spectral densities and the real-frequency and real-time Green's functions of continuous systems by a small discrete set of complex poles is a ubiquitous problem in condensed matter physics, with applications ranging from quantum transport simulations to the simulation of strongly correlated electron systems. This paper introduces a method for obtaining a compact, approximate representation of these functions, based on their parameterization on the real axis and a given approximate precision. We show applications to typical spectral functions and results for structured and unstructured correlation functions of model systems.
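As a toy illustration of the general idea (not the paper's algorithm), a Green's function built from a handful of complex poles z_k with weights a_k, G(w) = sum_k a_k / (w - z_k), gives a spectral function A(w) = -Im G(w) / pi that is cheap to evaluate anywhere on the real axis. The poles, weights, and frequency grid below are invented for demonstration:

```python
import numpy as np

# Hypothetical compact pole representation: three poles in the lower
# half-plane (Im z_k < 0) with real weights that sum to one.
poles = np.array([-1.0 - 0.1j, 0.5 - 0.05j, 2.0 - 0.2j])
weights = np.array([0.3, 0.5, 0.2])

def green(w):
    """Evaluate G(w) = sum_k a_k / (w - z_k) on a real frequency grid."""
    return np.sum(weights / (w[:, None] - poles[None, :]), axis=1)

# Spectral function A(w) = -Im G(w) / pi: a sum of Lorentzian peaks.
w = np.linspace(-4.0, 4.0, 2001)
A = -green(w).imag / np.pi

# With poles in the lower half-plane and positive weights, A is positive
# and its integral approaches the total weight (tails are cut off here).
norm = A.sum() * (w[1] - w[0])
```

Each pole contributes a Lorentzian of width |Im z_k| centered at Re z_k, which is why a few poles can summarize a structured spectral density.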
Award ID(s):
2310182
PAR ID:
10616931
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
American Institute of Physics
Date Published:
Journal Name:
The Journal of Chemical Physics
Volume:
162
Issue:
21
ISSN:
0021-9606
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: We consider uniqueness problems for meromorphic inner functions on the upper half-plane. In these problems, we consider spectral data depending partially or fully on the spectrum, derivative values at the spectrum, Clark measure, or the spectrum of the negative of a meromorphic inner function. Moreover, we consider applications of these uniqueness results to inverse spectral theory of canonical Hamiltonian systems and obtain generalizations of the Borg-Levinson two-spectra theorem for canonical Hamiltonian systems and unique determination of a Hamiltonian from its spectral measure under some conditions.
  2. Ruiz, Francisco; Dy, Jennifer; van de Meent, Jan-Willem (Ed.)
    Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and in AI domain applications. Influence diagnostics are powerful statistical tools to identify influential datapoints or subsets of datapoints. We establish finite-sample statistical bounds, as well as computational complexity bounds, for influence functions and approximate maximum influence perturbations using efficient inverse-Hessian-vector product implementations. We illustrate our results with generalized linear models and large attention based models on synthetic and real data. 
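The inverse-Hessian-vector-product formula behind influence functions can be sketched for ridge-regularized linear regression, where the Hessian is available in closed form. All names and numbers below are illustrative, not the authors' implementation:

```python
import numpy as np

# Synthetic regression data (made up for the example).
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Ridge fit: theta solves (X^T X / n + lam I) theta = X^T y / n,
# so H below is exactly the Hessian of the regularized mean loss.
H = X.T @ X / n + lam * np.eye(d)
theta = np.linalg.solve(H, X.T @ y / n)

x_test, y_test = X[0], y[0]
grad_test = (x_test @ theta - y_test) * x_test  # gradient of the test loss

def influence(i):
    """Classic influence of training point i on the test loss:
    I(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i) / n."""
    grad_i = (X[i] @ theta - y[i]) * X[i]
    return -grad_test @ np.linalg.solve(H, grad_i) / n

scores = np.array([influence(i) for i in range(n)])
```

Since H is positive definite, a point's influence on its own loss is never positive: upweighting a training point can only reduce its own loss, which is a quick sanity check on an implementation.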
  3. Abstract: Koopman operators linearize nonlinear dynamical systems, making their spectral information of crucial interest. Numerous algorithms have been developed to approximate these spectral properties, and dynamic mode decomposition (DMD) stands out as the poster child of projection-based methods. Although the Koopman operator itself is linear, the fact that it acts in an infinite-dimensional space of observables poses challenges, including spurious modes, essential spectra, and the verification of Koopman mode decompositions. While recent work has addressed these challenges for deterministic systems, there remains a notable gap in verified DMD methods for stochastic systems, where the Koopman operator measures the expectation of observables. We show that addressing these issues requires going beyond expectations, and we do so by incorporating variance into the Koopman framework. Through an additional DMD-type matrix, we approximate the sum of a squared residual and a variance term, each of which can be approximated individually using batched snapshot data. This allows verified computation of the spectral properties of stochastic Koopman operators, controlling the projection error. We also introduce the concept of variance-pseudospectra to gauge statistical coherency. Finally, we present a suite of convergence results for the spectral information of stochastic Koopman operators. Our study concludes with practical applications using both simulated and experimental data. In neural recordings from awake mice, we demonstrate how variance-pseudospectra can reveal physiologically significant information unavailable to standard expectation-based dynamical models.
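For context, standard expectation-based DMD (the baseline the abstract extends) can be sketched in a few lines on a linear toy system; the variance-aware extension would add a second DMD-type matrix built from batched snapshot data, which is not shown here. The system matrix and noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshots from a linear toy system x_{k+1} = A x_k plus small noise;
# its true eigenvalues are 0.9 +/- 0.2i (modulus sqrt(0.85)).
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((2, 31))
X[:, 0] = [1.0, 0.0]
for k in range(30):
    X[:, k + 1] = A_true @ X[:, k] + 1e-3 * rng.normal(size=2)

# DMD: least-squares fit of a linear map from snapshots to their successors.
X0, X1 = X[:, :-1], X[:, 1:]
K = X1 @ np.linalg.pinv(X0)      # approximate Koopman matrix
eigvals = np.linalg.eigvals(K)   # approximate Koopman eigenvalues
```

On this linear system the recovered eigenvalues land close to the true ones; the challenges the abstract describes arise precisely when the dynamics are nonlinear or stochastic and such a finite matrix only captures a projection of the operator.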
  4. The radio frequency spectral shaper is an essential component in emerging multi-service mobile communications, multiband satellite and radar systems, and future 5G/6G radio frequency systems, where it equalizes spectral unevenness, removes out-of-band noise and interference, and manipulates multi-band signals simultaneously. While simple spectral functions are easy to achieve with either conventional microwave photonic filters or optical-spectrum-to-microwave-spectrum mapping techniques, it is challenging to enable complex spectral shaping functions over tens of GHz of bandwidth and to achieve the point-by-point shaping capability needed in dynamic wireless communications. In this paper, we propose and demonstrate a novel spectral shaping system that uses a two-section algorithm to automatically decompose the target RF response into a series of Gaussian functions and reconstruct the desired RF response with microwave photonic techniques. The devised spectral shaping system can manipulate the spectral function in several bands (S, C, and X) simultaneously with a step resolution as fine as tens of MHz. The resolution limitation in optical spectral processing is mitigated using a discrete convolution technique. Over 10 dynamic and independently adjustable spectral control points are experimentally achieved with the proposed spectral shaper.
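The decomposition of a target response into Gaussian functions can be illustrated with a plain least-squares fit onto a fixed grid of Gaussian basis functions. This is a simplified stand-in for the paper's two-section algorithm, and the frequency range, centers, and widths below are invented for the example:

```python
import numpy as np

# Illustrative target RF magnitude response over 2-12 GHz (spanning the
# S, C, and X bands): two bumps, invented for the demo.
f = np.linspace(2.0, 12.0, 501)
target = np.exp(-((f - 4.0) / 0.5) ** 2) + 0.6 * np.exp(-((f - 9.0) / 1.0) ** 2)

# Fixed grid of Gaussian control points with a shared width (assumption).
centers = np.linspace(2.0, 12.0, 41)
sigma = 0.4
basis = np.exp(-((f[:, None] - centers[None, :]) / sigma) ** 2)

# Least-squares coefficients reconstruct the target as a Gaussian sum.
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
recon = basis @ coeffs
err = np.max(np.abs(recon - target))
```

With basis spacing finer than the target's features, the reconstruction error is small; each coefficient then corresponds to one independently adjustable spectral control point.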
  5. Deep neural networks have revolutionized many real-world applications, due to their flexibility in data fitting and accurate predictions for unseen data. A line of research reveals that neural networks can approximate certain classes of functions to arbitrary accuracy, while the size of the network scales exponentially with the data dimension. Empirical results, however, suggest that networks of moderate size already yield appealing performance. To explain this gap, a common belief is that many data sets exhibit low-dimensional structures and can be modeled as samples near a low-dimensional manifold. In this paper, we prove that neural networks can efficiently approximate functions supported on low-dimensional manifolds. The network size scales exponentially in the approximation error, with an exponent depending on the intrinsic dimension of the data and the smoothness of the function. Our result shows that exploiting low-dimensional data structures can greatly enhance the efficiency of function approximation by neural networks. We also implement a sub-network that assigns input data to their corresponding local neighborhoods, which may be of independent interest.