Title: Stable Determination of Time-Dependent Collision Kernel in the Nonlinear Boltzmann Equation
We consider an inverse problem for the nonlinear Boltzmann equation with a time-dependent kernel in dimensions n ≥ 2. We establish a logarithm-type stability result for the collision kernel from measurements under certain additional conditions. A uniqueness result is derived as an immediate consequence of the stability result. Our approach relies on second-order linearization and multivariate finite differences, as well as the stability of the light-ray transform.
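
For intuition on the linearization step, here is a toy numerical sketch (the scalar map F is our own stand-in, not the paper's measurement operator) of how a multivariate finite difference isolates the second-order term of a nonlinear map:

    # The mixed finite difference
    #   D(e1, e2) = [F(e1*g1 + e2*g2) - F(e1*g1) - F(e2*g2) + F(0)] / (e1 * e2)
    # converges to the second-order cross term of a smooth map F as e1, e2 -> 0.

    def F(x):
        return 2.0 * x + 3.0 * x ** 2 + x ** 3   # linear + quadratic + cubic parts

    def mixed_difference(g1, g2, e1, e2):
        return (F(e1 * g1 + e2 * g2) - F(e1 * g1) - F(e2 * g2) + F(0.0)) / (e1 * e2)

    g1, g2 = 1.0, 1.0
    for e in (1e-1, 1e-2, 1e-3):
        # the exact limit is the mixed second derivative 2 * 3.0 * g1 * g2 = 6.0
        print(f"eps={e:.0e}  mixed difference = {mixed_difference(g1, g2, e, e):.6f}")

Roughly, applying such a difference to the measurement map with two small sources isolates the quadratic collision term, and the kernel is then recovered from that term via the stability of the light-ray transform.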
Award ID(s):
2006731, 2306221
PAR ID:
10555238
Author(s) / Creator(s):
Publisher / Repository:
SIAM Journal on Applied Mathematics
Date Published:
Journal Name:
SIAM Journal on Applied Mathematics
Volume:
84
Issue:
5
ISSN:
0036-1399
Page Range / eLocation ID:
1937 to 1956
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Many supervised learning problems involve high-dimensional data such as images, text, or graphs. In order to make efficient use of data, it is often useful to leverage certain geometric priors in the problem at hand, such as invariance to translations, permutation subgroups, or stability to small deformations. We study the sample complexity of learning problems where the target function presents such invariance and stability properties, by considering spherical harmonic decompositions of such functions on the sphere. We provide non-parametric rates of convergence for kernel methods, and show improvements in sample complexity by a factor equal to the size of the group when using an invariant kernel over the group, compared to the corresponding non-invariant kernel. These improvements are valid when the sample size is large enough, with an asymptotic behavior that depends on spectral properties of the group. Finally, these gains are extended beyond invariance groups to also cover geometric stability to small deformations, modeled here as subsets (not necessarily subgroups) of permutations.
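    A concrete illustration of the invariant-kernel construction: averaging a base kernel over a finite group makes it invariant. The numpy sketch below uses an RBF base kernel and cyclic shifts as the group; both are our own stand-ins, not the paper's setting.

        import numpy as np

        def rbf(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def invariant_kernel(x, y, gamma=1.0):
            # average over the orbit of y under cyclic coordinate shifts
            return np.mean([rbf(x, np.roll(y, s), gamma) for s in range(len(y))])

        rng = np.random.default_rng(0)
        x, y = rng.normal(size=5), rng.normal(size=5)
        # invariance check: shifting y leaves the kernel value unchanged
        print(invariant_kernel(x, y), invariant_kernel(x, np.roll(y, 2)))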
  2. We study integral operators on the space of square-integrable functions from a compact set, X, to a separable Hilbert space, H. The kernel of such an operator takes values in the ideal of Hilbert–Schmidt operators on H. We establish regularity conditions on the kernel under which the associated integral operator is trace class. First, we extend Mercer's theorem to operator-valued kernels by proving that a continuous, nonnegative-definite, Hermitian symmetric kernel defines a trace class integral operator on L²(X; H) under an additional assumption. Second, we show that a general operator-valued kernel that is defined on a compact set and that is Hölder continuous with Hölder exponent greater than one half is trace class, provided that the operator-valued kernel is essentially bounded as a mapping into the space of trace class operators on H. Finally, when dim H < ∞, we show that an analogous result also holds for matrix-valued kernels on the real line, provided that an additional exponential decay assumption holds.
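    In the scalar case this result generalizes, Mercer's theorem makes the trace-class property concrete: the operator's eigenvalues are summable and their sum equals the integral of the kernel over the diagonal. A quick numerical check with the Brownian-motion kernel k(s, t) = min(s, t) on [0, 1] (our choice of example; its eigenvalues are 1/((k − 1/2)²π²)):

        import numpy as np

        n = 2000
        x = (np.arange(n) + 0.5) / n        # midpoint grid on [0, 1]
        K = np.minimum.outer(x, x) / n      # Nystrom discretization of the operator
        eig = np.linalg.eigvalsh(K)[::-1]   # eigenvalues, largest first

        analytic = 1.0 / (((np.arange(1, 6) - 0.5) ** 2) * np.pi ** 2)
        print("numeric :", eig[:5])
        print("analytic:", analytic)
        # trace-class check: eigenvalue sum vs integral of the diagonal, both 1/2
        print("eigenvalue sum:", eig.sum(), " int_0^1 k(t, t) dt:", np.mean(x))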
  3. We present Neural Kernel Fields: a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression. Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points, and can reconstruct shape categories outside the training set with almost no drop in accuracy. The core insight of our approach is that kernel methods are extremely effective for reconstructing shapes when the chosen kernel has an appropriate inductive bias. We thus factor the problem of shape reconstruction into two parts: (1) a backbone neural network which learns kernel parameters from data, and (2) a kernel ridge regression that fits the input points on the fly by solving a simple positive definite linear system using the learned kernel. As a result of this factorization, our reconstruction gains the benefits of data-driven methods under sparse point density while maintaining interpolatory behavior, which converges to the ground truth shape as input sampling density increases. Our experiments demonstrate a strong generalization capability to objects outside the train-set category and scanned scenes. Source code and pretrained models are available at https://nv-tlabs.github.io/nkf.
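    The second stage is one positive definite linear solve. A bare-bones kernel ridge regression sketch (numpy and a fixed RBF kernel are placeholders here; in the paper the kernel parameters come from the learned backbone, which we omit):

        import numpy as np

        def rbf_matrix(A, B, gamma=10.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit(X, y, lam=1e-6):
            # solve the positive definite system (K + lam * I) alpha = y
            return np.linalg.solve(rbf_matrix(X, X) + lam * np.eye(len(X)), y)

        def predict(Xq, X, alpha):
            return rbf_matrix(Xq, X) @ alpha

        # toy 1D data: samples of the signed-distance-like function f(x) = |x| - 0.5
        X = np.random.default_rng(1).uniform(-1.0, 1.0, size=(50, 1))
        alpha = fit(X, np.abs(X[:, 0]) - 0.5)
        print(predict(np.array([[0.0], [0.75]]), X, alpha))   # roughly [-0.5, 0.25]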
  4. Random feature maps are used to decrease the computational cost of kernel machines in large-scale problems. The Mondrian kernel is one such example of a fast random feature approximation of the Laplace kernel, generated by a computationally efficient hierarchical random partition of the input space known as the Mondrian process. In this work, we study a variation of this random feature map by applying a uniform random rotation to the input space before running the Mondrian process to approximate a kernel that is invariant under rotations. We obtain a closed-form expression for the isotropic kernel that is approximated, as well as a uniform convergence rate of the uniformly rotated Mondrian kernel to this limit. To this end, we utilize techniques from the theory of stationary random tessellations in stochastic geometry and prove a new result on the geometry of the typical cell of the superposition of uniformly rotated Mondrian tessellations. Finally, we test the empirical performance of this random feature map on both synthetic and real-world datasets, demonstrating its improved performance over the Mondrian kernel on a dataset that is debiased from the standard coordinate axes.
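    Two ingredients of this construction fit in a few lines: Poisson cuts on a line reproduce the Laplace kernel (two points share a cell exactly when no cut lands between them), and a Haar-random orthogonal matrix supplies the uniform rotation. The simplified numpy sketch below is our own, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def haar_orthogonal(d):
            # uniform random orthogonal matrix: QR of a Gaussian matrix, sign-corrected
            Q, R = np.linalg.qr(rng.normal(size=(d, d)))
            return Q * np.sign(np.diag(R))

        def same_cell_prob(x, y, lam=1.0, trials=200_000):
            # the number of cuts between x and y is Poisson(lam * |x - y|),
            # so they share a cell with probability exp(-lam * |x - y|)
            return np.mean(rng.poisson(lam * abs(x - y), size=trials) == 0)

        print("empirical:", same_cell_prob(0.2, 1.1), " exact:", np.exp(-0.9))
        Q = haar_orthogonal(3)
        print("orthogonality check:", np.allclose(Q @ Q.T, np.eye(3)))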