Title: Moderate deviations inequalities for Gaussian process regression
Abstract: Gaussian process regression is widely used to model an unknown function on a continuous domain by interpolating a discrete set of observed design points. We develop a theoretical framework for proving new moderate deviations inequalities on different types of error probabilities that arise in GP regression. Two specific examples of broad interest are the probability of falsely ordering pairs of points (incorrectly estimating one point as being better than another) and the tail probability of the estimation error at an arbitrary point. Our inequalities connect these probabilities to the mesh norm, which measures how well the design points fill the space.
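A minimal sketch of the quantities the abstract refers to: a GP posterior at a test point, and the Gaussian tail probability of the estimation error there. The kernel, length scale, design points, and test function below are made-up illustration choices, not the paper's setup.

```python
import numpy as np
from math import erf, sqrt

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ls ** 2))

# design points and (noise-free) observations of f(x) = sin(2*pi*x)
X = np.linspace(0.0, 1.0, 5)
y = np.sin(2 * np.pi * X)

jitter = 1e-8                       # numerical stabilizer on the diagonal
K = rbf(X, X) + jitter * np.eye(len(X))
Kinv = np.linalg.inv(K)

def posterior(x):
    """GP posterior mean and variance at a scalar test point x."""
    k = rbf(np.array([x]), X)[0]
    mean = k @ Kinv @ y
    var = 1.0 - k @ Kinv @ k        # prior variance is 1 for this kernel
    return mean, max(var, 0.0)

def tail_prob(x, eps):
    """P(|f(x) - posterior mean| > eps) under the Gaussian posterior."""
    _, var = posterior(x)
    if var == 0.0:
        return 0.0
    z = eps / sqrt(var)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
```

The posterior interpolates the design points (zero error and near-zero variance there), while between design points the variance, and hence the tail probability of the estimation error, grows; the mesh norm controls how far a test point can be from its nearest design point.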
Haley, James; Farguell Caus, Angel; Kochanski, Adam K.; Schranz, Sher; Mandel, Jan
(, Advances in Forest Fire Research 2018)
Viegas, Domingos Xavier
(Ed.)
Data likelihood of fire detection is the probability of the observed detection outcome given the state of the fire spread model. We derive fire detection likelihood of satellite data as a function of the fire arrival time on the model grid. The data likelihood is constructed by a combination of the burn model, the logistic regression of the active fires detections, and the Gaussian distribution of the geolocation error. The use of the data likelihood is then demonstrated by an estimation of the ignition point of a wildland fire by the maximization of the likelihood of MODIS and VIIRS data over multiple possible ignition points.
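A stripped-down sketch of the likelihood-maximization step described above. Everything here is a made-up toy, not the authors' model: a constant radial spread rate stands in for the burn model, a logistic time profile stands in for the detection regression, and the Gaussian geolocation term is omitted for brevity.

```python
import numpy as np

# made-up observations: two satellite fire detections and one clear pixel
detections = [(np.array([6.0, 4.0]), 3.0),   # (location, detection time)
              (np.array([7.0, 5.0]), 4.0)]
clear_pixel = (np.array([1.0, 1.0]), 4.0)    # observed as not burning at t = 4
rate = 1.5                                   # assumed constant spread rate
tau = 0.5                                    # logistic time scale

def log_p_detect(ign, xy, t):
    """Log of the logistic detection probability given ignition point `ign`."""
    t_arrival = np.linalg.norm(xy - ign) / rate   # radial fire arrival time
    z = (t - t_arrival) / tau
    return -np.logaddexp(0.0, -z)                 # log sigmoid(z), stable

def log_likelihood(ign):
    ll = sum(log_p_detect(ign, xy, t) for xy, t in detections)
    xy, t = clear_pixel
    t_arrival = np.linalg.norm(xy - ign) / rate
    z = (t - t_arrival) / tau
    ll += -np.logaddexp(0.0, z)                   # log(1 - sigmoid(z))
    return ll

# maximum-likelihood ignition point over a candidate grid
gx, gy = np.meshgrid(np.linspace(0, 10, 41), np.linspace(0, 10, 41))
cands = np.stack([gx.ravel(), gy.ravel()], axis=1)
best = cands[np.argmax([log_likelihood(c) for c in cands])]
```

The detections pull the estimated ignition point toward them (early arrival raises their likelihood), while the clear pixel pushes it away, which is the basic tension the full MODIS/VIIRS likelihood resolves over many pixels.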
Stewart, Jonathan R
(, Journal of Multivariate Analysis)
One of the first steps in applications of statistical network analysis is frequently to produce summary charts of important features of the network. Many of these features take the form of sequences of graph statistics counting the number of realized events in the network, examples of which are degree distributions, edgewise shared partner distributions, and more. We provide conditions under which the empirical distributions of sequences of graph statistics are consistent in the L-infinity-norm in settings where edges in the network are dependent. We accomplish this task by deriving concentration inequalities that bound probabilities of deviations of graph statistics from the expected value under weak dependence conditions. We apply our concentration inequalities to empirical distributions of sequences of graph statistics and derive non-asymptotic bounds on the L-infinity-error which hold with high probability. Our non-asymptotic results are then extended to demonstrate uniform convergence almost surely in selected examples. We illustrate theoretical results through examples, simulation studies, and an application.
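A toy numerical illustration of the object being bounded: the sup-norm (L-infinity) distance between an empirical degree distribution and its expectation. For simplicity the edges here are independent (Erdős–Rényi), whereas the paper's concentration results allow weak dependence.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, p = 200, 0.1

# one Erdos-Renyi graph: independent coin flips on the upper triangle
A = np.triu(rng.random((n, n)) < p, k=1)
deg = A.sum(0) + A.sum(1)            # degree of each of the n nodes

# empirical degree distribution over degrees 0..n-1
emp = np.bincount(deg, minlength=n) / n

# expected degree distribution: each degree is Binomial(n - 1, p)
exp_dist = np.array([comb(n - 1, k) * p ** k * (1 - p) ** (n - 1 - k)
                     for k in range(n)])

# the quantity the concentration inequalities control
linf_error = np.abs(emp - exp_dist).max()
```

Even though the n degree counts share edges (and are therefore dependent), the empirical distribution tracks the Binomial curve closely in sup norm, which is the non-asymptotic, high-probability behavior the paper quantifies.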
Wang, David Z.; Gauthier, Aidan Q.; Siegmund, Ashley E.; Hunt, Katharine L.
(, Physical Chemistry Chemical Physics)
This work provides quantitative tests of the extent of violation of two inequalities applicable to qubits coupled into Bell states, using IBM's publicly accessible quantum computers. Violations of the inequalities are well established. Our purpose is not to test the inequalities, but rather to determine how well quantum mechanical predictions can be reproduced on quantum computers, given their current fault rates. We present results for the spin projections of two entangled qubits, along three axes A , B , and C , with a fixed angle θ between A and B and a range of angles θ ′ between B and C . For any classical object that can be characterized by three observables with two possible values, inequalities govern relationships among the probabilities of outcomes for the observables, taken pairwise. From set theory, these inequalities must be satisfied by all such classical objects; but quantum systems may violate the inequalities. We have detected clear-cut violations of one inequality in runs on IBM's publicly accessible quantum computers. The Clauser–Horne–Shimony–Holt (CHSH) inequality governs a linear combination S of expectation values of products of spin projections, taken pairwise. Finding S > 2 rules out local, hidden variable theories for entangled quantum systems. We obtained values of S greater than 2 in our runs prior to error mitigation. To reduce the quantitative errors, we used a modification of the error-mitigation procedure in the IBM documentation. We prepared a pair of qubits in the state |00〉, found the probabilities to observe the states |00〉, |01〉, |10〉, and |11〉 in multiple runs, and used that information to construct the first column of an error matrix M . We repeated this procedure for states prepared as |01〉, |10〉, and |11〉 to construct the full matrix M , whose inverse is the filtering matrix. 
After applying filtering matrices to our averaged outcomes, we have found good quantitative agreement between the quantum computer output and the quantum mechanical predictions for the extent of violation of both inequalities as functions of θ ′.
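The filtering-matrix step can be sketched numerically. The entries of M below are made-up illustration values, not IBM calibration data: each column j holds the probabilities of observing each basis state when state j was prepared, and applying the inverse of M undoes the readout error.

```python
import numpy as np

# hypothetical 2-qubit readout-error model: M[i, j] is the probability of
# observing basis state i when basis state j was prepared
states = ["00", "01", "10", "11"]
M = np.array([
    [0.92, 0.05, 0.04, 0.01],
    [0.04, 0.90, 0.01, 0.05],
    [0.03, 0.01, 0.91, 0.04],
    [0.01, 0.04, 0.04, 0.90],
])                                   # each column sums to 1

# ideal outcome probabilities for the Bell state (|00> + |11>)/sqrt(2)
p_ideal = np.array([0.5, 0.0, 0.0, 0.5])

# what the noisy device would report, on average
p_raw = M @ p_ideal

# error mitigation: apply the filtering matrix M^{-1}
p_mitigated = np.linalg.inv(M) @ p_raw
```

In this idealized linear model the mitigation is exact; on hardware, M is itself estimated from finite counts (as in the four preparation runs described above), so the filtered probabilities only approximately recover the quantum mechanical predictions.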
Abstract If four people with Gaussian‐distributed heights stand at Gaussian positions on the plane, the probability that there are exactly two people whose height is above the average of the four is exactly the same as the probability that they stand in convex position; both probabilities are . We show that this is a special case of a more general phenomenon: The problem of determining the position of the mean among the order statistics of Gaussian random points on the real line (Youden's demon problem) is the same as a natural generalization of Sylvester's four point problem to Gaussian points in . Our main tool is the observation that the Gale dual of independent samples in itself can be taken to be a set of independent points (translated to have barycenter at the origin) when the distribution of the points is Gaussian.
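The stated coincidence can be checked by simulation. This Monte Carlo sketch estimates both probabilities for standard Gaussian samples; the convex-position test (no point lies inside the triangle of the other three) and the sample size are illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Youden's demon side: exactly two of four Gaussian heights above their mean
h = rng.standard_normal((n, 4))
above = (h > h.mean(axis=1, keepdims=True)).sum(axis=1)
p_heights = np.mean(above == 2)

# Sylvester side: four Gaussian points in the plane are in convex position
# exactly when no point lies inside the triangle formed by the other three
def _sign(p, a, b):
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def in_triangle(p, a, b, c):
    s1, s2, s3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
    return not ((s1 < 0 or s2 < 0 or s3 < 0) and (s1 > 0 or s2 > 0 or s3 > 0))

def convex_position(q):
    return not any(
        in_triangle(q[i], *(q[j] for j in range(4) if j != i)) for i in range(4)
    )

pts = rng.standard_normal((n, 4, 2))
p_convex = np.mean([convex_position(q) for q in pts])
```

With this sample size the two estimates agree to within Monte Carlo error, consistent with the exact equality claimed above.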
Kiatsupaibul, Seksan; Smith, Robert L.; Zabinsky, Zelda B.
(, Operations Research)
Optimizing the performance of complex systems modeled by stochastic computer simulations is a challenging task, partly because of the lack of structural properties (e.g., convexity). This challenge is magnified by the presence of random error whereby an adaptive algorithm searching for better designs can at times mistakenly accept an inferior design. In contrast to performing multiple simulations at a design point to estimate the performance of the design, we propose a framework for adaptive search algorithms that executes a single simulation for each design point encountered. Here the estimation errors are reduced by averaging the performances from previously evaluated designs drawn from a shrinking ball around the current design point. We show under mild regularity conditions for continuous design spaces that the accumulated errors, although dependent, form a martingale process, and hence, by the strong law of large numbers for martingales, the average errors converge to zero as the algorithm proceeds. This class of algorithms is shown to converge to a global optimum with probability one. By employing a shrinking ball approach with single observations, an adaptive search algorithm can simultaneously improve the estimates of performance while exploring new and potentially better design points. Numerical experiments offer empirical support for this paradigm of single observation simulation optimization.
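A stripped-down version of the shrinking-ball idea: every design point receives exactly one noisy simulation, and performance at a point is estimated by averaging observations inside a shrinking ball around it. The quadratic objective, pure random search in place of an adaptive sampler, and the radius schedule are all illustration choices, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(x):
    """One noisy simulation replication of the objective (x - 0.3)^2."""
    return (x - 0.3) ** 2 + rng.normal(scale=0.3)

xs, ys = [], []                      # exactly one observation per design point
for _ in range(4000):
    x = rng.uniform(0.0, 1.0)        # stand-in for an adaptive sampler
    xs.append(x)
    ys.append(simulate(x))

X, Y = np.array(xs), np.array(ys)
r = 0.5 * len(X) ** (-0.25)          # shrinking-ball radius after 4000 points

def estimate(x0):
    """Average the single observations inside the ball around x0."""
    mask = np.abs(X - x0) <= r
    return Y[mask].mean()

grid = np.linspace(0.05, 0.95, 91)
best_x = grid[np.argmin([estimate(g) for g in grid])]
```

As the ball shrinks, its bias (averaging the objective over nearby points) vanishes, while the growing number of points inside it averages out the simulation noise; this is the trade-off behind the martingale convergence argument in the paper.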
Li, Jialin, and Ryzhov, Ilya O. Moderate deviations inequalities for Gaussian process regression. Retrieved from https://par.nsf.gov/biblio/10435005. Journal of Applied Probability. Web. doi:10.1017/jpr.2023.30.
Li, Jialin, & Ryzhov, Ilya O. Moderate deviations inequalities for Gaussian process regression. Journal of Applied Probability, (). Retrieved from https://par.nsf.gov/biblio/10435005. https://doi.org/10.1017/jpr.2023.30
@article{osti_10435005,
title = {Moderate deviations inequalities for Gaussian process regression},
url = {https://par.nsf.gov/biblio/10435005},
DOI = {10.1017/jpr.2023.30},
abstractNote = {Abstract Gaussian process regression is widely used to model an unknown function on a continuous domain by interpolating a discrete set of observed design points. We develop a theoretical framework for proving new moderate deviations inequalities on different types of error probabilities that arise in GP regression. Two specific examples of broad interest are the probability of falsely ordering pairs of points (incorrectly estimating one point as being better than another) and the tail probability of the estimation error at an arbitrary point. Our inequalities connect these probabilities to the mesh norm, which measures how well the design points fill the space.},
journal = {Journal of Applied Probability},
author = {Li, Jialin and Ryzhov, Ilya O.},
}