Abstract: Fourier analysis is gaining popularity in image synthesis as a tool for the analysis of error in Monte Carlo (MC) integration. Still, existing tools are only able to analyse convergence under simplifying assumptions (such as randomized shifts) which are not applied in practice during rendering. We reformulate the expressions for bias and variance of sampling-based integrators to unify non-uniform sample distributions [importance sampling (IS)] as well as correlations between samples while respecting finite sampling domains. Our unified formulation hints at fundamental limitations of Fourier-based tools in performing variance analysis for MC integration. At the same time, it reveals that, when combined with correlated sampling, IS can impact convergence rate by introducing or inhibiting discontinuities in the integrand. We demonstrate that the convergence of multiple importance sampling (MIS) is determined by the strategy which converges slowest and propose several simple approaches to overcome this limitation. We show that smoothing light boundaries (as commonly done in production to reduce variance) can improve (M)IS convergence (at a cost of introducing a small amount of bias) since it removes C0 discontinuities within the integration domain. We also propose practical integrand- and sample-mirroring approaches which cancel the impact of boundary discontinuities on the convergence rate of estimators.
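For readers who want a concrete baseline for the MIS discussion above, the following minimal Python sketch estimates a toy 1-D integral with a C0 kink using two sampling strategies combined by the balance heuristic. The integrand, the two densities, and all constants are illustrative choices only; they are not the paper's Fourier-domain analysis or its mirroring construction.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's method): combine two sampling
# strategies with the balance heuristic to estimate a 1-D integral whose
# integrand has a C0 discontinuity inside the domain.
rng = np.random.default_rng(0)

def f(x):
    # toy integrand on [0, 1] with a kink at x = 0.6; exact integral = 0.18
    return np.where(x < 0.6, x, 0.0)

def pdf_a(x):
    return np.ones_like(x)            # strategy A: uniform density on [0, 1]

def pdf_b(x):
    return 2.0 * x                    # strategy B: linear density p(x) = 2x

def mis_estimate(n):
    xa = rng.random(n)                # draws from strategy A
    xb = np.sqrt(rng.random(n))       # inverse-CDF draws from strategy B
    wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))   # balance-heuristic weights
    wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
    return np.mean(wa * f(xa) / pdf_a(xa)) + np.mean(wb * f(xb) / pdf_b(xb))

print(mis_estimate(100_000))          # should be close to 0.18
```

Swapping different densities into pdf_a/pdf_b makes it easy to compare how each strategy, and their combination, copes with the discontinuity at x = 0.6.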
Gaussian Product Sampling for Rendering Layered Materials
Abstract: To increase diversity and realism, surface bidirectional scattering distribution functions (BSDFs) are often modelled as consisting of multiple layers, but accurately evaluating layered BSDFs while accounting for all light transport paths is a challenging problem. Recently, Guo et al. [GHZ18] proposed an accurate and general position-free Monte Carlo method, but this method introduces variance that leads to longer render time compared to non-stochastic layered models. We improve the previous work by presenting two new sampling strategies, pair-product sampling and multiple-product sampling. Our new methods better take advantage of the layered structure and reduce variance compared to the conventional approach of sequentially sampling one BSDF at a time. Our pair-product sampling strategy importance samples the product of two BSDFs from a pair of adjacent layers. We further generalize this to multiple-product sampling, which importance samples the product of a chain of three or more BSDFs. In order to compute these products, we developed a new approximate Gaussian representation of individual layer BSDFs. This representation incorporates spatially varying material properties as parameters so that our techniques can support an arbitrary number of textured layers. Compared to previous Monte Carlo layering approaches, our results demonstrate substantial variance reduction in rendering isotropic layered surfaces.
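As a rough illustration of the pair-product idea, the sketch below works in one dimension: each of two adjacent layer lobes is fitted with a Gaussian, the closed-form product of the two Gaussians (itself a Gaussian) is importance sampled, and the mismatch between the Gaussian product and the true lobe product is absorbed into the Monte Carlo weight. All functions and constants here (the Gaussian fits, the stand-in true_lobe_product) are hypothetical placeholders; the paper's representation is built over directions and spatially varying, textured layer parameters.

```python
import numpy as np

# 1-D caricature of pair-product sampling: approximate two adjacent layer lobes
# by Gaussians, sample their analytic product, and weight by the ratio of the
# true lobe product to the Gaussian-product density.  Illustrative sketch only.
rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def product_params(mu1, s1, mu2, s2):
    """Mean/std of the (unnormalised) product of two Gaussian lobes."""
    var = 1.0 / (1.0 / s1**2 + 1.0 / s2**2)
    mu = var * (mu1 / s1**2 + mu2 / s2**2)
    return mu, np.sqrt(var)

# Hypothetical Gaussian fits to two adjacent layer lobes (illustrative numbers)
mu1, s1 = 0.20, 0.15
mu2, s2 = 0.35, 0.25

def true_lobe_product(x):
    # stand-in for the actual product of the two layer BSDFs being estimated
    return gaussian_pdf(x, mu1, s1) * gaussian_pdf(x, mu2, s2) * (1.0 + 0.05 * np.cos(8.0 * x))

mu_p, s_p = product_params(mu1, s1, mu2, s2)
x = rng.normal(mu_p, s_p, size=100_000)                 # sample the Gaussian product
weights = true_lobe_product(x) / gaussian_pdf(x, mu_p, s_p)
print(f"pair-product estimate of the integral: {weights.mean():.5f}")
```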
- PAR ID: 10136771
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Computer Graphics Forum
- Volume: 39
- Issue: 1
- ISSN: 0167-7055
- Page Range / eLocation ID: p. 420-435
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Meila, Marina; Zhang, Tong (Ed.) Black-box variational inference algorithms use stochastic sampling to analyze diverse statistical models, like those expressed in probabilistic programming languages, without model-specific derivations. While the popular score-function estimator computes unbiased gradient estimates, its variance is often unacceptably large, especially in models with discrete latent variables. We propose a stochastic natural gradient estimator that is as broadly applicable and unbiased, but improves efficiency by exploiting the curvature of the variational bound, and provably reduces variance by marginalizing discrete latent variables. Our marginalized stochastic natural gradients have intriguing connections to classic coordinate ascent variational inference, but allow parallel updates of variational parameters, and provide superior convergence guarantees relative to naive Monte Carlo approximations. We integrate our method with the probabilistic programming language Pyro and evaluate real-world models of documents, images, networks, and crowd-sourcing. Compared to score-function estimators, we require far fewer Monte Carlo samples and consistently converge orders of magnitude faster.
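To make the contrast concrete, here is a small numpy sketch (a toy with one categorical latent variable, not the paper's Pyro-integrated natural-gradient estimator): the score-function estimator draws samples of the discrete latent, while the marginalized gradient sums over all of its states and is therefore exact for this term.

```python
import numpy as np

# Toy setup: a single discrete latent z in {0,...,K-1}, a fixed joint
# log-density log p(x, z) (placeholder numbers), and q_phi(z) = softmax(phi).
K = 4
log_joint = np.log(np.array([0.10, 0.50, 0.30, 0.10]))   # stand-in for log p(x, z)

def softmax(phi):
    e = np.exp(phi - phi.max())
    return e / e.sum()

def score_function_grad(phi, n_samples=1000, rng=np.random.default_rng(0)):
    """Unbiased but noisy (REINFORCE-style) estimate of d ELBO / d phi."""
    q = softmax(phi)
    z = rng.choice(K, size=n_samples, p=q)
    f = log_joint[z] - np.log(q[z])                # per-sample ELBO integrand
    score = np.eye(K)[z] - q                       # d log q(z) / d phi for softmax
    return (f[:, None] * score).mean(axis=0)

def marginalized_grad(phi):
    """Exact (zero-variance) gradient obtained by summing over all K states."""
    q = softmax(phi)
    f = log_joint - np.log(q)
    jac = np.diag(q) - np.outer(q, q)              # Jacobian of the softmax
    return jac @ (f - 1.0)                         # d/d phi of sum_z q(z) f(z)

phi = np.zeros(K)
print("sampled:     ", score_function_grad(phi))
print("marginalized:", marginalized_grad(phi))
```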
-
Abstract: Quantiles are important quantities in reliability analysis, as they are related to the resistance levels that define failure events. This study develops a computationally efficient sampling method for estimating extreme quantiles using stochastic black-box computer models. Importance sampling has been widely employed as a powerful variance-reduction technique to reduce estimation uncertainty and improve computational efficiency in many reliability studies. However, when applied to quantile estimation, importance sampling faces challenges, because a good choice of the importance sampling density relies on information about the unknown quantile. We propose an adaptive method that refines the importance sampling density parameter toward the unknown target quantile value along the iterations. The proposed adaptive scheme allows us to use the simulation outcomes obtained in previous iterations to steer the simulation process toward important input areas. We prove some convergence properties of the proposed method and show that our approach can achieve variance reduction over crude Monte Carlo sampling. We demonstrate its estimation efficiency through numerical examples and a wind turbine case study.
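The sketch below illustrates the flavour of such an adaptive scheme on a toy problem; it is not the authors' algorithm and carries none of its guarantees. The p-quantile of g(X) for X ~ N(0, 1) is estimated from self-normalised importance weights, and the Gaussian proposal mean is nudged toward the current quantile estimate at each iteration. The response g, the damping factor, and the sample sizes are all illustrative.

```python
import numpy as np
from scipy.stats import norm

def g(x):                                    # hypothetical black-box response
    return x + 0.1 * np.sin(5.0 * x)

def weighted_quantile(y, w, p):
    """p-quantile of y under the nominal input density, from weighted samples."""
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / np.sum(w)    # self-normalised estimate of P(g(X) <= t)
    return y[order][np.searchsorted(cdf, p)]

rng = np.random.default_rng(1)
p, theta = 0.99, 0.0                         # target level; proposal mean
for it in range(5):
    x = rng.normal(theta, 1.0, size=5_000)   # sample the current proposal N(theta, 1)
    w = norm.pdf(x) / norm.pdf(x, loc=theta) # likelihood ratio back to N(0, 1)
    q_hat = weighted_quantile(g(x), w, p)
    theta = 0.5 * (theta + q_hat)            # damped move toward the quantile estimate
    print(f"iteration {it}: quantile estimate {q_hat:.3f}")
```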
-
In computational mechanics, multiple models are often present to describe a physical system. While Bayesian model selection is a helpful tool to compare these models using measurement data, it requires the computationally expensive estimation of a multidimensional integral — known as the marginal likelihood or as the model evidence (i.e., the probability of observing the measured data given the model) — over the multidimensional parameter domain. This study presents efficient approaches for estimating this marginal likelihood by transforming it into a one-dimensional integral that is subsequently evaluated using a quadrature rule at multiple adaptively-chosen iso-likelihood contour levels. Three different algorithms are proposed to estimate the probability mass at each adapted likelihood level using samples from importance sampling, stratified sampling, and Markov chain Monte Carlo (MCMC) sampling, respectively. The proposed approach is illustrated — with comparisons to Monte Carlo, nested, and MultiNest sampling — through four numerical examples. The first, an elementary example, shows the accuracies of the three proposed algorithms when the exact value of the marginal likelihood is known. The second example uses an 11-story building subjected to an earthquake excitation, with an uncertain hysteretic base isolation layer and two models to describe the isolation layer behavior. The third example considers flow past a cylinder when the inlet velocity is uncertain. Based on these examples, the method with stratified sampling is by far the most accurate and efficient method for complex model behavior in low dimension, particularly considering that this method can be implemented to exploit parallel computation. In the fourth example, the proposed approach is applied to heat conduction in an inhomogeneous plate with uncertain thermal conductivity modeled through a 100-degree-of-freedom Karhunen–Loève expansion. The results indicate that MultiNest cannot efficiently handle the high-dimensional parameter space, whereas the proposed MCMC-based method more accurately and efficiently explores the parameter space. The marginal likelihood results for the last three examples — when compared with the results obtained from standard Monte Carlo sampling, nested sampling, and the MultiNest algorithm — show good agreement.
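To show the nature of the one-dimensional reformulation, note that for a non-negative likelihood the evidence can be written as Z = E_prior[L(θ)] = ∫ P_prior(L(θ) > λ) dλ, a layer-cake integral over likelihood levels that a quadrature rule can evaluate. The sketch below is a toy version only: it uses plain prior Monte Carlo and uniformly spaced levels rather than the paper's importance, stratified, or MCMC estimators with adaptive levels, and the conjugate Gaussian example is chosen because its exact evidence is known.

```python
import numpy as np
from scipy.stats import norm

# Toy sketch of the one-dimensional reformulation:
#   Z = E_prior[L(theta)] = integral_0^{L_max} P_prior(L(theta) > lambda) d(lambda).
# Level probabilities come from plain prior Monte Carlo and the levels are
# uniformly spaced; these are illustrative choices, not the paper's scheme.
rng = np.random.default_rng(0)

# Conjugate 1-D example: prior theta ~ N(0, 1), one datum y ~ N(theta, 1)
y = 0.7
theta = rng.normal(0.0, 1.0, size=200_000)
L = norm.pdf(y, loc=theta, scale=1.0)                  # likelihood at each prior draw

levels = np.linspace(0.0, L.max(), 200)                # iso-likelihood levels
mass = np.array([(L > lam).mean() for lam in levels])  # P_prior(L > lambda)
Z_hat = np.sum(0.5 * (mass[1:] + mass[:-1]) * np.diff(levels))  # trapezoid rule

Z_exact = norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))     # analytic marginal likelihood
print(f"estimated evidence {Z_hat:.5f}  (exact {Z_exact:.5f})")
```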
-
Electron probe microanalysis is a nondestructive technique widely used to determine the elemental composition of bulk samples. This was extended to layered specimens with the development of appropriate software. The traditional quantification method requires the use of matrix correction procedures based upon models of the ionization depth distribution, the so-called ϕ(ρz) distribution. Most of these models have led to commercial quantification programs, but only a few of them allow the quantification of layered specimens. Therefore, we developed BadgerFilm, a free open-source thin-film program available to the general public. This program implements a documented ϕ(ρz) model as well as algorithms to calculate fluorescence in bulk and thin-film samples. Part 1 of the present work aims at describing the operation of the implemented ϕ(ρz) distribution model and validating its implementation against experimental measurements and Monte Carlo simulations on bulk samples. The program has the ability to predict absolute X-ray intensities that can be directly compared to Monte Carlo simulations. We demonstrate that the implemented model works very well for bulk materials. As will be shown in Part 2, BadgerFilm predictions for thin-film specimens are also in good agreement with experimental and Monte Carlo results.
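For readers unfamiliar with ϕ(ρz) quantification, the sketch below shows the basic calculation such a model feeds into: the emitted characteristic intensity is the depth distribution attenuated by absorption along the take-off path, emitted ∝ ∫ ϕ(ρz) exp(−χ ρz) d(ρz), with χ combining the mass absorption coefficient and the detector take-off angle. The specific ϕ(ρz) shape and all numerical parameters here are placeholders; they are not the model implemented in BadgerFilm.

```python
import numpy as np

# Sketch of the generated vs. emitted intensity calculation that a phi(rho-z)
# model feeds into.  The placeholder depth distribution and all constants below
# are illustrative only; they are not BadgerFilm's documented model.
def phi(rz, surface=1.3, peak=2.0, alpha=6.0e3, beta=4.0e4):
    """Placeholder ionization depth distribution vs. mass depth rz (g/cm^2)."""
    return peak * np.exp(-(alpha * rz) ** 2) * (1.0 - (1.0 - surface / peak) * np.exp(-beta * rz))

take_off_deg = 40.0                 # detector take-off angle
mac = 3500.0                        # mass absorption coefficient, cm^2/g (illustrative)
chi = mac / np.sin(np.radians(take_off_deg))

rz = np.linspace(0.0, 1.5e-3, 4000)                    # mass-depth grid, g/cm^2
drz = rz[1] - rz[0]
generated = np.sum(phi(rz)) * drz                      # integral of phi over depth
emitted = np.sum(phi(rz) * np.exp(-chi * rz)) * drz    # same integral with absorption
print(f"absorption factor f(chi) = emitted/generated = {emitted / generated:.3f}")
```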