Partial differential equations (PDEs) with spatially varying coefficients arise throughout science and engineering, modeling rich heterogeneous material behavior. Yet conventional PDE solvers struggle with the immense complexity found in nature, since they must first discretize the problem, leading to spatial aliasing and global meshing/sampling that is costly and error-prone. We describe a method that approximates neither the domain geometry, the problem data, nor the solution space, providing the exact solution (in expectation) even for problems with extremely detailed geometry and intricate coefficients. Our main contribution is to extend the walk on spheres (WoS) algorithm from constant- to variable-coefficient problems, by drawing on techniques from volumetric rendering. In particular, an approach inspired by null-scattering yields unbiased Monte Carlo estimators for a large class of 2nd order elliptic PDEs, which share many attractive features with Monte Carlo rendering: no meshing, trivial parallelism, and the ability to evaluate the solution at any point without solving a global system of equations.
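The null-scattering technique referenced above originates in volumetric rendering. The sketch below does not implement the paper's variable-coefficient PDE estimator; it only illustrates the underlying delta-tracking idea on the simpler task of estimating transmittance through a heterogeneous medium, with an illustrative coefficient `sigma_t` and majorant `sigma_bar` chosen purely for the example:

```python
import math
import random

def transmittance_delta_tracking(t_max, sigma_t, sigma_bar, rng=random):
    """Unbiased 0/1 estimate of the transmittance exp(-integral of sigma_t) along a
    ray segment of length t_max, assuming sigma_t(t) <= sigma_bar everywhere.
    Null scattering pads the heterogeneous medium up to the constant majorant
    sigma_bar so free-flight distances can be sampled analytically; tentative
    collisions are then accepted as real only with probability sigma_t / sigma_bar."""
    t = 0.0
    while True:
        # Exponential free-flight distance w.r.t. the homogenized (majorant) medium.
        t -= math.log(1.0 - rng.random()) / sigma_bar
        if t >= t_max:
            return 1.0   # no real collision before the end of the segment
        if rng.random() < sigma_t(t) / sigma_bar:
            return 0.0   # real collision
        # otherwise: null collision, keep marching

# Illustrative check: sigma_t(t) = 1 + sin(6t)^2 on [0, 1], with majorant 2.
sigma_t = lambda t: 1.0 + math.sin(6.0 * t) ** 2
n = 200_000
est = sum(transmittance_delta_tracking(1.0, sigma_t, 2.0) for _ in range(n)) / n
print(est)   # should approach exp(-integral), roughly 0.22 for this coefficient
```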
State of the Art in Grid-Free Monte Carlo Methods for Partial Differential Equations
This 3-hour course provides a detailed overview of grid-free Monte Carlo methods for solving partial differential equations (PDEs) based on the walk on spheres (WoS) algorithm, with a special emphasis on problems with high geometric complexity. PDEs are a basic building block of models and algorithms used throughout science, engineering, and visual computing. Yet despite decades of research, conventional PDE solvers struggle to capture the immense geometric complexity of the natural world. A perennial challenge is spatial discretization: traditionally, one must partition the domain into a high-quality volumetric mesh, a process that can be brittle, memory-intensive, and difficult to parallelize. WoS makes a radical departure from this approach by reformulating the problem in terms of recursive integral equations that can be estimated using the Monte Carlo method, eliminating the need for spatial discretization. Since these equations strongly resemble those found in light transport theory, one can leverage deep knowledge from Monte Carlo rendering to develop new PDE solvers that share many of its advantages: no meshing, trivial parallelism, and the ability to evaluate the solution at any point without solving a global system of equations.

The course is divided into two parts. Part I will cover the basics of using WoS to solve fundamental PDEs like the Poisson equation. Topics include formulating the solution as an integral equation, generating samples via recursive random walks, and employing accelerated distance and ray intersection queries to efficiently handle complex geometries. Participants will also gain experience setting up demo applications involving data interpolation, heat transfer, and geometric optimization using the open-source “Zombie” library, which implements various grid-free Monte Carlo PDE solvers. Part II will feature a mini-panel of academic and industry contributors covering advanced topics including variance reduction, differentiable and multi-physics simulation, and applications in industrial design and robust geometry processing.
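To make the recursive estimator covered in Part I concrete, here is a minimal, self-contained Python sketch of walk on spheres for the Laplace equation with Dirichlet boundary conditions. It does not use the Zombie library mentioned above; the unit-disk domain, stopping tolerance `eps`, and function names are purely illustrative:

```python
import math
import random

def wos_laplace(x, y, dist_to_boundary, boundary_value, eps=1e-3, rng=random):
    """One-sample walk on spheres estimate of a harmonic function u (Laplace equation
    with Dirichlet data) at the point (x, y): repeatedly jump to a uniformly random
    point on the largest circle centered at the current point that stays inside the
    domain, and stop once within eps of the boundary, evaluating the boundary data there."""
    while True:
        r = dist_to_boundary(x, y)        # radius of the largest empty ball
        if r < eps:
            return boundary_value(x, y)   # close enough to the boundary: terminate
        theta = 2.0 * math.pi * rng.random()
        x += r * math.cos(theta)
        y += r * math.sin(theta)

# Illustrative domain: the unit disk, with Dirichlet data g(x, y) = x,
# whose harmonic extension is simply u(x, y) = x.
dist_to_boundary = lambda x, y: 1.0 - math.hypot(x, y)
boundary_value = lambda x, y: x

n = 20_000
u = sum(wos_laplace(0.3, 0.2, dist_to_boundary, boundary_value) for _ in range(n)) / n
print(u)   # should be close to 0.3
```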
- PAR ID: 10643523
- Publisher / Repository: ACM
- Date Published:
- Page Range / eLocation ID: 1 to 8
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This paper presents a method that uses neural networks as a caching mechanism to reduce the variance of Monte Carlo partial differential equation solvers, such as the Walk-on-Spheres algorithm [Sawhney and Crane 2020]. While these Monte Carlo PDE solvers have the merits of being unbiased and discretization-free, their high variance often hinders real-time applications. On the other hand, neural networks can approximate the PDE solution, and evaluating these networks at inference time can be very fast. However, neural-network-based solutions may suffer from convergence difficulties and high bias. Our hybrid system aims to combine these two potentially complementary solutions by training a neural field to approximate the PDE solution using supervision from a WoS solver. This neural field is then used as a cache in the WoS solver to reduce variance during inference. We demonstrate that our neural field training procedure is better than the commonly used self-supervised objectives in the literature. We also show that our hybrid solver exhibits lower variance than WoS with the same computational budget: it is significantly better for small compute budgets and provides smaller improvements for larger budgets, reaching the same performance as WoS in the limit. (A minimal sketch of this cache-termination idea appears after this list.)
- Grid-free Monte Carlo methods based on the walk on spheres (WoS) algorithm solve fundamental partial differential equations (PDEs) like the Poisson equation without discretizing the problem domain or approximating functions in a finite basis. Such methods hence avoid aliasing in the solution, and evade the many challenges of mesh generation. Yet for problems with complex geometry, practical grid-free methods have been largely limited to basic Dirichlet boundary conditions. We introduce the walk on stars (WoSt) algorithm, which solves linear elliptic PDEs with arbitrary mixed Neumann and Dirichlet boundary conditions. The key insight is that one can efficiently simulate reflecting Brownian motion (which models Neumann conditions) by replacing the balls used by WoS with star-shaped domains. We identify such domains via the closest point on the visibility silhouette, by simply augmenting a standard bounding volume hierarchy with normal information. Overall, WoSt is an easy modification of WoS, and retains the many attractive features of grid-free Monte Carlo methods such as progressive and view-dependent evaluation, trivial parallelization, and sublinear scaling to increasing geometric detail.
- We introduce a Monte Carlo method for computing derivatives of the solution to a partial differential equation (PDE) with respect to problem parameters (such as domain geometry or boundary conditions). Derivatives can be evaluated at arbitrary points, without performing a global solve or constructing a volumetric grid or mesh. The method is hence well suited to inverse problems with complex geometry, such as PDE-constrained shape optimization. Like other walk on spheres (WoS) algorithms, our method is trivial to parallelize, and is agnostic to boundary representation (meshes, splines, implicit surfaces, etc.), supporting large topological changes. We focus in particular on screened Poisson equations, which model diverse problems from scientific and geometric computing. As in differentiable rendering, we jointly estimate derivatives with respect to all parameters; hence, cost does not grow significantly with parameter count. In practice, even noisy derivative estimates exhibit fast, stable convergence for stochastic gradient-based optimization, as we show through examples from thermal design, shape from diffusion, and computer graphics.
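Returning to the first related record above (neural caching): one simple way an approximate solution cache can be combined with WoS is to take a single sphere jump and then read the cache rather than continuing the walk, which by the mean value property is exact whenever the cache is exact. The sketch below illustrates only that early-termination idea; the `cache` callable is a stand-in for a trained neural field, and nothing here is claimed to reproduce the cited paper's training or inference procedure:

```python
import math
import random

def wos_with_cache(x, y, dist_to_boundary, boundary_value, cache, eps=1e-3, rng=random):
    """Cache-terminated walk on spheres step: if (x, y) is within eps of the boundary,
    return the Dirichlet data; otherwise jump to a uniform point on the largest empty
    circle and return the cached approximate solution there instead of walking on.
    By the mean value property, the estimate is exact whenever the cache is exact."""
    r = dist_to_boundary(x, y)
    if r < eps:
        return boundary_value(x, y)
    theta = 2.0 * math.pi * rng.random()
    return cache(x + r * math.cos(theta), y + r * math.sin(theta))

# Illustrative setup on the unit disk with g(x, y) = x, so the true solution is u = x;
# the "cache" below is a slightly perturbed analytic stand-in for a trained neural field.
dist_to_boundary = lambda x, y: 1.0 - math.hypot(x, y)
boundary_value = lambda x, y: x
cache = lambda x, y: x + 0.01 * math.sin(5.0 * x)

n = 20_000
u = sum(wos_with_cache(0.3, 0.2, dist_to_boundary, boundary_value, cache) for _ in range(n)) / n
print(u)   # close to 0.3, up to the cache's approximation error
```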