In this paper, we present OpenWaters, a real-time, open-source underwater simulation kit for generating photorealistic underwater scenes. OpenWaters supports the creation of massive amounts of underwater images by emulating diverse real-world conditions, and it allows fine control over every variable in a simulation instance, including geometry, rendering parameters such as ray-traced water caustics and scattering, and ground-truth labels. Using underwater depth (distance between camera and object) estimation as the use case, we showcase and validate the capabilities of OpenWaters to model underwater scenes that are used to train a deep neural network for depth estimation. Our experimental evaluation demonstrates highly accurate depth estimation using synthetic underwater images, and the feasibility of transferring learned features from synthetic to real-world images.
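To make the synthetic-to-real workflow above concrete, here is a minimal PyTorch-style sketch of pre-training a depth network on simulator output and then fine-tuning on real images. It is illustrative only: the toy model and the `SyntheticUnderwaterDepth` / `RealUnderwaterDepth` dataset classes are hypothetical placeholders, not part of OpenWaters.

```python
# Illustrative sketch only: the dataset classes below are hypothetical
# placeholders (not part of OpenWaters); any encoder-decoder depth
# network could stand in for the toy model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # a common loss for dense depth regression
    for _ in range(epochs):
        for rgb, depth in loader:          # (B,3,H,W), (B,1,H,W)
            opt.zero_grad()
            loss_fn(model(rgb), depth).backward()
            opt.step()

# Toy stand-in for a real depth-estimation architecture.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

# 1) Pre-train on a large corpus rendered by the simulator.
train(model, DataLoader(SyntheticUnderwaterDepth(), batch_size=8), 1e-4, 20)
# 2) Transfer: fine-tune on a small set of real underwater images.
train(model, DataLoader(RealUnderwaterDepth(), batch_size=8), 1e-5, 5)
```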
Deep Kernel Density Estimation for Photon Mapping
Abstract: Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, focusing on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function that aggregates photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons than previous photon mapping methods. Our approach greatly reduces the required number of photons, significantly advancing the computational efficiency of photon mapping.
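Read as an architecture, the pipeline in this abstract (per-photon encoding, neighborhood aggregation into a context vector, then kernel inference) can be sketched roughly as below. This is our minimal PyTorch reading with all feature dimensions and the aggregation operator assumed; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DeepKernelEstimator(nn.Module):
    """Sketch of kernel prediction over a photon neighborhood.

    Each photon is described by a feature vector (e.g. offset to the
    shading point, direction, flux); all dimensions here are assumed.
    """
    def __init__(self, photon_dim=9, feat_dim=32):
        super().__init__()
        self.encode = nn.Sequential(            # per-photon features
            nn.Linear(photon_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))
        self.kernel = nn.Sequential(            # per-photon kernel value
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1))

    def forward(self, photons, flux):
        # photons: (N, photon_dim) neighbors of one shading point
        # flux:    (N, 3) photon power entering the density estimate
        f = self.encode(photons)                      # (N, F)
        context = f.mean(dim=0, keepdim=True)         # local context vector
        context = context.expand_as(f)                # broadcast to photons
        w = self.kernel(torch.cat([f, context], -1))  # (N, 1) kernel weights
        return (w * flux).sum(dim=0)                  # aggregated radiance
```

Swapping this module in for a fixed-radius density kernel is the "simply swapping the kernel density estimator" step the abstract describes.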
- PAR ID: 10173614
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Computer Graphics Forum
- Volume: 39
- Issue: 4
- ISSN: 0167-7055
- Page Range / eLocation ID: p. 35-45
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Transparent objects pose a challenging problem in computer vision. They are hard to segment or classify because they lack precise boundaries, and limited data is available for training deep neural networks. As a result, current solutions employ rigid synthetic datasets, which lack flexibility and suffer severe performance degradation when deployed in real-world scenarios. In particular, these synthetic datasets omit features such as refraction, dispersion, and caustics due to limitations in the rendering pipeline. To address this issue, we present SuperCaustics, a real-time, open-source simulation of transparent objects designed for deep learning applications. SuperCaustics features extensive modules for stochastic environment creation; uses hardware ray tracing to support caustics, dispersion, and refraction; and can generate massive datasets with multi-modal, pixel-perfect ground-truth annotations. To validate the proposed system, we trained a deep neural network from scratch to segment transparent objects in difficult lighting scenarios. Our network achieved performance comparable to the state of the art on a real-world dataset using only 10% of the training data and a fraction of the training time. Further experiments show that a model trained with SuperCaustics can segment different types of caustics, even in images with multiple overlapping transparent objects. To the best of our knowledge, this is the first such result for a model trained on synthetic data. Both our open-source code and experimental data are freely available online.
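As a small illustration of the low-data protocol mentioned above (training on only 10% of the data), a generic subsampling helper might look like the following. This is a hedged sketch, not the authors' code; the `dataset` argument stands for any labeled segmentation dataset.

```python
import random
from torch.utils.data import DataLoader, Subset

def subsample_loader(dataset, fraction=0.10, batch_size=8, seed=0):
    """Return a loader over a random `fraction` of `dataset`,
    mimicking the 10%-of-training-data protocol described above."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(dataset)), int(fraction * len(dataset)))
    return DataLoader(Subset(dataset, idx), batch_size=batch_size,
                      shuffle=True)

# e.g. loader = subsample_loader(full_synthetic_dataset)  # 10% split
```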
Abstract: Typically discussed in the context of optics, caustics are envelopes of classical trajectories (rays) where the density of states diverges, resulting in pronounced observable features such as bright points, curves, and extended networks of patterns. Here, we generate caustics in the matter waves of an atom laser, providing a striking experimental example of catastrophe theory applied to atom optics in an accelerated (gravitational) reference frame. We showcase caustics formed by individual attractive and repulsive potentials, and present an example of a network generated by multiple potentials. Exploiting internal atomic states, we demonstrate fluid-flow tracing as another tool of this flexible experimental platform. The effective gravity experienced by the atoms can be tuned with magnetic gradients, forming caustics analogous to those produced by gravitational lensing. From a more applied point of view, atom optics affords perspectives for metrology, atom interferometry, and nanofabrication. Caustics in this context may lead to quantum innovations, as they are an inherently robust way of manipulating matter waves.
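For orientation (a textbook result of catastrophe theory, not a formula from this paper): near the simplest catastrophe, the fold, the diverging ray density is regularized by diffraction into an Airy-function profile,

```latex
% Fold caustic: the stationary-phase (ray) limit diverges, while the
% full diffraction integral stays finite as an Airy function.
\psi(x)\ \propto\ \int_{-\infty}^{\infty} e^{\,i\,(t^{3}/3 + x t)}\,dt
        \;=\; 2\pi\,\operatorname{Ai}(x),
\qquad
I_{\mathrm{ray}}(x)\ \sim\ |x|^{-1/2}\quad (x \to 0^{-}),
```

so the bright features at a caustic are finite Airy oscillations rather than a true divergence, in matter waves just as in light.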
Abstract: Deep learning has become a widespread tool in both science and industry. However, continued progress is hampered by the rapid growth in energy costs of ever-larger deep neural networks. Optical neural networks provide a potential means to solve the energy-cost problem faced by deep learning. Here, we experimentally demonstrate an optical neural network based on optical dot products that achieves 99% accuracy on handwritten-digit classification using ~3.1 detected photons per weight multiplication and ~90% accuracy using ~0.66 photons (~2.5 × 10⁻¹⁹ J of optical energy) per weight multiplication. The fundamental principle enabling our sub-photon-per-multiplication demonstration, noise reduction from the accumulation of scalar multiplications in dot-product sums, is applicable to many different optical-neural-network architectures. Our work shows that optical neural networks can achieve accurate results using extremely low optical energies.
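The stated principle, that shot noise from roughly one photon per multiplication averages down across the many terms of a dot product, can be checked with a few lines of simulation. This is our own illustrative sketch, not the authors' analysis.

```python
# Toy shot-noise model: each elementwise product x_i * w_i is read out
# as a Poisson photon count with mean proportional to the product, and
# the dot product is the accumulated count. The relative error shrinks
# as vector length N grows, even at ~1 photon per multiplication.
import numpy as np

rng = np.random.default_rng(0)
photons_per_mult = 1.0
for n in [10, 100, 1000]:
    x, w = rng.random(n), rng.random(n)
    ideal = np.dot(x, w)
    # Scale so the mean detected count is ~photons_per_mult per term.
    scale = photons_per_mult / np.mean(x * w)
    counts = rng.poisson(scale * x * w, size=(10_000, n))
    estimates = counts.sum(axis=1) / scale
    rel_err = np.std(estimates) / ideal
    print(f"N={n:5d}  relative error = {rel_err:.3f}")
```

The printed relative error falls roughly as 1/sqrt(N), which is the accumulation effect the abstract invokes.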
Hemmer, Philip R.; Migdall, Alan L. (Eds.)
Recent proposals suggest that distributed single photons serving as a 'non-local oscillator' can outperform coherent states as a phase reference for long-baseline interferometric imaging of weak sources [1,2]. Such nonlocal quantum states distributed between telescopes can, in principle, surpass the limitations of conventional interferometric astronomical imaging at very long baselines: signal-to-noise ratio, shot noise, signal loss, and the faintness of the imaged objects. Here we demonstrate, in a table-top experiment, interference between a nonlocal oscillator, generated by equal-path splitting of an idler photon from a pulsed, separable, parametric down-conversion process, and a spectrally single-mode, quasi-thermal source. We compare the single-photon nonlocal oscillator to a more conventional local oscillator with uncertain photon number. Both methods enabled reconstruction of the source's Gaussian spatial distribution by measuring the interference visibility as a function of baseline separation and then applying the van Cittert-Zernike theorem [3,4]. In both cases, good qualitative agreement was found between the reconstructed source width and the known source width measured with a camera. We also report an increase in signal-to-noise per 'faux' stellar photon detected when heralding the idler photon: 1593 heralded (non-local oscillator) detection events yielded a maximum visibility of ~17%, compared to 10412 unheralded (classical local oscillator) detection events, which yielded a maximum visibility of ~10%, the first instance of quantum-enhanced sensing in this context.
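The width-recovery step rests on the van Cittert-Zernike theorem: fringe visibility as a function of baseline is the normalized Fourier transform of the source's angular intensity profile. For a Gaussian source this gives (standard relation in our own notation; σθ is the angular width, B the baseline, λ the wavelength, none taken from the paper):

```latex
% Van Cittert-Zernike theorem for a Gaussian source
% I(\theta) \propto \exp(-\theta^{2}/2\sigma_\theta^{2}):
V(B) \;=\; \frac{\left|\int I(\theta)\, e^{-i 2\pi B\theta/\lambda}\, d\theta\right|}
                {\int I(\theta)\, d\theta}
     \;=\; \exp\!\left(-\frac{2\pi^{2}\,\sigma_\theta^{2}\,B^{2}}{\lambda^{2}}\right),
```

so fitting the measured visibility-versus-baseline curve directly yields the angular source width σθ.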