We present a method for reconstructing the 3D shape of arbitrary Lambertian objects from measurements by miniature, energy-efficient, low-cost single-photon cameras. These cameras, operating as time-resolved image sensors, illuminate the scene with a very fast pulse of diffuse light and record the shape of that pulse as it returns from the scene at high temporal resolution. We propose to model this image formation process, account for its non-idealities, and adapt neural rendering to reconstruct 3D geometry from a set of spatially distributed sensors with known poses. We show that our approach can successfully recover complex 3D shapes from simulated data. We further demonstrate 3D object reconstruction from real-world captures, using measurements from a commodity proximity sensor. Our work draws a connection between image-based modeling and active range scanning and is a step toward 3D vision with single-photon cameras.
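The image formation process described above can be illustrated with a toy simulation. The sketch below (with hypothetical function and parameter names such as `spad_histogram` and `ambient_rate`) bins each scene point's return at its round-trip delay 2d/c, scales it by albedo and inverse-square falloff, convolves with the laser pulse shape, and optionally applies Poisson photon-counting noise; it omits sensor non-idealities such as pile-up and dead time that the paper's full model would need to account for.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def spad_histogram(depths, albedos, pulse, bin_width=1e-10, n_bins=200,
                   ambient_rate=0.01, rng=None):
    """Toy transient histogram for a single SPAD pixel.

    Each scene point at distance d contributes at round-trip delay 2d/c,
    attenuated by albedo / d^2 (inverse-square falloff). The impulse
    response is convolved with the finite-width laser pulse, and a
    constant ambient level is added.
    """
    ideal = np.zeros(n_bins)
    for d, a in zip(depths, albedos):
        delay_bin = int(round(2 * d / C / bin_width))
        if delay_bin < n_bins:
            ideal[delay_bin] += a / d**2
    transient = np.convolve(ideal, pulse)[:n_bins] + ambient_rate
    if rng is not None:
        # Photon counting is Poisson distributed (scale is arbitrary here).
        transient = rng.poisson(transient * 1e4) / 1e4
    return transient
```

For a single point at 1.5 m, the histogram peaks at the bin corresponding to a 10 ns round trip, broadened by the pulse shape.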
3D Scene Inference from Transient Histograms
Time-resolved image sensors that capture light at pico- to nanosecond timescales were once limited to niche applications but are now rapidly becoming mainstream in consumer devices. We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors with as few as one pixel. The key idea is to flood-illuminate large scene patches (or the entire scene) with a pulsed light source and measure the time-resolved reflected light by integrating over the entire illuminated area. The one-dimensional measured temporal waveform, called a transient, encodes both distances and albedos at all visible scene points and as such is an aggregate proxy for the scene's 3D geometry. We explore the viability and limitations of transient waveforms for recovering scene information, both on their own and when combined with traditional RGB cameras. We show that plane estimation can be performed from a single transient and that, using only a few more, it is possible to recover a depth map of the whole scene. We also show two proof-of-concept hardware prototypes that demonstrate the feasibility of our approach for compact, mobile, and budget-limited applications.
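To make the idea concrete, here is a minimal, hypothetical simulation of the transient of a flood-illuminated tilted plane: rays across the field of view are intersected with the plane and their round-trip times are histogrammed with inverse-square weighting. A tilted plane spreads its returns over more time bins than a fronto-parallel one, which is the kind of cue that makes plane estimation from a single transient plausible. All names and parameters here are illustrative, not the paper's actual model.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def plane_transient(z0, tilt_deg, half_fov_deg=20.0, n=200,
                    bin_width=2e-11, n_bins=600):
    """Toy transient of a tilted plane under flood illumination.

    Sample ray directions across the field of view (in the x-z plane),
    intersect each with the plane, and histogram round-trip times
    weighted by 1/r^2 falloff.
    """
    tilt = np.radians(tilt_deg)
    normal = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
    angles = np.radians(np.linspace(-half_fov_deg, half_fov_deg, n))
    dirs = np.stack([np.sin(angles), np.zeros(n), np.cos(angles)], axis=1)
    # Ray-plane intersection: r * (dir . normal) = z0 * normal_z
    r = z0 * normal[2] / (dirs @ normal)
    hist = np.zeros(n_bins)
    bins = np.round(2 * r / C / bin_width).astype(int)
    valid = (bins >= 0) & (bins < n_bins)
    np.add.at(hist, bins[valid], 1.0 / r[valid] ** 2)
    return hist
```

Comparing the transients of a fronto-parallel and a 30-degree-tilted plane at the same distance shows the tilted plane's returns occupy a much wider span of time bins.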
- Award ID(s): 1943149
- PAR ID: 10525612
- Publisher / Repository: Springer (European Conference on Computer Vision - ECCV)
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Multidimensional photography can capture optical fields beyond the capability of conventional image sensors that measure only the two-dimensional (2D) spatial distribution of light. By mapping a high-dimensional datacube of incident light onto a 2D image sensor, multidimensional photography resolves the scene along with other information dimensions, such as wavelength and time. However, the application of current multidimensional imagers is fundamentally restricted by their static optical architectures and measurement schemes: the mapping relation between the light datacube voxels and image sensor pixels is fixed. To overcome this limitation, we propose tunable multidimensional photography through active optical mapping. A high-resolution spatial light modulator, referred to as an active optical mapper, permutes and maps the light datacube voxels onto sensor pixels in an arbitrary and programmed manner. The resultant system can readily adapt the acquisition scheme to the scene, thereby maximising the measurement flexibility. Through active optical mapping, we demonstrate our approach in two niche implementations: hyperspectral imaging and ultrafast imaging.
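The core idea, a programmable voxel-to-pixel reassignment, can be mimicked in a few lines. The sketch below is illustrative only: `apply_mapper` and `invert_mapper` are hypothetical names, and a real spatial light modulator operates on optical fields, not arrays. It treats the active optical mapper as an arbitrary permutation of datacube voxels onto sensor pixels, which is exactly invertible when the programmed mapping is known.

```python
import numpy as np

def apply_mapper(datacube, perm):
    """Map datacube voxels onto a flat sensor via a programmed permutation
    (an idealized stand-in for the SLM-based active optical mapper)."""
    return datacube.ravel()[perm]

def invert_mapper(sensor, perm, shape):
    """Recover the datacube from sensor readings using the known mapping."""
    flat = np.empty_like(sensor)
    flat[perm] = sensor
    return flat.reshape(shape)
```

Because the mapper is a bijection, a round trip through `apply_mapper` and `invert_mapper` reproduces the original (x, y, wavelength) datacube exactly.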
-
Digital camera pixels measure image intensities by converting incident light energy into an analog electrical current, and then digitizing it into a fixed-width binary representation. This direct measurement method, while conceptually simple, suffers from limited dynamic range and poor performance under extreme illumination: electronic noise dominates under low illumination, and pixel full-well capacity results in saturation under bright illumination. We propose a novel intensity cue based on measuring inter-photon timing, defined as the time delay between detection of successive photons. Based on the statistics of inter-photon times measured by a time-resolved single-photon sensor, we develop theory and algorithms for a scene brightness estimator which works over extreme dynamic range; we experimentally demonstrate imaging scenes with a dynamic range of over ten million to one. The proposed techniques, aided by the emergence of single-photon sensors such as single-photon avalanche diodes (SPADs) with picosecond timing resolution, will have implications for a wide range of imaging applications: robotics, consumer photography, astronomy, microscopy and biomedical imaging.
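The statistical core of this idea is simple to sketch: for an ideal single-photon detector under constant illumination, inter-photon delays are exponentially distributed with mean 1/flux, so the maximum-likelihood flux estimate is the photon count divided by the total observed delay. The toy estimator below uses hypothetical names, and a real SPAD analysis must handle effects such as pile-up and quantum efficiency, not just the fixed dead-time shift modeled here.

```python
import numpy as np

def estimate_flux(inter_photon_times, dead_time=0.0):
    """Estimate photon flux (photons/s) from inter-photon delays.

    For an ideal Poisson arrival process the delays are exponential with
    mean 1/flux, so the maximum-likelihood flux is n / sum(delays). A
    fixed sensor dead time adds a constant offset to every delay, which
    is subtracted before estimation.
    """
    dt = np.asarray(inter_photon_times) - dead_time
    return len(dt) / dt.sum()
```

On simulated exponential delays the estimator recovers the true flux to within sampling error, with or without a dead-time offset.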
-
Steady-state fluorescence spectroscopy has a central role not only for sensing applications, but also in biophysics and imaging. Light switching probes, such as ruthenium dipyridophenazine complexes, have been used to study complex systems such as DNA, RNA, and amyloid fibrils. Nonetheless, steady-state spectroscopy is limited in the kind of information it can provide. In this paper, we use time-resolved spectroscopy for studying binding interactions between amyloid-β fibrillar structures and photoluminescent ligands. Using time-resolved spectroscopy, we demonstrate that ruthenium complexes with a pyrazino phenanthroline derivative can bind to two distinct binding sites on the surface of fibrillar amyloid-β, in contrast with previous studies using steady-state photoluminescence spectroscopy, which only identified one binding site for similar compounds. The second elusive binding site is revealed when deconvoluting the signals from the time-resolved decay traces, allowing the determination of dissociation constants of 3 and 2.2 μM. Molecular dynamic simulations agree with two binding sites on the surface of amyloid-β fibrils. Time-resolved spectroscopy was also used to monitor the aggregation of amyloid-β in real-time. In addition, we show that common polypyridine complexes can bind to amyloid-β also at two different binding sites. Information on how molecules bind to amyloid proteins is important to understand their toxicity and to design potential drugs that bind and quench their deleterious effects. The additional information contained in time-resolved spectroscopy provides a powerful tool not only for studying excited state dynamics but also for sensing and revealing important information about the system including hidden binding sites.
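A common way to reveal a second emitting population in a time-resolved decay trace is to decompose it into exponential components. The sketch below is a generic linear least-squares decomposition with assumed, fixed lifetimes; it is illustrative only and not the deconvolution procedure used in the paper (names such as `component_amplitudes` are hypothetical).

```python
import numpy as np

def component_amplitudes(t, decay, lifetimes):
    """Decompose a decay trace into fixed-lifetime exponential components.

    Builds a basis of exp(-t / tau) columns and solves for the amplitudes
    by linear least squares; a significant second amplitude indicates a
    second emitting population (e.g. a second binding site).
    """
    basis = np.exp(-np.outer(t, 1.0 / np.asarray(lifetimes)))
    amps, *_ = np.linalg.lstsq(basis, decay, rcond=None)
    return amps
```

On a noiseless synthetic trace built from two known lifetimes, the decomposition recovers both amplitudes, i.e. both populations, exactly.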
-
In this work, we propose a novel method to supervise 3D Gaussian Splatting (3DGS) scenes using optical tactile sensors. Optical tactile sensors have become widespread in their use in robotics for manipulation and object representation; however, raw optical tactile sensor data is unsuitable to directly supervise a 3DGS scene. Our representation leverages a Gaussian Process Implicit Surface to implicitly represent the object, combining many touches into a unified representation with uncertainty. We merge this model with a monocular depth estimation network, which is aligned in a two-stage process, coarsely aligning with a depth camera and then finely adjusting to match our touch data. For every training image, our method produces a corresponding fused depth and uncertainty map. Utilizing this additional information, we propose a new loss function, variance-weighted depth supervised loss, for training the 3DGS scene model. We leverage the DenseTact optical tactile sensor and RealSense RGB-D camera to show that combining touch and vision in this manner leads to quantitatively and qualitatively better results than vision or touch alone in few-view scene synthesis on opaque as well as on reflective and transparent objects. Please see our project page at armlabstanford.github.io/touch-gs
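A variance-weighted depth loss of the kind described can be sketched in one function: per-pixel squared depth error scaled by the inverse of the fused uncertainty, so confident touch-derived depths dominate the supervision. This is an illustrative guess at the form of the loss; the names and the exact weighting are assumptions, not taken from the paper.

```python
import numpy as np

def variance_weighted_depth_loss(pred_depth, fused_depth, variance, eps=1e-6):
    """Per-pixel squared depth error, down-weighted where the fused
    touch/vision depth map is uncertain (high variance)."""
    weights = 1.0 / (variance + eps)
    return np.mean(weights * (pred_depth - fused_depth) ** 2)
```

The same depth error incurs a much larger penalty where the fused map is confident (low variance) than where it is uncertain.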