Optical diffraction tomography (ODT) is an indispensable tool for studying objects in three dimensions. Until now, ODT has been limited to coherent light because spatial phase information is required to solve the inverse scattering problem. We introduce a method that enables ODT to be applied to imaging incoherent contrast mechanisms such as fluorescent emission. Our strategy mimics the coherent scattering process with two spatially coherent illumination beams. The interferometric illumination pattern encodes spatial phase in temporal variations of the fluorescent emission, thereby allowing incoherent fluorescent emission to mimic the behavior of coherent illumination. The temporal variations permit recovery of the spatial distribution of fluorescent emission with an inverse scattering model. Simulations and experiments demonstrate isotropic resolution in the 3D reconstruction of a fluorescent object.
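The phase-encoding idea in the abstract above can be sketched numerically. The following is a hypothetical 1D illustration (not the authors' implementation): a fringe pattern from two interfering beams modulates the incoherent fluorescence, and phase-stepping the fringes in time encodes the object's spatial Fourier coefficient at the fringe frequency into the temporal signal, where it can be demodulated.

```python
import numpy as np

# Hypothetical 1D sketch: two-beam interference produces a sinusoidal
# illumination fringe; stepping its phase over time lets a bucket detector
# recover the object's spatial Fourier coefficient at the fringe frequency.

x = np.linspace(0.0, 1.0, 512, endpoint=False)        # sample coordinate (a.u.)
obj = np.exp(-((x - 0.3) / 0.05) ** 2) \
    + 0.5 * np.exp(-((x - 0.7) / 0.08) ** 2)          # fluorophore density

q = 2 * np.pi * 10.0                                   # fringe spatial frequency
phases = 2 * np.pi * np.arange(8) / 8                  # temporal phase steps

# Total fluorescence detected at each phase step (spatially integrated):
signal = np.array([np.sum(obj * (1 + np.cos(q * x + p))) for p in phases])

# Demodulate the first temporal harmonic; for N evenly spaced phase steps
# this isolates (1/2) * sum(obj * exp(i*q*x)), i.e. the object's Fourier
# coefficient at spatial frequency q, up to the factor of 2 restored below.
coeff = np.sum(signal * np.exp(-1j * phases)) / len(phases)
true_coeff = np.sum(obj * np.exp(1j * q * x))

print(np.abs(2 * coeff - true_coeff))                  # demodulation residual
```

Repeating this for many fringe frequencies and orientations fills in the object's spatial-frequency spectrum, which is what allows an inverse-scattering-style reconstruction from incoherent emission.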
VarIS: Variable Illumination Sphere for Facial Capture, Model Scanning, and Spatially Varying Appearance Acquisition
We introduce VarIS, our Variable Illumination Sphere – a multi-purpose system for acquiring and processing real-world geometric and appearance data for computer-graphics research and production. Its three key applications are (1) human-face capture, (2) model scanning, and (3) spatially varying material acquisition. Facial capture requires high-resolution cameras at multiple viewpoints, photometric capabilities, and a swift process due to human movement. Acquiring a digital version of a physical model is similar but with different constraints on image processing and a more generous time budget. Each requires detailed estimation of geometry and physically based shading properties. Measuring spatially varying light-scattering properties requires spanning four dimensions of illumination and viewpoint with angular, spatial, and spectral accuracy; this process can be assisted by multiple simultaneous viewpoints or rapid switching of lights, with no movement necessary. VarIS is a system of hardware and software for spherical illumination and imaging that has been custom designed and developed by our team. It was inspired by Light Stages and goniophotometers, but costs less through the use of primarily off-the-shelf components and extends capabilities beyond those devices. In this paper we describe the unique system and our contributions, including practical details that could assist other researchers and practitioners.
- Award ID(s):
- 2007974
- PAR ID:
- 10503415
- Editor(s):
- Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
- Publisher / Repository:
- The Eurographics Association
- Date Published:
- Journal Name:
- Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference
- ISSN:
- 2617-4855
- ISBN:
- 978-3-03868-235-6
- Subject(s) / Keyword(s):
- CCS Concepts: Computing methodologies → Reflectance modeling; 3D imaging; Hardware → Emerging optical and photonic technologies; Applied computing → Media arts. Keywords: Facial Capture, Photogrammetry, Material Acquisition, Appearance Capture
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Vedaldi, Andrea; Bischof, Horst; Brox, Thomas; Frahm, Jan-Michael (Ed.) Novel view video synthesis aims to synthesize novel-viewpoint videos given input captures of a human performance taken from multiple reference viewpoints over consecutive time steps. Despite great advances in model-free novel view synthesis, existing methods present three limitations when applied to complex and time-varying human performance. First, these methods (and related datasets) mainly consider simple and symmetric objects. Second, they do not enforce explicit consistency across generated views. Third, they focus on static, non-moving objects. The fine-grained details of a human subject can therefore suffer from inconsistencies when synthesized across different viewpoints or time steps. To tackle these challenges, we introduce a human-specific framework that employs a learned 3D-aware representation. Specifically, we first introduce a novel siamese network that employs a gating layer for better reconstruction of the latent volumetric representation and, consequently, the final visual results. Moreover, features from consecutive time steps are shared inside the network to improve temporal consistency. Second, we introduce a novel loss to explicitly enforce consistency across generated views both in space and in time. Third, we present the Multi-View Human Action (MVHA) dataset, consisting of nearly 1,200 synthetic human performances captured from 54 viewpoints. Experiments on the MVHA, Pose-Varying Human Model, and ShapeNet datasets show that our method outperforms state-of-the-art baselines in both view-generation quality and spatio-temporal consistency.
-
Traffic congestion hits most big cities in the world, threatening long delays and serious reductions in air quality. City and local government officials continue to face challenges in optimizing crowd flow, synchronizing traffic, and mitigating threats or dangerous situations. One of the major challenges faced by city planners and traffic engineers is developing a robust traffic controller that eliminates congestion and imbalanced traffic flow at intersections. Ensuring that traffic moves smoothly and minimizing waiting time at intersections requires automated vehicle detection techniques for controlling traffic lights, which remain challenging problems. In this paper, we propose an intelligent traffic pattern collection and analysis model, named TPCAM, based on traffic cameras to help smooth vehicular movement at junctions and reduce traffic congestion. Our traffic detection and pattern analysis model aims at detecting and calculating the traffic flux of vehicles and pedestrians at intersections in real time. Our system can utilize one camera to capture all the traffic flows at an intersection instead of multiple cameras, which reduces the infrastructure requirements and eases deployment. We propose a new deep learning model based on YOLOv2 and adapt it to traffic detection scenarios. To reduce network burdens and eliminate deployment of a network backbone at the intersections, we propose to process the traffic video data at the network edge without transmitting the big data back to the cloud. To improve the processing frame rate at the edge, we further propose a deep object tracking algorithm leveraging adaptive multi-modal models that is robust to object occlusions and varying lighting conditions. Based on the deep-learning-based detection and tracking, we can achieve pseudo-30FPS via adaptive key frame selection.
-
Many animals exhibit structural colors, which are often iridescent, meaning that the perceived colors change with illumination conditions and viewing perspectives. Biological iridescence is usually caused by multilayers or other periodic structures in animal tissues, which selectively reflect light of certain wavelengths and often result in a shiny appearance that almost always comes with spatially varying highlights, thanks to randomness and irregularities in the structures. Previous models for biological iridescence tend to each target one specific structure, and most models compute only large-area averages, overlooking spatial variation in iridescent appearance. In this work, we build appearance models for biological iridescence using bird feathers as our case study, investigating different types of feathers with a variety of structural coloration mechanisms. We propose an approximate wave-simulation method that takes advantage of quasi-regular structures while efficiently modeling the effects of natural structural irregularities. We further propose a method to distill our simulation results into distributions of BRDFs, generated using noise functions, that preserve relevant statistical properties of the simulated BRDFs. This allows us to model the spatially varying, glittery appearance commonly seen on feathers. Our BRDFs are practical and efficient, and we present renderings of multiple types of iridescent feathers with comparisons to photographic images.
-
Plasmonic photocatalysis presents a promising route for light-to-matter conversion. However, most current studies have focused on understanding the relative importance of thermal and nonthermal effects, while their synergistic effects remain less studied. Here, we propose an index, termed Overall Light Effectiveness (OLE), to capture the combined impact of these light effects on reactions. By systematically varying the thickness of catalyst layers, we isolate thermal and nonthermal contributions and optimize them to achieve maximum light enhancement. We demonstrate the approach using the carbon dioxide hydrogenation reaction on titania-supported rhodium nanoparticles as a model reaction system. The approach shows generalizable potential for designing catalyst systems with optimal combinations of heating and light illumination, especially broadband illumination such as sunlight, to achieve the most economical light-to-matter conversion in plasmonic catalysis.