


Title: Efficient Path‐Space Differentiable Volume Rendering With Respect To Shapes
Abstract: Differentiable rendering of translucent objects with respect to their shapes has been a long-standing problem. State-of-the-art methods require detecting object silhouettes or specifying change rates inside translucent objects, both of which can be expensive for translucent objects with complex shapes. In this paper, we address this problem for translucent objects with no refractive or reflective boundaries. By reparameterizing interior components of differential path integrals, our new formulation does not require change rates to be specified in the interior of objects. Further, we introduce new Monte Carlo estimators based on this formulation that do not require explicit detection of object silhouettes.
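As a toy illustration of the reparameterization idea, consider a 1D integral whose domain depends on the parameter of interest. Differentiating it directly produces a boundary term, the 1D analogue of a silhouette integral; after a change of variables, the derivative becomes a purely interior integral that plain Monte Carlo sampling can estimate. The sketch below is only this 1D analogue under illustrative assumptions, not the paper's path-integral formulation.

```python
import numpy as np

# Toy 1D analogue (not the paper's formulation): differentiate
# I(theta) = integral_0^theta f(x) dx with respect to theta.
# Direct differentiation yields the boundary term f(theta), the 1D
# analogue of a silhouette integral. Reparameterizing x = theta * u
# moves all theta-dependence into the integrand, so the derivative
# becomes an ordinary interior integral.

def f(x):
    return np.exp(-x) * np.sin(3.0 * x)      # arbitrary smooth integrand

def df(x):
    return np.exp(-x) * (3.0 * np.cos(3.0 * x) - np.sin(3.0 * x))

def dI_dtheta_reparam(theta, n_samples=200_000, rng=None):
    """Estimate d/dtheta of integral_0^theta f(x) dx via x = theta * u."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(n_samples)
    # d/dtheta [ theta * f(theta * u) ] = f(theta*u) + theta * u * f'(theta*u)
    return np.mean(f(theta * u) + theta * u * df(theta * u))

theta = 1.3
print("reparameterized estimate:", dI_dtheta_reparam(theta))
print("boundary-term reference :", f(theta))   # exact dI/dtheta
```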
Award ID(s):
1900927
NSF-PAR ID:
10451964
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
42
Issue:
4
ISSN:
0167-7055
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider a category-level perception problem, where one is given 3D sensor data picturing an object of a given category (e.g., a car), and has to reconstruct the pose and shape of the object despite intra-class variability (i.e., different car models have different shapes). We consider an active shape model, where, for an object category, we are given a library of potential CAD models describing objects in that category, and we adopt a standard formulation where pose and shape estimation are formulated as a non-convex optimization. Our first contribution is to provide the first certifiably optimal solver for pose and shape estimation. In particular, we show that rotation estimation can be decoupled from the estimation of the object translation and shape, and we demonstrate that (i) the optimal object rotation can be computed via a tight (small-size) semidefinite relaxation, and (ii) the translation and shape parameters can be computed in closed form given the rotation. Our second contribution is to add an outlier rejection layer to our solver, hence making it robust to a large number of misdetections. Towards this goal, we wrap our optimal solver in a robust estimation scheme based on graduated non-convexity. To further enhance robustness to outliers, we also develop the first graph-theoretic formulation to prune outliers in category-level perception, which removes outliers via convex hull and maximum clique computations; the resulting approach is robust to 70-90% outliers. Our third contribution is an extensive experimental evaluation. Besides providing an ablation study on a simulated dataset and on the PASCAL3D+ dataset, we combine our solver with a deep-learned keypoint detector, and show that the resulting approach improves over the state of the art in vehicle pose estimation in the ApolloScape datasets.
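To make the closed-form claim concrete, the sketch below solves for the shape coefficients and translation by linear least squares once a rotation is fixed. The measurement model, variable names, and synthetic data are illustrative assumptions, and the certifiably optimal rotation step and the outlier-rejection layers are omitted.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's solver): once the rotation R
# is fixed, each 3D measurement is modeled as
#   y_i ~= R * (sum_k c_k * b_{ik}) + t,
# which is linear in the shape coefficients c and translation t, so both
# follow from a single least-squares solve.

def solve_shape_and_translation(R, Y, B):
    """
    R : (3,3) rotation, assumed known.
    Y : (N,3) measured keypoints.
    B : (K,N,3) keypoints of K basis CAD shapes.
    Returns shape coefficients c (K,) and translation t (3,).
    """
    K, N, _ = B.shape
    A = np.zeros((3 * N, K + 3))
    for i in range(N):
        A[3*i:3*i+3, :K] = R @ B[:, i, :].T   # columns multiplying c_k
        A[3*i:3*i+3, K:] = np.eye(3)          # columns multiplying t
    sol, *_ = np.linalg.lstsq(A, Y.reshape(-1), rcond=None)
    return sol[:K], sol[K:]

# Tiny synthetic check with two basis shapes and known ground truth (R = I).
rng = np.random.default_rng(1)
B = rng.normal(size=(2, 6, 3))
c_true, t_true = np.array([0.7, 0.3]), np.array([0.1, -0.2, 0.5])
R = np.eye(3)
Y = np.einsum('k,kni->ni', c_true, B) @ R.T + t_true
c_est, t_est = solve_shape_and_translation(R, Y, B)
print(c_est, t_est)   # recovers c_true and t_true
```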
  2. Introduction: With the capture of the first high-resolution, in-situ images of Near-Earth Objects (NEOs) a couple of decades ago [1–4], the ubiquity of regolith and the granular nature of small objects in the Solar System became apparent. Benefiting from increased access to high computing power, new numerical studies emerged, modeling granular structures forming and evolving as small bodies in the Solar System [5–7]. Now adding laboratory studies on granular material strength for asteroid and other small body applications [8,9], we are steadily progressing in our understanding of how regolith is shaping the interiors and surfaces of these worlds. In addition, our ever-more powerful observation capabilities are uncovering interesting dust-related phenomena in the outskirts of our Solar System, in the form of activity at large heliocentric distances and rings [10–12]. We find that our recent progress in understanding the behavior of granular material in small body environments also has applications to the more distant worlds of Centaurs and Trans-Neptunian Objects (TNOs). Internal Strength: We currently deduce internal friction of rubble piles from the observation of large numbers of small asteroids and their rotation rates, combined with the associated numerical simulations [13,14]. In the laboratory, we study internal friction of simulant materials using shear strength measurements [8]. Combining observations, modeling, and laboratory work, the picture emerges of rubble pile interiors being composed of coarse grains in the mm to cm range. The irregular shapes of the grains lead to mechanical interlocking, thus generating the internal friction required to match observations of the asteroid population [8,9]. We find that the presence of a fine fraction in the confined interior of a rubble pile actually leads to weaker internal strength [9]. Surface Strength: Surface regolith strength for NEOs is usually deduced via average slope measurements [15–17] or, most notably, by observing the outcome of an impact of known energy [18]. In the laboratory, we measure the angle of repose of simulant material via pouring tests, as well as its bulk cohesion using shear strength measurements [8]. In some cases, this allows us to infer grain size ranges for various regions of the surface and subsurface of pictured NEOs, beyond the resolution of their in-situ images. Surface Activity: The Rosetta mission revealed that a number of activity events on comet 67P/Churyumov–Gerasimenko were linked to active surface geology, most notably avalanches and cliff collapses [19]. In addition, the role of regolith strength in asteroid disruption patterns has been inferred from numerical simulations of rotating rubble piles [20]. By studying strength differences in simulant samples, it becomes apparent that a difference in cohesion between a surface and its subsurface layer can lead to activity events with surface mass shedding, without sublimating volatiles as a driver [8]. We show that such differences in surface strength can be brought about by a depletion in fine grains or a change in composition (e.g. depletion in water ice) and could account for regular activity patterns on small bodies, independently of their distance to the Sun. This is of particular interest to the study of Centaur activity and offers a potential mechanism for feeding ring systems.
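As a rough illustration of how shear-strength measurements translate into a bulk cohesion and an internal friction angle, one can fit the standard Mohr-Coulomb relation tau = c + sigma_n * tan(phi) to the measured points. The sketch below uses made-up numbers, not data from the cited work.

```python
import numpy as np

# Illustrative only: fit the Mohr-Coulomb relation tau = c + sigma_n * tan(phi)
# to hypothetical shear-strength measurements to recover cohesion c and the
# internal friction angle phi.

sigma_n = np.array([1.0, 2.0, 4.0, 8.0])   # normal stress (kPa), hypothetical
tau     = np.array([0.9, 1.6, 3.1, 5.9])   # measured shear strength (kPa), hypothetical

slope, cohesion = np.polyfit(sigma_n, tau, 1)   # linear least-squares fit
friction_angle = np.degrees(np.arctan(slope))

print(f"cohesion ~ {cohesion:.2f} kPa, internal friction angle ~ {friction_angle:.1f} deg")
```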
  3. Inverse rendering is a powerful approach to modeling objects from photographs, and we extend previous techniques to handle translucent materials that exhibit subsurface scattering. Representing translucency using a heterogeneous bidirectional scattering-surface reflectance distribution function (BSSRDF), we extend the framework of path-space differentiable rendering to accommodate both surface and subsurface reflection. This introduces new types of paths requiring new methods for sampling moving discontinuities in material space that arise from visibility and moving geometry. We use this differentiable rendering method in an end-to-end approach that jointly recovers heterogeneous translucent materials (represented by a BSSRDF) and detailed geometry of an object (represented by a mesh) from a sparse set of measured 2D images in a coarse-to-fine framework incorporating Laplacian preconditioning for the geometry. To efficiently optimize our models in the presence of the Monte Carlo noise introduced by the BSSRDF integral, we introduce a dual-buffer method for evaluating the L2 image loss. This efficiently avoids potential bias in gradient estimation due to the correlation of estimates for image pixels and their derivatives and enables correct convergence of the optimizer even when using low sample counts in the renderer. We validate our derivatives by comparing against finite differences and demonstrate the effectiveness of our technique by comparing inverse-rendering performance with previous methods. We show superior reconstruction quality on a set of synthetic and real-world translucent objects as compared to previous methods that model only surface reflection. 
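The role of the dual-buffer L2 loss can be seen in a small toy experiment: when one noisy render supplies both the residual and the derivative, their correlation biases the gradient estimate, whereas pairing two independently sampled buffers does not. The sketch below reproduces that effect for a single hypothetical pixel; it is a toy demonstration, not the renderer itself.

```python
import numpy as np

# The analytic gradient of the loss (I - I_ref)^2 is 2 (I - I_ref) dI/dtheta.
# If the same low-sample buffer supplies both factors, their correlation adds
# a covariance term and biases the estimate; independent buffers remove it.

rng = np.random.default_rng(0)
I_true, dI_true, I_ref = 0.5, 2.0, 0.3          # hypothetical pixel, derivative, target
true_grad = 2.0 * (I_true - I_ref) * dI_true    # = 0.8

def noisy_buffer(n):
    """One low-sample render: unbiased but noisy (I, dI), with correlated noise."""
    eps = rng.normal(scale=1.0, size=n)
    return I_true + eps, dI_true + eps

n, trials = 4, 100_000
single, dual = [], []
for _ in range(trials):
    I_a, dI_a = noisy_buffer(n)
    I_b, dI_b = noisy_buffer(n)
    single.append(2.0 * (I_a.mean() - I_ref) * dI_a.mean())   # same buffer: biased
    dual.append(2.0 * (I_a.mean() - I_ref) * dI_b.mean())     # independent buffers: unbiased

print("analytic gradient:", true_grad)
print("single buffer    :", np.mean(single))   # noticeably off
print("dual buffer      :", np.mean(dual))     # matches in expectation
```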
  4. Material and biological sciences frequently generate large amounts of microscope data that require 3D object-level segmentation. Often, the objects of interest have a common geometry, for example spherical, ellipsoidal, or cylindrical shapes. Neural networks have become a popular approach for object detection, but they are often limited by their training dataset and have difficulties adapting to new data. In this paper, we propose a volumetric object detection approach for microscopy volumes composed of fibrous structures by using deep centroid regression and geometric regularization. To this end, we train encoder-decoder networks for segmentation and centroid regression. We use the regression information combined with prior system knowledge to propose cylindrical objects and enforce geometric regularization in the segmentation. We train our networks on synthetic data and then test the trained networks on several experimental datasets. Our approach shows competitive results against other 3D segmentation methods when tested on the synthetic data and outperforms those other methods across different datasets.
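One way the regressed centroids can be turned into a cylindrical proposal, sketched below under illustrative assumptions rather than the paper's exact pipeline, is to let the foreground voxels of an instance vote for points on the fiber axis, fit the axis by PCA, and take the radius from the residual distances.

```python
import numpy as np

# Sketch (assumed details, not the paper's exact method): per-voxel offsets
# predicted by the regression network vote for axis points; the axis is the
# principal direction of the votes and the radius is the mean distance of the
# voxels to that axis.

def propose_cylinder(voxels, offsets):
    """
    voxels  : (N,3) coordinates of foreground voxels for one instance.
    offsets : (N,3) predicted vectors pointing from each voxel to the fiber axis.
    Returns (axis_point, axis_direction, radius).
    """
    votes = voxels + offsets                     # points expected to lie on the axis
    center = votes.mean(axis=0)
    _, _, vt = np.linalg.svd(votes - center, full_matrices=False)
    axis = vt[0]                                 # principal direction = cylinder axis
    rel = voxels - center
    perp = rel - np.outer(rel @ axis, axis)      # components perpendicular to the axis
    radius = np.linalg.norm(perp, axis=1).mean()
    return center, axis, radius

# Tiny synthetic fiber along z with radius ~2.
rng = np.random.default_rng(0)
t = rng.uniform(-5, 5, size=500)
ang = rng.uniform(0, 2 * np.pi, size=500)
vox = np.stack([2 * np.cos(ang), 2 * np.sin(ang), t], axis=1)
off = np.stack([-2 * np.cos(ang), -2 * np.sin(ang), np.zeros(500)], axis=1)
print(propose_cylinder(vox, off))
```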
  5. We present a new weakly supervised learning-based method for generating novel category-specific 3D shapes from unoccluded image collections. Our method is weakly supervised and only requires silhouette annotations from unoccluded, category-specific objects. Our method does not require access to the object's 3D shape, multiple observations per object from different views, intra-image pixel correspondences, or any view annotations. Key to our method is a novel multi-projection generative adversarial network (MP-GAN) that trains a 3D shape generator to be consistent with multiple 2D projections of the 3D shapes, without direct access to these 3D shapes. This is achieved through multiple discriminators that encode the distribution of 2D projections of the 3D shapes seen from different views. Additionally, to determine the view information for each silhouette image, we also train a view prediction network on visualizations of 3D shapes synthesized by the generator. We iteratively alternate between training the generator and training the view prediction network. We validate our multi-projection GAN on both synthetic and real image datasets. Furthermore, we also show that multi-projection GANs can aid in learning other high-dimensional distributions from lower-dimensional training datasets, such as material-class-specific spatially varying reflectance properties from images.
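The operator linking the 3D generator to the 2D silhouette discriminators is a projection of the generated shape along each view direction. The sketch below assumes a voxel occupancy representation and uses a common differentiable compositing choice as an illustration, not necessarily the exact operator in MP-GAN.

```python
import numpy as np

# One common choice (an assumption here): treat each voxel as an independent
# occupancy probability and composite along the view ray, so that
#   silhouette(x, y) = 1 - prod_z (1 - occupancy(x, y, z)).

def project_silhouette(voxels, axis=2):
    """voxels: (D,H,W) occupancy probabilities in [0,1]; returns a 2D silhouette."""
    return 1.0 - np.prod(1.0 - voxels, axis=axis)

# A solid sphere of occupancy projects to a filled disc.
grid = np.indices((32, 32, 32)).astype(float) - 15.5
sphere = (np.linalg.norm(grid, axis=0) < 10).astype(float)
sil = project_silhouette(sphere)
print(sil.shape, sil.min(), sil.max())   # (32, 32) silhouette with values in [0, 1]
```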