
Title: Automatic Differentiable Procedural Modeling
Abstract: Procedural modeling allows for the automatic generation of large numbers of similar assets, but it offers limited control over the generated output. We address this problem by introducing Automatic Differentiable Procedural Modeling (ADPM). The forward procedural model generates a final editable model. The user modifies the output interactively, and the modifications are transferred back to the procedural model as its parameters by solving an inverse procedural modeling problem. We present an auto-differentiable representation of the procedural model that significantly accelerates optimization. In ADPM the procedural model is always available, all changes are non-destructive, and the user can interactively model the 3D object while keeping the procedural representation. ADPM provides the user with precise control over the resulting model, comparable to non-procedural interactive modeling. ADPM is node-based, and it generates hierarchical 3D scene geometry that is converted to a differentiable computational graph. Our formulation focuses on the differentiability of high-level primitives and bounding volumes of components of the procedural model rather than the detailed mesh geometry. Although this high-level formulation limits the expressiveness of user edits, it allows for efficient derivative computation and enables interactivity. We designed a new optimizer to solve the inverse procedural modeling problem. It can detect that an edit is under-determined and has degrees of freedom. Leveraging cheap derivative evaluation, it can explore the region of optimality of an edit and suggest various configurations, all of which achieve the requested edit differently. We show our system's efficiency on several examples and validate it with a user study.
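To make the inverse-modeling loop concrete, here is a minimal sketch in JAX under invented assumptions: a toy forward procedural model maps two parameters to the bounding boxes of two components, and a user's edit of those boxes is transferred back to the parameters by gradient descent through the differentiable model. The model, loss, and parameter names are illustrative, not code from the paper.

```python
# Minimal sketch (not ADPM's code) of inverse procedural modeling via
# automatic differentiation: the forward model emits high-level bounding
# volumes, and a user edit is pulled back into the parameters.
import jax
import jax.numpy as jnp

def procedural_model(theta):
    """Toy forward model: two parameters -> bounding boxes of two parts."""
    height, width = theta
    body = jnp.array([width, width, height])         # body bounding-box extents
    roof = jnp.array([width, width, 0.5 * height])   # roof scales with the body
    return jnp.stack([body, roof])

def edit_loss(theta, target_boxes):
    """Squared distance between the generated volumes and the user's edit."""
    return jnp.sum((procedural_model(theta) - target_boxes) ** 2)

grad_fn = jax.jit(jax.grad(edit_loss))

theta = jnp.array([2.0, 1.0])                        # current parameters
target = procedural_model(jnp.array([3.0, 1.2]))     # stands in for a user edit
for _ in range(200):                                 # solve the inverse problem
    theta = theta - 0.05 * grad_fn(theta, target)
print(theta)                                         # ~[3.0, 1.2]
```

Because the loss is defined on bounding volumes rather than dense mesh geometry, each gradient evaluation stays cheap, which is the property the abstract leverages for interactive edits.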
Award ID(s):
1816514
PAR ID:
10367720
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
41
Issue:
2
ISSN:
0167-7055
Page Range / eLocation ID:
p. 289-307
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Procedural modeling has produced amazing results, yet fundamental issues such as controllability and limited user guidance persist. We introduce a novel procedural system called PICO (Procedural Iterative Constrained Optimizer) using PICO-Graph, a procedural model designed with optimization in mind. PICO enables the exploration of generative designs by combining user and environmental constraints into a single framework and using optimization without the need to write procedural rules. The PICO-Graph is a data-flow procedural model consisting of a set of geometry-generating operation nodes. Forward generation is initiated by sending geometric objects from initial nodes; these objects travel through the graph, triggering the generation of more objects along the way. We combine the PICO-Graph with evolutionary optimization that allows for exploration of the generated models and the generation of variants. The user defines the geometry-generating operations and the set of constraints, e.g., whether an existing object should be supported by the generated model, whether symmetries exist, etc. PICO then generates geometric models that fulfill the constraints through optimization, allowing interactive user control of the constraints. We show PICO on a variety of examples, including the generation of procedural chairs, support structures for 3D printing, and procedural terrains matching a given input.
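As a rough illustration of the constrained exploration described above, the following Python sketch scores a stand-in generator against two constraints (supporting an object at a target height, and symmetry among legs) and explores variants with a simple (mu + lambda) evolutionary loop. The generator, constraints, and parameters are invented for illustration and are not PICO's actual operation nodes.

```python
# Hypothetical sketch of PICO's idea: fold user/environmental constraints
# into one score and explore design variants evolutionarily.
import random

def generate(params):
    """Stand-in for PICO-Graph forward generation: params -> four leg heights."""
    return [params["seat_height"] + random.uniform(-params["noise"], params["noise"])
            for _ in range(4)]

def constraint_score(legs, target_height=0.45):
    """Lower is better: support an object at the target height, keep legs even."""
    mean = sum(legs) / len(legs)
    support = abs(mean - target_height)              # support constraint
    symmetry = max(legs) - min(legs)                 # symmetry constraint
    return support + symmetry

def mutate(params):
    return {"seat_height": params["seat_height"] + random.gauss(0, 0.02),
            "noise": max(0.0, params["noise"] + random.gauss(0, 0.005))}

population = [{"seat_height": 0.3, "noise": 0.05} for _ in range(8)]
for _ in range(50):                                  # evolutionary exploration
    offspring = [mutate(p) for p in population for _ in range(2)]
    population = sorted(population + offspring,
                        key=lambda p: constraint_score(generate(p)))[:8]
print(population[0])                                 # best-found variant
```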
  2. Procedural modeling is now the de facto standard of material modeling in industry. Procedural models can be edited and are easily extended, unlike pixel-based representations of captured materials. In this article, we present a semi-automatic pipeline for general material proceduralization. Given Spatially Varying Bidirectional Reflectance Distribution Functions (SVBRDFs) represented as sets of pixel maps, our pipeline decomposes them into a tree of sub-materials whose spatial distributions are encoded by their associated mask maps. This semi-automatic decomposition of material maps progresses hierarchically, driven by our new spectrum-aware material matting and instance-based decomposition methods. Each decomposed sub-material is proceduralized by a novel multi-layer noise model to capture local variations at different scales. Spatial distributions of these sub-materials are modeled either by a by-example inverse synthesis method recovering Point Process Texture Basis Functions (PPTBF) [30] or via random sampling. To reconstruct procedural material maps, we propose a differentiable rendering-based optimization that recomposes all generated procedures together to maximize the similarity between our procedural models and the input material pixel maps. We evaluate our pipeline on a variety of synthetic and real materials. We demonstrate our method's capacity to process a wide range of material types, eliminating the need for the artist-designed material graphs required in previous work [38, 53]. As fully procedural models, our results expand to arbitrary resolution and enable high-level user control of appearance.
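As a toy analogue of the differentiable recomposition step, the sketch below fits the amplitudes of a three-layer "noise" model to a target 1D map by gradient descent in JAX. Real materials are 2D pixel maps and the paper's noise model is far richer; all names here are illustrative assumptions.

```python
# Rough sketch: fit per-layer amplitudes of a multi-scale model so the
# recomposed result matches a target map, via differentiable optimization.
import jax
import jax.numpy as jnp

x = jnp.linspace(0.0, 1.0, 256)

def noise_layers(amplitudes):
    """Sum of fixed-frequency layers with learnable per-layer amplitudes."""
    freqs = jnp.array([4.0, 16.0, 64.0])             # one frequency per scale
    layers = jnp.sin(2.0 * jnp.pi * freqs[:, None] * x[None, :])
    return jnp.sum(amplitudes[:, None] * layers, axis=0)

target = noise_layers(jnp.array([1.0, 0.5, 0.25]))   # stands in for a pixel map

loss = lambda a: jnp.mean((noise_layers(a) - target) ** 2)
grad_fn = jax.grad(loss)
amps = jnp.zeros(3)
for _ in range(300):                                 # gradient-based refit
    amps = amps - 0.5 * grad_fn(amps)
print(amps)                                          # approaches [1.0, 0.5, 0.25]
```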
  3. The ability to edit 3D assets with natural language presents a compelling paradigm to aid in the democratization of 3D content creation. However, while natural language is often effective at communicating general intent, it is poorly suited for specifying exact manipulation. To address this gap, we introduce ParSEL, a system that enables controllable editing of high-quality 3D assets with natural language. Given a segmented 3D mesh and an editing request, ParSEL produces a parameterized editing program. Adjusting these parameters allows users to explore shape variations with exact control over the magnitude of the edits. To infer editing programs which align with an input edit request, we leverage the abilities of large language models (LLMs). However, we find that although LLMs excel at identifying the initial edit operations, they often fail to infer complete editing programs, resulting in outputs that violate shape semantics. To overcome this issue, we introduce Analytical Edit Propagation (AEP), an algorithm which extends a seed edit with additional operations until a complete editing program has been formed. Unlike prior methods, AEP searches for analytical editing operations compatible with a range of possible user edits through the integration of computer algebra systems for geometric analysis. Experimentally, we demonstrate ParSEL's effectiveness in enabling controllable editing of 3D objects through natural language requests over alternative system designs.
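To illustrate what a parameterized editing program might look like, here is a hypothetical Python sketch: one exposed parameter drives the seed edit (rescaling a table top), and a propagated operation keeps the shape semantically valid (legs stay under the rim). The part names and rules are invented, not ParSEL output.

```python
# Illustrative sketch of a parameterized editing program: adjusting one
# parameter explores shape variations while propagation preserves semantics.
def edit_table(parts, top_width):
    """Seed edit: rescale the table top; propagation: re-seat the legs."""
    parts = dict(parts)
    parts["top"] = {"width": top_width, "height": 0.05}
    offset = top_width / 2 - 0.05                    # keep legs under the rim
    parts["legs"] = [(+offset, +offset), (+offset, -offset),
                     (-offset, +offset), (-offset, -offset)]
    return parts

base = {"top": {"width": 1.0, "height": 0.05}, "legs": []}
for w in (0.8, 1.0, 1.4):                            # explore the edit magnitude
    print(w, edit_table(base, w)["legs"][0])
```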
  4. Umetani, N.; Wojtan, C.; Vouga, E. (Ed.)
    Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to animated 3D models because of the extensive per-frame parameter tuning needed to achieve the intended look and natural transitions. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and the corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after disentangling them from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings at run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.
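A toy Python sketch of the keyframe mechanic: embed two keyframe drawings into a latent style space and blend the codes for in-between frames. The encoder here is a random placeholder standing in for the paper's trained network.

```python
# Toy sketch of style interpolation between keyframes in a latent space.
import numpy as np

def encode(drawing_id):
    """Placeholder encoder: map a keyframe drawing to a latent style code."""
    rng = np.random.default_rng(drawing_id)
    return rng.normal(size=8)

def interpolate_styles(code_a, code_b, t):
    """Linear traversal of the latent space between two keyframe styles."""
    return (1.0 - t) * code_a + t * code_b

key0, key1 = encode(0), encode(1)                    # two user keyframes
for frame in range(5):                               # in-betweens blend styles
    style = interpolate_styles(key0, key1, frame / 4.0)
    print(frame, style[:3])                          # a decoder would draw this
```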
  5. We introduce SLANG.D, an extension to the Slang shading language that incorporates first-class automatic differentiation support. The new shading language allows us to transform a Direct3D-based path tracer to be fully differentiable with minor modifications to existing code. SLANG.D enables a shared ecosystem between machine learning frameworks and pre-existing graphics hardware API-based rendering systems, promoting the interchange of components and ideas across these two domains. Our contributions include a differentiable type system designed to ensure type safety and semantic clarity in codebases that blend differentiable and non-differentiable code, language primitives that automatically generate both forward and reverse gradient propagation methods, and a compiler architecture that generates efficient derivative propagation shader code for graphics pipelines. Our compiler supports differentiating code that involves arbitrary control-flow, dynamic dispatch, generics and higher-order differentiation, while providing developers flexible control of checkpointing and gradient aggregation strategies for best performance. Our system allows us to differentiate an existing real-time path tracer, Falcor, with minimal change to its shader code. We show that the compiler-generated derivative kernels perform as efficiently as handwritten ones. In several benchmarks, the SLANG.D code achieves significant speedup when compared to prior automatic differentiation systems. 
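SLANG.D generates derivative code inside the Slang compiler itself; as a loose Python/JAX analogy, the snippet below shows the two propagation modes the abstract names, forward (jax.jvp) and reverse (jax.grad), applied to a tiny diffuse shading function. The shader and its inputs are illustrative only.

```python
# Forward- and reverse-mode differentiation of a shader-like function,
# sketched in JAX rather than Slang.
import jax
import jax.numpy as jnp

def shade(albedo, normal, light_dir):
    """A tiny diffuse 'shader': Lambertian term times albedo."""
    ndotl = jnp.maximum(jnp.dot(normal, light_dir), 0.0)
    return albedo * ndotl

albedo = jnp.array([0.8, 0.4, 0.2])
normal = jnp.array([0.0, 1.0, 0.0])
light = jnp.array([0.0, 1.0, 0.0])

# Forward mode: sensitivity of the output color to an albedo perturbation.
_, color_tangent = jax.jvp(lambda a: shade(a, normal, light),
                           (albedo,), (jnp.ones(3),))

# Reverse mode: gradient of a scalar loss back to the albedo parameters.
albedo_grad = jax.grad(lambda a: jnp.sum(shade(a, normal, light)))(albedo)
print(color_tangent, albedo_grad)
```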