The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis. We also open-source the code at https://github.com/idealab-isu/DSA.
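As a toy illustration of the block-sparse (weak) Jacobian structure the abstract describes (a minimal sketch, not the paper's implementation), consider a piecewise-linear spline: each evaluation point depends on only its two bracketing control points, so each Jacobian row has at most two nonzeros.

```python
import numpy as np

def linear_spline(knots, coeffs, x):
    """Evaluate a piecewise-linear spline with given control coefficients at x."""
    i = np.clip(np.searchsorted(knots, x, side="right") - 1, 0, len(knots) - 2)
    t = (x - knots[i]) / (knots[i + 1] - knots[i])
    return (1 - t) * coeffs[i] + t * coeffs[i + 1]

def spline_jacobian(knots, x):
    """(Weak) Jacobian d(output)/d(coeffs): each row has at most two nonzeros."""
    i = np.clip(np.searchsorted(knots, x, side="right") - 1, 0, len(knots) - 2)
    t = (x - knots[i]) / (knots[i + 1] - knots[i])
    J = np.zeros((len(x), len(knots)))
    rows = np.arange(len(x))
    J[rows, i] = 1 - t        # weight of the left bracketing control point
    J[rows, i + 1] = t        # weight of the right bracketing control point
    return J

knots = np.array([0.0, 1.0, 2.0, 3.0])
coeffs = np.array([0.0, 2.0, 1.0, 4.0])
x = np.array([0.5, 1.5, 2.5])
J = spline_jacobian(knots, x)
# Each row touches only two control points, so J is banded/block-sparse.
print((J != 0).sum(axis=1))  # [2 2 2]
```

Because the output is linear in the control coefficients, `J @ coeffs` reproduces `linear_spline(knots, coeffs, x)` exactly; higher-order splines widen the band but keep the same sparsity pattern.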
Rose: Composable Autodiff for the Interactive Web
Reverse-mode automatic differentiation (autodiff) has been popularized by deep learning, but its ability to compute gradients is also valuable for interactive use cases such as bidirectional computer-aided design, embedded physics simulations, visualizing causal inference, and more. Unfortunately, the web is ill-served by existing autodiff frameworks, which use autodiff strategies that perform poorly on dynamic scalar programs, and pull in heavy dependencies that would result in unacceptable webpage sizes. This work introduces Rose, a lightweight autodiff framework for the web using a new hybrid approach to reverse-mode autodiff, blending conventional tracing and transformation techniques in a way that uses the host language for metaprogramming while also allowing the programmer to explicitly define reusable functions that comprise a larger differentiable computation. We demonstrate the value of the Rose design by porting two differentiable physics simulations, and evaluate its performance on an optimization-based diagramming application, showing Rose outperforming the state-of-the-art in web-based autodiff by multiple orders of magnitude.
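To make the abstract's core technique concrete, here is a minimal reverse-mode autodiff tape built by operator-overloading traces. This is a generic sketch of the approach in Python, not Rose's actual TypeScript API; the `Var` class and its methods are illustrative names.

```python
class Var:
    """A traced scalar: records parents and local gradients as it is used."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # sequence of (parent Var, local gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Propagate adjoints through the recorded trace in reverse topological order."""
    order, seen = [], set()
    def topo(node):
        if node in seen:
            return
        seen.add(node)
        for parent, _ in node.parents:
            topo(parent)
        order.append(node)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

x, y = Var(3.0), Var(4.0)
z = x * y + x           # z = xy + x
backward(z)
print(x.grad, y.grad)   # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Pure tracing like this re-records the graph on every call, which is one of the costs on dynamic scalar programs that Rose's hybrid tracing-plus-transformation design is meant to avoid.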
- Award ID(s):
- 2119007
- PAR ID:
- 10542036
- Editor(s):
- Aldrich, Jonathan; Salvaneschi, Guido
- Publisher / Repository:
- Schloss Dagstuhl – Leibniz-Zentrum für Informatik
- Date Published:
- Volume:
- 313
- ISSN:
- 1868-8969
- ISBN:
- 978-3-95977-341-6
- Page Range / eLocation ID:
- 313-313
- Subject(s) / Keyword(s):
- Automatic differentiation; differentiable programming; compilers; web; Software and its engineering → Compilers; Information systems → Web applications; Software and its engineering → Domain specific languages; Computing methodologies → Symbolic and algebraic manipulation; Software and its engineering → Formal language definitions; General and reference → Performance; Computing methodologies → Neural networks; General and reference → General conference proceedings
- Format(s):
- Medium: X Size: 27 pages; 2081033 bytes Other: application/pdf
- Size(s):
- 27 pages; 2081033 bytes
- Right(s):
- Creative Commons Attribution 4.0 International license; info:eu-repo/semantics/openAccess
- Sponsoring Org:
- National Science Foundation
More Like this
-
-
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
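One of the techniques this survey distinguishes from symbolic differentiation is forward-mode AD via dual numbers, which can be sketched in a few lines (an illustrative toy, with names of my choosing, not the survey's code):

```python
import math

class Dual:
    """A dual number: a primal value plus a tangent carried alongside it."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # Product rule applied to the tangents, numerically, not symbolically.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# Differentiate f(x) = x * sin(x) at x = 2 by seeding the tangent with 1.
x = Dual(2.0, 1.0)
y = x * sin(x)
# y.val = 2*sin(2); y.dot = sin(2) + 2*cos(2), the exact derivative at x = 2.
```

Unlike symbolic differentiation, no expression for the derivative is ever built: the tangent is evaluated numerically alongside the primal, to machine precision, in a single pass.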
-
We conducted an experiment to evaluate the effects on fieldwork outcomes and interview mode of switching to a web-first mixed-mode data collection design (self-administered web interview and interviewer-administered telephone interview) from a telephone-only design. We examine whether the mixed-mode option leads to better survey outcomes, based on response rates, fieldwork outcomes, interview quality and costs. We also examine respondent characteristics associated with completing a web interview rather than a telephone interview. Our mode experiment study was conducted in the 2019 wave of the Transition into Adulthood Supplement (TAS) to the US Panel Study of Income Dynamics (PSID). TAS collects information biennially from approximately 3,000 young adults in PSID families. The shift to a mixed-mode design for TAS was aimed at reducing costs and increasing respondent cooperation. We found that for mixed-mode cases compared to telephone only cases, response rates were higher, interviews were completed faster and with lower effort, the quality of the interview data appeared better, and fieldwork costs were lower. A clear set of respondent characteristics reflecting demographic and socioeconomic characteristics, technology availability and use, time use, and psychological health were associated with completing a web interview rather than a telephone interview.
-
Abstract Mathematically representing the shape of an object is a key ingredient for solving inverse rendering problems. Explicit representations like meshes are efficient to render in a differentiable fashion but have difficulties handling topology changes. Implicit representations like signed‐distance functions, on the other hand, offer better support of topology changes but are much more difficult to use for physics‐based differentiable rendering. We introduce a new physics‐based inverse rendering pipeline that uses both implicit and explicit representations. Our technique enjoys the benefit of both representations by supporting both topology changes and differentiable rendering of complex effects such as environmental illumination, soft shadows, and interreflection. We demonstrate the effectiveness of our technique using several synthetic and real examples.
-
We present a generalized constitutive model for versatile physics simulation of inviscid fluids, Newtonian viscosity, hyperelasticity, viscoplasticity, elastoplasticity, and other physical effects that arise due to a mixture of these behaviors. The key ideas behind our formulation are the design of a generalized Kirchhoff stress tensor that can describe hyperelasticity, Newtonian viscosity and inviscid fluids, and the use of pre-projection and post-correction rules for simulating material behaviors that involve plasticity, including elastoplasticity and viscoplasticity. We show how our generalized Kirchhoff stress tensor can be coupled together into a generalized constitutive model that allows the simulation of diverse material behaviors by only changing parameter values. We present several side-by-side comparisons with physics simulations for specific constitutive models to show that our generalized model produces visually similar results. More notably, our formulation allows for inverse learning of unknown material properties directly from data using differentiable physics simulations. We present several 3D simulations to highlight the robustness of our method, even with multiple different materials. To the best of our knowledge, our approach is the first to recover the knowledge of unknown material properties without making explicit assumptions about the data.