This content will become publicly available on November 19, 2025

Title: Multi-Resolution Real-Time Deep Pose-Space Deformation
We present a hard-real-time multi-resolution mesh shape deformation technique for skeleton-driven soft-body characters. Producing mesh deformations at multiple levels of detail is important in many computer graphics applications. Our work targets applications where the multi-resolution shapes must be generated at fast speeds (“hard real-time”, e.g., a few milliseconds at most and preferably under 1 millisecond), as commonly needed in computer games, virtual reality and Metaverse applications. We assume that the character mesh is driven by a skeleton, and that high-quality character shapes are available in a set of training poses originating from a high-quality (slow) rig such as a volumetric FEM simulation. Our method combines multi-resolution analysis, a mesh partition of unity, and neural networks to learn the pre-skinning shape deformations in an arbitrary character pose. Combined with linear blend skinning, this makes it possible to reconstruct the training shapes, as well as interpolate and extrapolate them. Crucially, we simultaneously achieve this at hard-real-time rates and at multiple mesh resolution levels. Our technique makes it possible to trade deformation quality for memory and computation speed, to accommodate the strict requirements of modern real-time systems. Furthermore, we propose memory layout and code improvements to boost computation speeds. Previous methods for real-time approximation of quality shape deformations did not focus on hard real-time performance, or did not investigate the multi-resolution aspect of the problem. Compared to a “naive” approach that separately processes each hierarchical level of detail, our method offers a substantial memory reduction as well as computational speedups. It also makes it possible to construct the shape progressively, level by level, and to interrupt the computation at any time, enabling graceful degradation of the deformation detail.
We demonstrate our technique on several examples, including a stylized human character, human hands, and an inverse-kinematics-driven quadruped animal.
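The runtime combination the abstract describes, a learned pre-skinning corrective offset followed by linear blend skinning, can be sketched in NumPy. The function name, the data layout, and the source of the offsets (passed in directly here, rather than predicted by a multi-resolution network as in the paper) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def skin_vertices(rest_positions, weights, bone_transforms, psd_offsets):
    """Linear blend skinning applied after a pre-skinning corrective offset.

    rest_positions:  (V, 3) rest-pose vertex positions
    weights:         (V, B) skinning weights, each row summing to 1
    bone_transforms: (B, 3, 4) per-bone affine transforms [R | t]
    psd_offsets:     (V, 3) pose-dependent corrective offsets added
                     before skinning (e.g., output of a learned model)
    """
    corrected = rest_positions + psd_offsets  # pre-skinning correction
    # Homogeneous coordinates: (V, 4)
    homo = np.concatenate([corrected, np.ones((len(corrected), 1))], axis=1)
    # Positions as transformed by each bone individually: (B, V, 3)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the per-bone results with the skinning weights: (V, 3)
    return np.einsum('vb,bvi->vi', weights, per_bone)
```

Because the corrective offsets live in the pre-skinning (rest) space, the same learned offsets remain valid under any skeletal pose, which is what lets the skinning step stay cheap at runtime.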
Award ID(s): 1911224
PAR ID: 10555211
Author(s) / Creator(s): ;
Publisher / Repository: ACM
Date Published:
Journal Name: ACM Transactions on Graphics
ISSN: 1557-7368
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We present RigNet, an end-to-end automated method for producing animation rigs from input character models. Given an input 3D model representing an articulated character, RigNet predicts a skeleton that matches the animator expectations in joint placement and topology. It also estimates surface skin weights based on the predicted skeleton. Our method is based on a deep architecture that directly operates on the mesh representation without making assumptions on shape class and structure. The architecture is trained on a large and diverse collection of rigged models, including their mesh, skeletons and corresponding skin weights. Our evaluation is three-fold: we show better results than prior art when quantitatively compared to animator rigs; qualitatively we show that our rigs can be expressively posed and animated at multiple levels of detail; and finally, we evaluate the impact of various algorithm choices on our output rigs. 
  2. Physically based simulation is often combined with geometric mesh animation to add realistic soft‐body dynamics to virtual characters. This is commonly done using constraint‐based simulation whereby a soft‐tissue simulation is constrained to geometric animation of a subpart (or otherwise proxy representation) of the character. We observe that standard constraint‐based simulation suffers from an important flaw that limits the expressiveness of soft‐body dynamics. Namely, under correct physics, the frequency and amplitude of soft‐tissue dynamics arising from constraints (“inertial amplitude”) are coupled, and cannot be adjusted independently merely by adjusting the material properties of the model. This means that the space of physically based simulations is inherently limited and cannot capture all effects typically expected by computer animators. For example, animators need the ability to adjust the frequency, inertial amplitude, gravity sag and damping properties of the virtual character, independently from each other, as these are the primary visual characteristics of the soft‐tissue dynamics. We demonstrate that independence can be achieved by transforming the equations of motion into a non‐inertial reference coordinate frame, then scaling the resulting inertial forces, and then converting the equations of motion back to the inertial frame. Such scaling of inertia makes it possible for the animator to set the character's inertial amplitude independently from frequency. We also provide exact controls for the amount of character's gravity sag, and the damping properties. In our examples, we use linear blend skinning and pose‐space deformation for geometric mesh animation, and the Finite Element Method for soft‐body constrained simulation; but our idea of scaling inertial forces is general and applicable to other animation and simulation methods. We demonstrate our technique on several character examples.
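The idea of scaling inertial forces can be illustrated on a single particle tethered to an animated anchor. In the anchor's non-inertial frame the particle feels a fictitious force proportional to the frame's acceleration; scaling that term changes the dynamic amplitude without touching the stiffness that sets the oscillation frequency. This is a minimal sketch under those assumptions, not the paper's FEM formulation, and all names are illustrative:

```python
def step_with_scaled_inertia(x, v, anchor_accel, m, k, c, dt, inertia_scale):
    """One semi-implicit Euler step of a damped particle, expressed in the
    anchor's (non-inertial) frame.

    In the moving frame the particle experiences the fictitious inertial
    force -m * anchor_accel. Scaling it by `inertia_scale` adjusts the
    soft-tissue amplitude independently of the stiffness k, which governs
    the oscillation frequency sqrt(k / m).
    """
    spring = -k * x - c * v                        # restoring + damping force
    inertial = -inertia_scale * m * anchor_accel   # scaled fictitious force
    a = (spring + inertial) / m
    v = v + dt * a   # update velocity first (semi-implicit)
    x = x + dt * v
    return x, v
```

Setting `inertia_scale = 1` recovers standard physics; values above or below 1 exaggerate or suppress the secondary motion while the frequency stays fixed, which mirrors the independent amplitude control the abstract describes.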
  3. Realistic simulation of the intricate wing deformations seen in flying insects not only deepens our comprehension of insect flight mechanics but also opens up numerous applications in fields such as computer animation and virtual reality. Despite its importance, this research area has been relatively under-explored due to the complex and diverse wing structures and the intricate patterns of deformation. This paper presents an efficient skeleton-driven model specifically designed to simulate realistic wing deformations in real time across a wide range of flying insects. Our approach begins with the construction of a virtual skeleton that accurately reflects the distinct morphological characteristics of individual insect species. This skeleton serves as the foundation for simulating the intricate deformation wave propagation often observed in wing deformations. To faithfully reproduce the bending effect seen in these deformations, we introduce both internal and external forces that act on the wing joints, drawing on periodic wing-beat motion and a simplified aerodynamics model. Additionally, we utilize mass-spring algorithms to simulate the inherent elasticity of the wings, helping to prevent excessive twisting. Through various simulation experiments, comparisons, and user studies, we demonstrate the effectiveness, robustness, and adaptability of our model.
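A minimal version of the mass-spring elasticity component mentioned above might look like the following. The explicit integrator, the multiplicative damping factor, and all names are illustrative assumptions rather than the paper's exact scheme:

```python
import numpy as np

def mass_spring_step(pos, vel, edges, rest_len, k, damping, ext_force, mass, dt):
    """One explicit integration step of a mass-spring network (e.g. a wing
    membrane), with per-vertex external forces supplied by a wing-beat /
    aerodynamics model.

    pos, vel:  (V, 3) positions and velocities
    edges:     list of (i, j) vertex index pairs forming springs
    rest_len:  per-edge rest lengths
    damping:   velocity retention factor in (0, 1]
    """
    force = ext_force.copy()
    for (i, j), l0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > 1e-12:
            f = k * (length - l0) * d / length  # Hooke spring along the edge
            force[i] += f
            force[j] -= f
    vel = (vel + dt * force / mass) * damping   # explicit velocity update
    pos = pos + dt * vel
    return pos, vel
```

Driving the joint vertices kinematically from the skeleton while integrating the membrane vertices with a step like this is one plausible way to combine the skeleton-driven wave propagation with spring elasticity that the abstract outlines.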
  4. Neural Radiance Field (NeRF) has emerged as a leading technique for novel view synthesis, owing to its impressive photorealistic reconstruction and rendering capability. Nevertheless, achieving real-time NeRF rendering in large-scale scenes has presented challenges, often leading to the adoption of either intricate baked mesh representations with a substantial number of triangles or resource-intensive ray marching in baked representations. We challenge these conventions, observing that high-quality geometry, represented by meshes with substantial triangles, is not necessary for achieving photorealistic rendering quality. Consequently, we propose MixRT, a novel NeRF representation that includes a low-quality mesh, a view-dependent displacement map, and a compressed NeRF model. This design effectively harnesses the capabilities of existing graphics hardware, thus enabling real-time NeRF rendering on edge devices. Leveraging a highly-optimized WebGL-based rendering framework, our proposed MixRT attains real-time rendering speeds on edge devices (over 30 FPS at a resolution of 1280 x 720 on a MacBook M1 Pro laptop), better rendering quality (0.2 PSNR higher in indoor scenes of the Unbounded-360 datasets), and a smaller storage size (less than 80% compared to state-of-the-art methods). 
  5. We propose a novel technique for producing high-quality 3D models that match a given target object image or scan. Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape. Unlike previous approaches that independently focus on either shape retrieval or deformation, we propose a joint learning procedure that simultaneously trains the neural deformation module along with the embedding space used by the retrieval module. This enables our network to learn a deformation-aware embedding space, so that retrieved models are more amenable to match the target after an appropriate deformation. In fact, we use the embedding space to guide the shape pairs used to train the deformation module, so that it invests its capacity in learning deformations between meaningful shape pairs. Furthermore, our novel part-aware deformation module can work with inconsistent and diverse part-structures on the source shapes. We demonstrate the benefits of our joint training not only on our novel framework, but also on other state-of-the-art neural deformation modules proposed in recent years. Lastly, we also show that our jointly-trained method outperforms various non-joint baselines.
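The coupling at the heart of this joint training, letting retrieval scores weight the deformation module's fitting error so that both modules are optimized together, can be caricatured with a soft retrieval weighting. The softmax weighting and every name below are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def joint_retrieval_deformation_loss(target_emb, source_embs, deformed_dists):
    """Sketch of a deformation-aware retrieval objective.

    target_emb:     (D,) embedding of the target shape
    source_embs:    (N, D) embeddings of the database shapes
    deformed_dists: (N,) fitting error of each database shape to the
                    target AFTER the deformation module has deformed it
    """
    d = np.linalg.norm(source_embs - target_emb, axis=1)  # embedding distances
    w = np.exp(-d) / np.exp(-d).sum()                     # soft retrieval weights
    retrieval_loss = (w * d).sum()
    deformation_loss = (w * deformed_dists).sum()  # fit-after-deformation term
    return retrieval_loss + deformation_loss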