

Search for: All records

Creators/Authors contains: "Liu, C."

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available October 1, 2025
  2. Free, publicly-accessible full text available October 1, 2025
  3. Free, publicly-accessible full text available October 1, 2025
  4. Free, publicly-accessible full text available October 1, 2025
  5. Free, publicly-accessible full text available May 1, 2025
  6. Free, publicly-accessible full text available July 1, 2025
  7. Free, publicly-accessible full text available May 1, 2025
  8. Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours. 
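The two-stage cascade described in the abstract can be illustrated with a toy numpy sketch. This is not OMOMO's implementation: the actual system uses learned conditional diffusion denoisers, whereas the stage functions, the `max_dist` threshold, and the interpolation below are illustrative stand-ins showing only the data flow (object motion → hand positions → contact projection → full-body pose).

```python
import numpy as np

def stage_a_predict_hands(object_motion):
    """Stage A: predict left/right hand positions from object motion.
    Stand-in for the first learned denoiser: place the two hands at
    fixed offsets from the object trajectory."""
    offsets = np.array([[-0.2, 0.0, 0.0], [0.2, 0.0, 0.0]])
    return object_motion[:, None, :] + offsets  # (T, 2, 3)

def enforce_contact(hand_pos, object_motion, max_dist=0.25):
    """Explicit contact constraint on the intermediate representation:
    clamp each hand to lie within max_dist of the object trajectory."""
    diff = hand_pos - object_motion[:, None, :]
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)
    scale = np.minimum(1.0, max_dist / np.maximum(dist, 1e-8))
    return object_motion[:, None, :] + diff * scale

def stage_b_full_body(hand_pos, n_joints=22):
    """Stage B: synthesize full-body poses conditioned on the hand
    positions. Stand-in for the second learned denoiser: interpolate
    joint positions between the two hands."""
    w = np.linspace(0.0, 1.0, n_joints)[None, :, None]
    return (1 - w) * hand_pos[:, :1, :] + w * hand_pos[:, 1:, :]

# Object slides horizontally at waist height for T frames.
T = 60
object_motion = np.stack([np.linspace(0.0, 1.0, T),
                          np.zeros(T),
                          np.full(T, 0.9)], axis=-1)  # (T, 3)

hands = enforce_contact(stage_a_predict_hands(object_motion), object_motion)
body = stage_b_full_body(hands)
print(body.shape)  # (60, 22, 3): frames x joints x xyz
```

The key structural point from the abstract survives even in this toy version: because hand positions are an explicit intermediate, the contact constraint can be imposed by direct projection between the two stages rather than hoped for from a single end-to-end model.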
  9. Free, publicly-accessible full text available January 1, 2026
  10. Abstract We develop a thin-film microstructural model that represents structural markers (i.e., triple junctions in the two-dimensional projections of the structure of films with columnar grains) in terms of a stochastic, marked point process and the microstructure itself in terms of a grain-boundary network. The advantage of this representation is that it is conveniently applicable to the characterization of microstructures obtained from crystal orientation mapping, leading to a picture of an ensemble of interacting triple junctions, while providing results that inform grain-growth models with experimental data. More specifically, calculated quantities such as pair, partial pair and mark correlation functions, along with the microstructural mutual information (entropy), highlight effective triple junction interactions that dictate microstructural evolution. To validate this approach, we characterize microstructures from Al thin films via crystal orientation mapping and formulate an approach, akin to classical density functional theory, to describe grain growth that embodies triple-junction interactions. 
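The pair correlation function mentioned in the abstract is a standard point-process statistic, and a minimal sketch shows what it measures for a set of triple-junction coordinates. This is a generic, naive estimator with edge effects ignored, applied to synthetic uniform points as a stand-in for mapped triple junctions; it is not the authors' code, and it omits the partial pair and mark correlations computed in the paper.

```python
import numpy as np

def pair_correlation(points, r_edges, area=1.0):
    """Naive g(r) estimate for a 2D point pattern: count unique pairs
    per distance shell and normalize by the pair count expected under
    complete spatial randomness (edge corrections omitted)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]           # unique pair distances
    counts, _ = np.histogram(d, bins=r_edges)
    r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
    shell_area = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2)
    density = n / area
    expected = 0.5 * n * density * shell_area  # CSR pairs per shell
    return r_mid, counts / expected

# Synthetic stand-in for triple-junction coordinates extracted from
# crystal orientation maps (unit-square observation window).
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(400, 2))

r, g = pair_correlation(pts, np.linspace(0.0, 0.3, 16))
```

For a pattern with no interactions, g(r) hovers near 1; systematic departures from 1 at short range are what signal the effective triple-junction interactions the paper extracts from the Al thin-film data.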