Title: Lifted Curls: A Model for Tightly Coiled Hair Simulation
We present an isotropic, hyperelastic model specifically designed for the efficient simulation of tightly coiled hairs whose curl radii approach 5 mm. Our model is robust to large bends and torsions, even when they appear at the scale of the strand discretization. The terms of our model are consistently quadratic with respect to their primary variables, do not require per-edge frames or any parallel transport operators, and can efficiently take large timesteps on the order of 1/30 of a second. Additionally, we show that it is possible to obtain fast, closed-form eigensystems for all the terms in the energy. Our eigenanalysis is sufficiently generic that it generalizes to other models. Our entirely vertex-based formulation integrates naturally with existing finite element codes, and we demonstrate its efficiency and robustness in a variety of scenarios.
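In simulation codes of this kind, closed-form eigensystems of the energy terms are typically used to project each indefinite element Hessian to positive semi-definite form so that Newton-type solvers remain stable. As a generic illustration of that projection (a minimal numerical sketch, not the paper's analytic derivation), the clamp can be written with a standard eigendecomposition:

```python
import numpy as np

def project_to_psd(H):
    """Clamp negative eigenvalues of a symmetric Hessian to zero.

    Closed-form eigensystems let a solver do this analytically per
    energy term; numpy's numerical eigendecomposition stands in here.
    """
    eigvals, eigvecs = np.linalg.eigh(H)
    eigvals = np.maximum(eigvals, 0.0)
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

# Example: an indefinite 3x3 Hessian (one negative eigenvalue)
H = np.array([[2.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, 0.5]])
H_psd = project_to_psd(H)
```

The projected matrix keeps the positive modes of `H` intact and zeroes out the negative one, which is what guarantees a descent direction in an implicit timestep.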
Award ID(s):
2132280
PAR ID:
10431374
Author(s) / Creator(s):
; ; ; ;
Editor(s):
Ye, Yuting; Wang, Huamin
Date Published:
Journal Name:
Proceedings of the ACM on Computer Graphics and Interactive Techniques
ISSN:
2577-6193
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Understanding the biological functions of proteins is of fundamental importance in modern biology. To represent protein function, the Gene Ontology (GO), a controlled vocabulary, is frequently used because it is easy for computer programs to handle, avoiding open-ended text interpretation. In particular, the majority of current protein function prediction methods rely on GO terms. However, the extensive list of GO terms that describes a protein's function can be challenging for biologists to interpret. In response, we developed GO2Sum (Gene Ontology terms Summarizer), a model that takes a set of GO terms as input and generates a human-readable summary using the T5 large language model. GO2Sum was developed by fine-tuning T5 on GO term assignments and free-text function descriptions for UniProt entries, enabling it to recreate function descriptions by concatenating GO term descriptions. Our results demonstrate that GO2Sum significantly outperforms the original T5 model, trained on the entire web corpus, in generating Function, Subunit Structure, and Pathway paragraphs for UniProt entries.
  2. It is usually assumed that interaction potentials in general, and the atom-surface potential in particular, can be expressed as an expansion in integer powers of the distance between the two interacting objects. Here, we show that logarithms of the atom-wall distance appear in the short-range expansion of the interaction potential of a neutral atom and a dielectric surface. These logarithms are accompanied by logarithmic sums over virtual excitations of the atom interacting with the surface, in analogy to Bethe logarithms in quantum electrodynamics. We verify the presence of the logarithmic terms in the short-range expansion using a model problem with realistic parameters. By contrast, no logarithmic terms appear in the long-range expansion of the atom-surface potential, and the interaction potential can be described by an expansion in inverse integer powers of the atom-wall distance. Several subleading terms in the large-distance expansion are obtained as a byproduct of our investigations. Our findings explain why the use of simple interpolating rational functions for the description of the atom-wall interaction in the intermediate regions leads to significant deviations from exact formulas.
  3. Generative models such as Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) trained on massive web corpora can memorize and disclose individuals' confidential and private data, raising legal and ethical concerns. While many previous works have addressed this issue in LLMs via machine unlearning, it remains largely unexplored for MLLMs. To tackle this challenge, we introduce the Multimodal Large Language Model Unlearning Benchmark (MLLMU-Bench), a novel benchmark aimed at advancing the understanding of multimodal machine unlearning. MLLMU-Bench consists of 500 fictitious profiles and 153 profiles of public celebrities, with each profile featuring over 14 customized question-answer pairs evaluated from both multimodal (image+text) and unimodal (text) perspectives. The benchmark is divided into four sets to assess unlearning algorithms in terms of efficacy, generalizability, and model utility. Finally, we provide baseline results using existing generative model unlearning algorithms. Surprisingly, our experiments show that unimodal unlearning algorithms excel in generation tasks, while multimodal unlearning approaches perform better in classification tasks with multimodal inputs.
  4. Stochastic model checking is a technique for analyzing systems that possess probabilistic characteristics. However, its scalability is limited as probabilistic models of real-world applications typically have very large or infinite state space. This paper presents a new infinite state CTMC model checker, STAMINA, with improved scalability. It uses a novel state space approximation method to reduce large and possibly infinite state CTMC models to finite state representations that are amenable to existing stochastic model checkers. It is integrated with a new property-guided state expansion approach that improves the analysis accuracy. Demonstration of the tool on several benchmark examples shows promising results in terms of analysis efficiency and accuracy compared with a state-of-the-art CTMC model checker that deploys a similar approximation method. 
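State space approximation of the kind described above is commonly implemented as a threshold-guided exploration: states whose estimated reachability probability falls below a cutoff are not expanded, keeping an infinite chain finite. The sketch below illustrates this generic idea on a toy birth-death chain; the function names and the `kappa` cutoff are hypothetical illustrations, not STAMINA's actual algorithm or API.

```python
from collections import deque

def truncate_state_space(initial, successors, kappa=1e-4, max_states=10_000):
    """Explore states whose estimated reachability probability exceeds
    kappa; everything below the cutoff is left unexpanded (in a real
    tool, its mass is redirected to an absorbing state)."""
    prob = {initial: 1.0}
    explored = set()
    frontier = deque([initial])
    while frontier and len(explored) < max_states:
        s = frontier.popleft()
        if s in explored or prob[s] < kappa:
            continue
        explored.add(s)
        for t, p in successors(s):  # (next state, transition probability)
            prob[t] = prob.get(t, 0.0) + prob[s] * p
            if t not in explored:
                frontier.append(t)
    return explored

# Toy birth-death chain on the non-negative integers:
# step up with probability 0.3, down (floored at 0) with 0.7
def successors(n):
    return [(n + 1, 0.3), (max(n - 1, 0), 0.7)]

states = truncate_state_space(0, successors, kappa=1e-4)
```

With these parameters the reachability estimate of state k decays like 0.3^k, so exploration stops after a handful of states even though the chain itself is infinite; tightening `kappa` trades analysis accuracy against state count, which is the tuning knob such approximation methods expose.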
  5. The large number of antennas in massive MIMO systems allows the base station to communicate with multiple users on the same time and frequency resource via multi-user beamforming. However, highly correlated user channels can drastically impede the spectral efficiency that multi-user beamforming achieves. It is therefore critical for the base station to schedule a suitable group of users in each time and frequency resource block to maximize spectral efficiency while adhering to fairness constraints among the users. In this paper, we consider the resource scheduling problem for massive MIMO systems, whose optimal solution is known to be NP-hard. Inspired by recent achievements in deep reinforcement learning (DRL) on problems with large action sets, we propose a dynamic scheduler for massive MIMO based on the state-of-the-art Soft Actor-Critic (SAC) DRL model and the K-Nearest Neighbors (KNN) algorithm. Through comprehensive simulations using realistic massive MIMO channel models as well as real-world datasets from channel measurement experiments, we demonstrate the effectiveness of the proposed model under various channel conditions. Our results show that it performs very close to the optimal proportionally fair (Opt-PF) scheduler in terms of spectral efficiency and fairness, with more than an order of magnitude lower computational complexity in medium network sizes where Opt-PF is computationally feasible. Our results also show the feasibility and high performance of the proposed scheduler in networks with a large number of users and resource blocks.
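Combining a continuous-action DRL model with KNN over a large discrete action set is usually done Wolpertinger-style: the actor emits a continuous "proto-action", the k nearest discrete actions are retrieved, and the critic picks the best of those candidates. The sketch below shows that mapping with a hypothetical scoring function standing in for a trained SAC critic (a minimal illustration, not the paper's scheduler):

```python
import numpy as np

def knn_discrete_action(proto, action_table, critic, k=3):
    """Map a continuous actor output (proto-action) to a discrete
    scheduling action: take the k nearest rows of action_table
    (candidate user selections, one row per discrete action) and
    let the critic pick the highest-scoring one."""
    dists = np.linalg.norm(action_table - proto, axis=1)
    candidates = np.argsort(dists)[:k]           # k nearest discrete actions
    scores = [critic(action_table[i]) for i in candidates]
    return int(candidates[int(np.argmax(scores))])

# Toy example: 4 users, each discrete action schedules one user (one-hot)
action_table = np.eye(4)
proto = np.array([0.9, 0.1, 0.0, 0.2])           # actor leans toward user 0
critic = lambda a: float(a @ np.array([1.0, 0.5, 0.2, 0.8]))
best = knn_discrete_action(proto, action_table, critic, k=2)
```

Restricting the critic evaluation to k candidates instead of the full action set is what keeps the per-decision cost low as the number of users and resource blocks grows.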