Color composition (or color theme) is a key factor in determining how well a piece of artwork or graphical design is perceived by humans. Although a few color harmony models have been proposed, their results are often unsatisfactory because they largely neglect the variation of aesthetic cognition among individuals and treat all ratings equally, as if they were all given by the same anonymous user. To overcome this issue, in this article we propose a new color theme evaluation model that combines a back-propagation neural network and a kernel probabilistic model to infer both the color theme rating and the user's aesthetic preference. Our experimental results show that our model can predict more accurate and personalized color theme ratings than state-of-the-art methods. Our work is also the first-of-its-kind effort to quantitatively evaluate the correlation between user aesthetic preferences and the color harmony of five-color themes, and to study this relation for users with different aesthetic cognition.
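A minimal sketch of the kind of personalized rating network described above, not the authors' implementation: a small back-propagation network maps a five-color theme (15 flattened RGB values) plus a learned per-user preference embedding to a scalar rating. The layer sizes, the embedding dimension, and the [1, 5] rating scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ThemeRater(nn.Module):
    def __init__(self, num_users, user_dim=8, hidden=64):
        super().__init__()
        # Per-user embedding stands in for individual aesthetic preference.
        self.user_emb = nn.Embedding(num_users, user_dim)
        self.mlp = nn.Sequential(
            nn.Linear(15 + user_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, theme_rgb, user_id):
        # theme_rgb: (batch, 15) flattened RGB of a five-color theme in [0, 1]
        x = torch.cat([theme_rgb, self.user_emb(user_id)], dim=-1)
        return 1.0 + 4.0 * torch.sigmoid(self.mlp(x)).squeeze(-1)  # rating in [1, 5]

# Usage: train with MSE against observed per-user ratings.
model = ThemeRater(num_users=100)
rating = model(torch.rand(4, 15), torch.randint(0, 100, (4,)))
```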
-
Realistic simulation of the intricate wing deformations seen in flying insects not only deepens our comprehension of insect flight mechanics but also opens up numerous applications in fields such as computer animation and virtual reality. Despite its importance, this research area has been relatively under-explored due to the complex and diverse wing structures and the intricate patterns of deformation. This paper presents an efficient skeleton-driven model specifically designed to simulate realistic wing deformations in real time across a wide range of flying insects. Our approach begins with the construction of a virtual skeleton that accurately reflects the distinct morphological characteristics of individual insect species. This skeleton serves as the foundation for simulating the deformation wave propagation often observed in wing deformations. To faithfully reproduce the bending effect seen in these deformations, we introduce both internal and external forces that act on the wing joints, drawing on periodic wing-beat motion and a simplified aerodynamics model. Additionally, we utilize mass-spring algorithms to simulate the inherent elasticity of the wings, helping to prevent excessive twisting. Through various simulation experiments, comparisons, and user studies, we demonstrate the effectiveness, robustness, and adaptability of our model.
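A minimal sketch, not the paper's implementation, of one mass-spring integration step for a wing membrane driven by a periodic wing-beat force at the joint particles. The stiffness, damping, beat frequency, and time step below are illustrative placeholders.

```python
import numpy as np

def step(pos, vel, springs, rest_len, k=500.0, damping=2.0,
         joint_ids=(), t=0.0, beat_hz=25.0, amp=1.0, mass=1e-3, dt=1e-4):
    """One semi-implicit Euler step.
    pos, vel: (N, 3) particle states; springs: (M, 2) particle index pairs."""
    force = np.zeros_like(pos)
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
    f = k * (length - rest_len[:, None]) * (d / length)      # Hooke's law
    np.add.at(force, i, f)
    np.add.at(force, j, -f)
    force -= damping * vel                                    # viscous damping
    # Periodic wing-beat drive applied at the wing-joint particles (assumption).
    force[list(joint_ids), 2] += amp * np.sin(2 * np.pi * beat_hz * t)
    vel = vel + dt * force / mass
    return pos + dt * vel, vel
```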
-
In recent years, the field of crowd simulation has experienced significant advancements, attributed in part to improvements in hardware performance, coupled with a notable emphasis on agent-based characteristics. Agent-based simulations stand out as the preferred methodology when researchers seek to model agents with unique behavioral traits and purpose-driven actions, a crucial aspect for simulating diverse and realistic crowd movements. This survey adopts a systematic approach, delving into the array of factors vital for simulating a heterogeneous microscopic crowd. The emphasis is placed on scrutinizing low-level behavioral details and individual features of virtual agents to capture a nuanced understanding of their interactions. The survey is based on studies published in reputable peer-reviewed journals and conferences. The primary aim of this survey is to present the diverse advancements in the realm of agent-based crowd simulations, with a specific emphasis on the various aspects of agent behavior that researchers take into account when developing crowd simulation models. Additionally, the survey suggests future research directions with the objective of developing new applications that focus on achieving more realistic and efficient crowd simulations.
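For context, a minimal sketch of a generic agent-based crowd update (goal seeking plus local separation), the kind of low-level per-agent rule the surveyed models refine; the weights and radii here are illustrative and not taken from any specific surveyed paper.

```python
import numpy as np

def crowd_step(pos, vel, goals, dt=0.1, max_speed=1.4,
               sep_radius=0.6, w_goal=1.0, w_sep=2.0):
    """pos, vel, goals: (N, 2) arrays of agent positions, velocities, and targets."""
    to_goal = goals - pos
    desired = to_goal / (np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9)
    steer = w_goal * (desired * max_speed - vel)              # goal-seeking force
    # Pairwise separation: push away from neighbors closer than sep_radius.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pos)) * 1e9
    close = dist < sep_radius
    push = np.where(close[..., None], diff / (dist[..., None] ** 2 + 1e-9), 0.0)
    steer += w_sep * push.sum(axis=1)
    vel = vel + dt * steer
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel / speed * max_speed, vel)
    return pos + dt * vel, vel
```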
-
We present “Double Doodles” to make full use of two sequential inputs of a VR controller with 9 DOFs in total: 3 DOFs of the first input sequence for the generation of motion paths and 6 DOFs of the second input sequence for motion gestures. While engineering our system, we take ergonomics into consideration and design a set of user-defined motion gestures to describe character motions. We employ a real-time deep learning-based approach for highly accurate motion gesture classification. We then integrate our approach into a prototype system, which allows users to directly create character animations in VR environments using motion gestures with a VR controller, followed by animation preview and interactive animation editing. Finally, we evaluate the feasibility and effectiveness of our system through a user study, demonstrating the usefulness of our system for visual storytelling dedicated to amateurs, as well as for providing fast drafting tools for artists.
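A minimal sketch of a recurrent motion-gesture classifier over 6-DOF controller sequences; the architecture, hidden size, and sequence length are assumptions rather than the system's actual network.

```python
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    def __init__(self, num_gestures, input_dim=6, hidden=128):
        super().__init__()
        # One LSTM over per-frame controller samples (position + orientation).
        self.lstm = nn.LSTM(input_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_gestures)

    def forward(self, seq):
        # seq: (batch, frames, 6) samples from the second (gesture) input sequence
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])          # per-gesture logits

# Usage: classify a 90-frame gesture stroke into one of 10 user-defined gestures.
logits = GestureClassifier(num_gestures=10)(torch.randn(2, 90, 6))
```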
-
This paper presents a computational study to analyze and predict turns (i.e., turn-taking and turn-keeping) in multiparty conversations. Specifically, we use a high-fidelity hybrid data acquisition system to capture a large-scale set of multi-modal natural conversational behaviors of interlocutors in three-party conversations, including gazes, head movements, body movements, speech, etc. Based on the inter-pausal units (IPUs) extracted from the in-house acquired dataset, we propose a transformer-based computational model to predict turns based on the interlocutor states (speaking/back-channeling/silence) and the gaze targets. Our model can robustly achieve more than 80% accuracy, and its generalizability was extensively validated through cross-group experiments. We also introduce a novel computational metric called the “relative engagement level” (REL) of IPUs, and further validate its statistical significance between turn-keeping and turn-taking IPUs, and between different conversational groups. Our experiments also show that the patterns of the interlocutor states can be used as a more effective cue than their gaze behaviors for predicting turns in multiparty conversations.
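A minimal sketch, not the authors' model: a transformer encoder over a window of per-IPU features (interlocutor speaking/back-channeling/silence states plus gaze targets, assumed here to be one-hot encoded) that classifies the current IPU as turn-keeping or turn-taking. Feature and model sizes are illustrative.

```python
import torch
import torch.nn as nn

class TurnPredictor(nn.Module):
    def __init__(self, feat_dim=12, d_model=64, heads=4, layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(d_model, 2)      # turn-keeping vs. turn-taking

    def forward(self, ipu_feats):
        # ipu_feats: (batch, window, feat_dim) features of the most recent IPUs
        h = self.encoder(self.proj(ipu_feats))
        return self.head(h[:, -1])             # classify the most recent IPU

# Usage: a window of 16 IPUs with 12-dimensional features each.
logits = TurnPredictor()(torch.randn(8, 16, 12))
```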
-
Simulating realistic butterfly motion has been a widely-known challenging problem in computer animation. Arguably, one of its main reasons is the difficulty of acquiring accurate flight motion of butterflies. In this paper we propose a practical yet effective, optical marker-based approach to capture and process the detailed motion of a flying butterfly. Specifically, we first capture the trajectories of the wings and thorax of a flying butterfly using optical marker-based motion tracking. After that, our method automatically fills the positions of missing markers by exploiting the continuity and relevance of neighboring frames, and improves the quality of the captured motion via noise filtering with optimized parameter settings. Through comparisons with existing motion processing methods, we demonstrate the effectiveness of our approach in obtaining accurate flight motions of butterflies. Furthermore, we created and will release a first-of-its-kind butterfly motion capture dataset to the research community.
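A minimal sketch of the two post-processing stages described above, using simple stand-ins: per-channel linear interpolation over missing (NaN) marker samples and a moving-average smoother. The paper's actual gap-filling and filter-parameter optimization are more involved than this.

```python
import numpy as np

def fill_and_smooth(traj, window=5):
    """traj: (frames, channels) marker trajectory with NaNs where markers are lost."""
    traj = traj.copy()
    frames = np.arange(len(traj))
    for c in range(traj.shape[1]):
        missing = np.isnan(traj[:, c])
        if missing.any() and not missing.all():
            # Fill gaps from the surrounding (observed) neighboring frames.
            traj[missing, c] = np.interp(frames[missing], frames[~missing],
                                         traj[~missing, c])
    # Simple moving-average noise filter applied per channel.
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, traj)
    return smoothed
```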
-
In this paper we propose a novel conditional generative adversarial network (cGAN) architecture, called S2M-Net, to holistically synthesize realistic three-party conversational animations based on acoustic speech input together with speaker marking (i.e., the speaking time of each interlocutor). Specifically, based on a pre-collected three-party conversational motion dataset, we design and train the S2M-Net for three-party conversational animation synthesis. In the architecture, the generator contains an LSTM encoder that encodes a sequence of acoustic speech features into a latent vector, which is further fed into a transform unit that maps the latent vector into a gesture kinematics space. The output of this transform unit is then fed into an LSTM decoder to generate the corresponding three-party conversational gesture kinematics. Meanwhile, a discriminator is implemented to check whether an input sequence of three-party conversational gesture kinematics is real or fake. To evaluate our method, besides quantitative and qualitative evaluations, we also conducted paired-comparison user studies to compare it with the state of the art.
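A minimal sketch, with dimensions and layer choices as assumptions, of the generator path described above: an LSTM encoder compresses the speech-feature sequence into a latent vector, a transform unit maps it toward gesture-kinematics space, and an LSTM decoder unrolls the three-party gesture sequence. The adversarial discriminator and training loop are omitted.

```python
import torch
import torch.nn as nn

class S2MGenerator(nn.Module):
    def __init__(self, speech_dim=40, latent=128, pose_dim=3 * 69):
        super().__init__()
        self.encoder = nn.LSTM(speech_dim, latent, batch_first=True)
        self.transform = nn.Sequential(nn.Linear(latent, latent), nn.ReLU())
        self.decoder = nn.LSTM(latent, latent, batch_first=True)
        # Joint kinematics for three interlocutors (pose_dim is a placeholder).
        self.out = nn.Linear(latent, pose_dim)

    def forward(self, speech, out_frames):
        # speech: (batch, frames, speech_dim) acoustic features with speaker marking
        _, (h, _) = self.encoder(speech)
        z = self.transform(h[-1])                        # latent vector
        z_seq = z.unsqueeze(1).repeat(1, out_frames, 1)  # repeat per output frame
        dec, _ = self.decoder(z_seq)
        return self.out(dec)                             # (batch, out_frames, pose_dim)

motion = S2MGenerator()(torch.randn(2, 100, 40), out_frames=100)
```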
-
Recovering 3D face models from in-the-wild face images has numerous potential applications. However, properly modeling complex lighting effects in reality, including specular lighting, shadows, and occlusions, from a single in-the-wild face image remains a largely open research challenge. In this paper, we propose a convolutional neural network-based framework to regress the face model from a single image in the wild. The output face model includes dense 3D shape, head pose, expression, diffuse albedo, specular albedo, and the corresponding lighting conditions. Our approach uses novel hybrid loss functions to disentangle face shape identities, expressions, poses, albedos, and lighting. Besides a carefully designed ablation study, we also conduct direct comparison experiments to show that our method can outperform state-of-the-art methods both quantitatively and qualitatively.
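A minimal sketch, not the paper's network: a small CNN that regresses a concatenated face-parameter vector (shape, expression, pose, diffuse and specular albedo, lighting) from a single image. The parameter sizes are placeholders, and the paper's hybrid disentangling losses are not reproduced here.

```python
import torch
import torch.nn as nn

# Placeholder parameter-group sizes; the actual model uses its own parameterization.
PARAM_DIMS = {"shape": 199, "expression": 29, "pose": 6,
              "diffuse": 199, "specular": 1, "lighting": 27}

class FaceRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, sum(PARAM_DIMS.values()))

    def forward(self, image):
        # image: (batch, 3, H, W); returns a dict of disentangled parameter groups
        out = self.head(self.backbone(image))
        chunks = torch.split(out, list(PARAM_DIMS.values()), dim=1)
        return dict(zip(PARAM_DIMS, chunks))

params = FaceRegressor()(torch.randn(1, 3, 224, 224))
```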
-
Butterflies are not only ubiquitous around the world but are also widely admired for their elegant and peculiar flights. However, realistically modeling and simulating butterfly flights—in particular, for real-time graphics and animation applications—remains an under-explored problem. In this article, we propose an efficient and practical model to simulate butterfly flights. We first model a butterfly with parametric maneuvering functions, including wing-abdomen interaction. Then, we simulate dynamic maneuvering control of the butterfly through our force-based model, which includes both the aerodynamic force and the vortex force. Through many simulation experiments and comparisons, we demonstrate that our method can efficiently simulate realistic butterfly flight motions in various real-world settings.
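A toy sketch of a force-based time step in this spirit, not the paper's model: the thorax is driven by a flapping-dependent aerodynamic force plus gravity, with the wing angle given by a simple parametric maneuvering function. The coefficients and the crude "extra lift on the downstroke" vortex term are illustrative placeholders.

```python
import numpy as np

def flight_step(pos, vel, t, dt=1e-3, mass=5e-4, beat_hz=10.0,
                c_aero=4e-3, c_vortex=1e-3, g=np.array([0.0, 0.0, -9.81])):
    # Parametric wing angle and its angular velocity (sinusoidal flapping).
    phi = np.pi / 3 * np.sin(2 * np.pi * beat_hz * t)
    phi_dot = np.pi / 3 * 2 * np.pi * beat_hz * np.cos(2 * np.pi * beat_hz * t)
    # Aerodynamic force grows with wing speed; the vortex term adds extra lift
    # on the downstroke (phi_dot < 0) as a rough stand-in for the vortex force.
    lift = c_aero * phi_dot ** 2 + c_vortex * max(-phi_dot, 0.0) ** 2
    force = mass * g + np.array([0.0, 0.0, lift])
    vel = vel + dt * force / mass
    return pos + dt * vel, vel, phi

# Usage: integrate one second of flight for the thorax center.
pos, vel = np.zeros(3), np.zeros(3)
for i in range(1000):
    pos, vel, phi = flight_step(pos, vel, t=i * 1e-3)
```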