<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Facial Emotion Expression Corpora for Training Game Character Neural Network Models</dc:title><dc:creator>Schiffer, Sheldon; Zhang, Samantha; Levine, Max</dc:creator><dc:corporate_author/><dc:editor/><dc:description>The emergence of photorealistic and cinematic non-player character (NPC) animation presents new challenges for video game developers. Game player expectations of cinematic acting styles bring a more sophisticated aesthetic in the representation of social interaction. New methods can streamline workflow by integrating actor-driven character design into the development of game character AI and animation. A workflow that tracks actor performance to final neural network (NN) design depends on a rigorous method of
producing single-actor video corpora from which to train emotion AI NN models. While numerous video corpora have been developed to study facial emotion elicitation, test theoretical models, and train neural networks to recognize emotion, single-actor corpora for training the NNs of NPCs in video games remain uncommon. A class of facial emotion recognition (FER) products has enabled the production of
single-actor video corpora that use emotion analysis data. This paper introduces a single-actor game character corpus workflow for game character developers. The proposed method uses a single-actor video corpus and dataset to train an NN and implement it in an off-the-shelf video game engine for facial animation
of an NPC. The efficacy of using an NN-driven animation controller has already been demonstrated (Schiffer, 2021; Kozasa et al., 2006). This paper focuses on using a single-actor video corpus to train an NN-driven animation controller.</dc:description><dc:publisher/><dc:date>2022-01-01</dc:date><dc:nsf_par_id>10423957</dc:nsf_par_id><dc:journal_name>International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP)</dc:journal_name><dc:journal_volume>2</dc:journal_volume><dc:journal_issue/><dc:page_range_or_elocation>197 to 208</dc:page_range_or_elocation><dc:issn/><dc:isbn/><dc:doi>https://doi.org/10.5220/0010874700003124</dc:doi><dcq:identifierAwardId>1852516</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>