
Title: Versatile Uncertainty Quantification of Contrastive Behaviors for Modeling Networked Anagram Games
In a networked anagram game, each team member is given a set of letters, and the team collectively forms as many words as possible. Players can share letters through a communication network to help their neighbors form words. Player behavior is highly variable: there can be large differences among players in the numbers of letter requests, replies to letter requests, and words formed. It is therefore important to understand the uncertainty and variability in player behavior. In this work, we propose versatile uncertainty quantification (VUQ) of behaviors for modeling the networked anagram game. Specifically, the proposed methods focus on building contrastive models of game player behavior that quantify player actions in terms of worst, average, and best performance. Moreover, we construct agent-based models and perform agent-based simulations using these VUQ methods to evaluate the model-building methodology and to understand the impact of uncertainty. We believe that this approach is applicable to other networked games.
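To make the contrastive worst/average/best idea concrete, here is a minimal sketch that summarizes a per-player behavior count at a low quantile, the mean, and a high quantile of its empirical distribution. The variable names, the quantile levels, and the use of NumPy are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): summarize a per-player behavior
# count at contrasting performance levels using empirical quantiles.
import numpy as np

def contrastive_summary(values, low=0.1, high=0.9):
    """Return (worst, average, best) summaries of a behavior count.

    `values` is assumed to hold per-player counts, e.g., the number of letter
    requests each player made; the quantile levels are placeholders.
    """
    values = np.asarray(values, dtype=float)
    worst = np.quantile(values, low)   # low-performance end of the distribution
    average = values.mean()            # typical behavior
    best = np.quantile(values, high)   # high-performance end
    return worst, average, best

# Example with hypothetical letter-request counts for ten players.
requests = [0, 1, 1, 2, 3, 3, 4, 6, 8, 12]
print(contrastive_summary(requests))
```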
Authors:
Award ID(s):
1916670
Publication Date:
NSF-PAR ID:
10310240
Journal Name:
Complex Networks and their Applications
Sponsoring Org:
National Science Foundation
More Like this
  1. In a group anagram game, players are provided letters to form as many words as possible. They can also request letters from their neighbors and reply to letter requests. Currently, a single agent-based model is produced from all experimental data, with dependence only on the number of neighbors. In this work, we build, exercise, and evaluate enhanced agent behavior models for networked group anagram games under an uncertainty quantification framework. Specifically, we cluster game data for players based on their skill levels (forming words, requesting letters, and replying to requests), perform multinomial logistic regression for transition probabilities, and quantify uncertainty within each cluster (an illustrative sketch of this clustering-and-regression pattern appears after this list). The result of this process is a model in which players are assigned different numbers of neighbors and different skill levels in the game. We conduct simulations of ego agents with neighbors to demonstrate the efficacy of our proposed methods.
  2. In anagram games, players are provided with letters for forming as many words as possible over a specified time duration. Anagram games have been used in controlled experiments to study problems such as collective identity, effects of goal setting, internal-external attributions, test anxiety, and others. The majority of work on anagram games involves individual players. Recently, work has expanded to group anagram games where players cooperate by sharing letters. In this work, we analyze experimental data from online social networked experiments of group anagram games. We develop mechanistic and data-driven models of human decision-making to predict detailed game player actions (e.g., what word to form next). With these results, we develop a composite agent-based modeling and simulation platform that incorporates the models from data analysis. We compare model predictions against experimental data, which enables us to provide explanations of human decision-making and behavior. Finally, we provide illustrative case studies using agent-based simulations to demonstrate the efficacy of models to provide insights that are beyond those from experiments alone.
  3. Anagram games (i.e., word-construction games in which players use letters to form words) have been researched for some 60 years. Games with individual players are the subject of over 20 published investigations. Moreover, there are many popular commercial anagram games such as Scrabble. Recently, cooperative team play of anagram games has been studied experimentally. With all of the experimental work and the popularity of such games, it is somewhat surprising that very little modeling of anagram games has been done to predict player behaviors and actions. We devise a cooperative group anagram game and develop an agent-based modeling and simulation framework to capture the player interactions of sharing letters and forming words (a minimal agent-loop sketch in this spirit appears after this list). Our primary goals are to understand, quantitatively predict, and explain individual and aggregate group behavior, through simulations, to inform the design of a group anagram game experimental platform.
  4. Teamwork is a set of interrelated reasoning, actions, and behaviors of team members that facilitate common objectives. Teamwork theory and experiments have resulted in a set of states and processes for team effectiveness in both human-human and agent-agent teams. However, human-agent teaming is less well studied because it is so new and involves asymmetry in policy and intent not present in human teams. To optimize team performance in human-agent teaming, it is critical that agents infer human intent and adapt their policies for smooth coordination. Most literature in human-agent teaming builds agents that reference a learned human model. Though these agents are guaranteed to perform well with the learned model, they place heavy assumptions on human policy, such as optimality and consistency, which are unlikely to hold in many real-world scenarios. In this paper, we propose a novel adaptive agent architecture in a human-model-free setting on a two-player cooperative game, namely Team Space Fortress (TSF). Previous human-human team research has shown complementary policies in the TSF game and diversity in human players' skill, which encourages us to relax the assumptions on human policy. Therefore, we discard learning human models from human data and instead use an adaptation strategy on a pre-trained library of exemplar policies composed of RL algorithms or rule-based methods with minimal assumptions about human behavior. The adaptation strategy relies on a novel similarity metric to infer the human policy and then selects the most complementary policy in our library to maximize team performance (a simplified policy-selection sketch appears after this list). The adaptive agent architecture can be deployed in real time and generalizes to any off-the-shelf static agent. We conducted human-agent experiments to evaluate the proposed adaptive agent framework, and demonstrated the suboptimality, diversity, and adaptability of human policies in human-agent teams.
  5. Modeling player engagement is a key challenge in games. However, the gameplay signatures of engaged players can be highly context-sensitive, varying based on where the game is used or what population of players is using it. Traditionally, models of player engagement are investigated in a particular context, and it is unclear how effectively these models generalize to other settings and populations. In this work, we investigate a Bayesian hierarchical linear model for multi-task learning to devise a model of player engagement from a pair of datasets that were gathered in two complementary contexts: a Classroom Study with middle school students and a Laboratory Study with undergraduate students. Both groups of players used similar versions of Crystal Island, an educational interactive narrative game for science learning. Results indicate that the Bayesian hierarchical model outperforms both pooled and context-specific models in cross-validation measures of predicting player motivation from in-game behaviors, particularly for the smaller Classroom Study group. Further, the posterior distributions of the model parameters indicate that the coefficient for a measure of gameplay performance differs significantly between groups (a hedged sketch of such a hierarchical setup appears after this list). Drawing upon their capacity to share information across groups, hierarchical Bayesian methods provide an effective approach for modeling player engagement with data from similar, but different, contexts.
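For related abstract 1, the sketch below illustrates the general pattern of clustering players by skill features and then fitting a multinomial logistic regression for action-transition probabilities within one cluster. The feature names, the cluster count, the synthetic data, and the use of scikit-learn are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative clustering-and-regression sketch (assumed features, not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-player skill features: words formed, letters requested, replies sent.
skills = rng.poisson(lam=[4, 3, 2], size=(200, 3)).astype(float)

# Step 1: cluster players into skill groups (3 clusters is a placeholder choice).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(skills)

# Step 2: within one cluster, fit a multinomial logistic regression mapping a
# player's current state (placeholder features) to the probability of the next
# action (e.g., idle / request letter / reply / form word).
in_cluster = clusters == 0
X = rng.normal(size=(in_cluster.sum(), 2))      # placeholder state features
y = rng.integers(0, 4, size=in_cluster.sum())   # placeholder action labels
model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial with lbfgs for multi-class y

# Transition probabilities for a new state vector.
print(model.predict_proba([[0.5, -0.2]]))
```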
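For the group anagram agent-based models described in related abstracts 2 and 3, the following is a minimal agent loop in that spirit: agents hold letters, try to form words, and otherwise request a letter from a random neighbor, who replies with some probability. Every rule, probability, and data structure here is a simplifying assumption, not the published framework.

```python
# Minimal illustrative agent loop (assumed rules, not the authors' framework).
import random

class Agent:
    def __init__(self, name, letters, neighbors=None):
        self.name = name
        self.letters = list(letters)      # letters currently in hand
        self.neighbors = neighbors or []  # neighbors in the communication network
        self.words = []                   # words formed so far

    def step(self, dictionary):
        # Try to form a new word from the letters in hand (naive multiset check).
        for word in dictionary:
            if word not in self.words and all(
                self.letters.count(c) >= word.count(c) for c in set(word)
            ):
                self.words.append(word)
                return
        # Otherwise request a letter from a random neighbor, who shares a copy
        # of one of its letters with a fixed (placeholder) probability.
        if self.neighbors:
            neighbor = random.choice(self.neighbors)
            if neighbor.letters and random.random() < 0.5:
                self.letters.append(random.choice(neighbor.letters))

# Tiny two-agent example on a single-edge network.
a, b = Agent("a", "cat"), Agent("b", "dgo")
a.neighbors, b.neighbors = [b], [a]
for _ in range(5):
    for agent in (a, b):
        agent.step(dictionary=["cat", "dog", "tag"])
print(a.words, b.words)
```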
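For related abstract 4, the sketch below shows the generic pattern of scoring a library of exemplar policies against observed human behavior with a similarity function and returning the paired complementary agent policy. The cosine similarity, the feature vectors, and the pairing table are stand-ins, not the TSF architecture.

```python
# Illustrative policy-selection sketch (assumed representations, not the TSF system).
import numpy as np

# Hypothetical library: each exemplar human-like policy is summarized by a
# feature vector (e.g., action frequencies) and paired with the agent policy
# assumed to complement it best.
library = {
    "aggressive": (np.array([0.70, 0.20, 0.10]), "support_agent"),
    "defensive":  (np.array([0.10, 0.30, 0.60]), "attack_agent"),
    "balanced":   (np.array([0.34, 0.33, 0.33]), "mirror_agent"),
}

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_partner(observed_behavior):
    """Infer the closest exemplar human policy and return its complementary agent."""
    best = max(library, key=lambda k: cosine_similarity(library[k][0], observed_behavior))
    return best, library[best][1]

# Observed human action frequencies from recent play (placeholder numbers).
print(select_partner(np.array([0.60, 0.25, 0.15])))
```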
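For related abstract 5, a hierarchical (partial-pooling) linear model with group-level intercepts and slopes can be sketched as below. The simulated data, the single gameplay-performance predictor, the priors, and the use of PyMC are placeholders chosen for illustration, not the study's model.

```python
# Illustrative hierarchical regression sketch with PyMC; data, priors, and
# variable names are assumptions, not the study's model.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
group = np.repeat([0, 1], 50)                       # 0 = Classroom, 1 = Laboratory (placeholder)
performance = rng.normal(size=100)                  # standardized gameplay performance
engagement = 0.5 * performance + 0.3 * group + rng.normal(scale=0.5, size=100)

with pm.Model() as model:
    # Population-level slope and group-level deviations (partial pooling).
    mu_beta = pm.Normal("mu_beta", 0.0, 1.0)
    sigma_beta = pm.HalfNormal("sigma_beta", 1.0)
    beta = pm.Normal("beta", mu_beta, sigma_beta, shape=2)   # one slope per group
    alpha = pm.Normal("alpha", 0.0, 1.0, shape=2)            # one intercept per group
    sigma = pm.HalfNormal("sigma", 1.0)

    mu = alpha[group] + beta[group] * performance
    pm.Normal("obs", mu, sigma, observed=engagement)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Compare group-specific slopes for the performance measure.
print(idata.posterior["beta"].mean(dim=("chain", "draw")))
```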