
Title: Approaches to Uncertainty Quantification in Building Models of Human Behavior
In a group anagram game, players are provided with letters to form as many words as possible; they can also request letters from their neighbors and reply to letter requests. Currently, a single agent-based model is produced from all experimental data, with dependence only on the number of neighbors. In this work, we build, exercise, and evaluate enhanced agent behavior models for networked group anagram games under an uncertainty quantification framework. Specifically, we cluster game data for players based on their skill levels (forming words, requesting letters, and replying to requests), perform multinomial logistic regression to estimate transition probabilities, and quantify uncertainty within each cluster. The result of this process is a model in which players are assigned different numbers of neighbors and different skill levels in the game. We conduct simulations of ego agents with neighbors to demonstrate the efficacy of our proposed methods.
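As a rough illustration of the transition-probability step, the sketch below fits a multinomial logistic regression to synthetic player data. The action set, the two features (number of neighbors, letters in hand), and the data itself are assumptions for illustration only, not the study's actual specification.

```python
# Illustrative sketch: estimating next-action transition probabilities
# with multinomial logistic regression. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical player actions: 0 = idle, 1 = form word,
# 2 = request letter, 3 = reply to a request.
n = 600
X = np.column_stack([
    rng.integers(1, 9, n),    # number of neighbors (assumed feature)
    rng.integers(0, 12, n),   # letters currently in hand (assumed feature)
])
y = rng.integers(0, 4, n)     # synthetic next-action labels

# scikit-learn's LogisticRegression handles the multinomial case directly.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated transition probabilities for a player with 4 neighbors
# and 6 letters in hand; the four entries sum to 1.
probs = model.predict_proba([[4, 6]])[0]
print(probs.round(3))
```

In the paper's setting, a separate model of this kind would be fit within each skill cluster, so the estimated probabilities (and their uncertainty) vary by cluster.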
Journal Name:
Winter Simulation Conference
Sponsoring Org:
National Science Foundation
More Like this
  1. In a networked anagram game, each team member is given a set of letters and members collectively form as many words as possible. They can share letters through a communication network to assist their neighbors in forming words. There is variability in player behaviors; e.g., there can be large differences among players in the numbers of letter requests, replies to letter requests, and words formed. Therefore, it is of great importance to understand uncertainty and variability in player behaviors. In this work, we propose versatile uncertainty quantification (VUQ) of behaviors for modeling the networked anagram game. Specifically, the proposed methods focus on building contrastive models of game player behaviors that quantify player actions in terms of worst, average, and best performance. Moreover, we construct agent-based models and perform agent-based simulations using these VUQ methods to evaluate the model-building methodology and understand the impact of uncertainty. We believe that this approach is applicable to other networked games.
  2. In anagram games, players are provided with letters for forming as many words as possible over a specified time duration. Anagram games have been used in controlled experiments to study problems such as collective identity, effects of goal setting, internal-external attributions, test anxiety, and others. The majority of work on anagram games involves individual players. Recently, work has expanded to group anagram games where players cooperate by sharing letters. In this work, we analyze experimental data from online social networked experiments of group anagram games. We develop mechanistic and data-driven models of human decision-making to predict detailed game player actions (e.g., what word to form next). With these results, we develop a composite agent-based modeling and simulation platform that incorporates the models from data analysis. We compare model predictions against experimental data, which enables us to provide explanations of human decision-making and behavior. Finally, we provide illustrative case studies using agent-based simulations to demonstrate the efficacy of models to provide insights that are beyond those from experiments alone.
  3. Anagram games (i.e., word construction games in which players use letters to form words) have been researched for some 60 years. Games with individual players are the subject of over 20 published investigations. Moreover, there are many popular commercial anagram games such as Scrabble. Recently, cooperative team play of anagram games has been studied experimentally. With all of the experimental work and the popularity of such games, it is somewhat surprising that very little modeling of anagram games has been done to predict player behavior/actions in them. We devise a cooperative group anagram game and develop an agent-based modeling and simulation framework to capture player interactions of sharing letters and forming words. Our primary goals are to understand, quantitatively predict, and explain individual and aggregate group behavior, through simulations, to inform the design of a group anagram game experimental platform.
  4. Teamwork is a set of interrelated reasoning, actions, and behaviors of team members that facilitate common objectives. Teamwork theory and experiments have resulted in a set of states and processes for team effectiveness in both human-human and agent-agent teams. However, human-agent teaming is less well studied because it is so new and involves asymmetries in policy and intent not present in human teams. To optimize team performance in human-agent teaming, it is critical that agents infer human intent and adapt their policies for smooth coordination. Most literature in human-agent teaming builds agents that reference a learned human model. Though these agents are guaranteed to perform well with the learned model, they place heavy assumptions on human policy, such as optimality and consistency, which are unlikely to hold in many real-world scenarios. In this paper, we propose a novel adaptive agent architecture in a human-model-free setting for a two-player cooperative game, namely Team Space Fortress (TSF). Previous human-human team research has shown complementary policies in the TSF game and diversity in human players' skill, which encourages us to relax the assumptions on human policy. Therefore, we discard learning human models from human data and instead use an adaptation strategy on a pre-trained library of exemplar policies composed of RL algorithms or rule-based methods with minimal assumptions about human behavior. The adaptation strategy relies on a novel similarity metric to infer human policy and then selects the most complementary policy in our library to maximize team performance. The adaptive agent architecture can be deployed in real time and generalizes to any off-the-shelf static agents. We conducted human-agent experiments to evaluate the proposed adaptive agent framework and demonstrated the suboptimality, diversity, and adaptability of human policies in human-agent teams.
  5. Video game tutorials allow players to gain mastery over game skills and mechanics. To hone players’ skills, it is beneficial to practice in environments that promote individual player skill sets. However, automatically generating environments that are mechanically similar to one another is a non-trivial problem. This paper presents a level generation method for Super Mario that stitches together pre-generated “scenes” containing specific mechanics, using mechanic sequences from agent playthroughs as input specifications. Given a sequence of mechanics, the proposed system uses an FI-2Pop algorithm and a corpus of scenes to perform automated level authoring. The proposed system outputs levels that can be beaten using a mechanical sequence similar to the target sequence but with a different playthrough experience. We compare the proposed system to a greedy method that selects scenes maximizing the number of matched mechanics. Unlike the greedy approach, the proposed system is able to maximize the number of matched mechanics while reducing emergent mechanics through the stitching process.
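The worst/average/best quantification described in item 1 above can be sketched with empirical quantiles over an observed behavior. The synthetic data and the 10th/90th-percentile thresholds below are illustrative assumptions, not the paper's actual choices.

```python
# Illustrative sketch of contrastive behavior levels: summarize a player
# behavior (e.g., words formed per minute) by worst, average, and best
# performance using empirical quantiles. Data and cutoffs are assumed.
import numpy as np

rng = np.random.default_rng(1)
words_per_minute = rng.poisson(3.0, size=200)  # synthetic player data

levels = {
    "worst": float(np.quantile(words_per_minute, 0.10)),   # low tail
    "average": float(np.mean(words_per_minute)),           # typical play
    "best": float(np.quantile(words_per_minute, 0.90)),    # high tail
}
print(levels)
```

An agent-based simulation could then draw a player's action rates from the "worst", "average", or "best" level to contrast pessimistic, typical, and optimistic group outcomes.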
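The greedy baseline mentioned in item 5 above can be sketched as a set-cover-style loop. The scene corpus and mechanic names below are invented for illustration, and the real system matches mechanic *sequences* rather than the unordered sets used here.

```python
# Minimal sketch of a greedy scene-selection baseline: repeatedly pick
# the scene covering the most not-yet-matched target mechanics.
# Corpus and mechanics are hypothetical examples.
corpus = {
    "scene_a": {"jump", "stomp"},
    "scene_b": {"jump", "coin"},
    "scene_c": {"coin", "shell"},
}
target = {"jump", "stomp", "coin", "shell"}

level, remaining = [], set(target)
while remaining:
    # Choose the scene that matches the most remaining mechanics.
    best = max(corpus, key=lambda s: len(corpus[s] & remaining))
    if not corpus[best] & remaining:
        break  # no scene covers anything that is left
    level.append(best)
    remaining -= corpus[best]

print(level, remaining)
```

The paper's FI-2Pop approach instead searches for a stitched level whose playthrough matches the target mechanic sequence while suppressing emergent mechanics, which this greedy set matching cannot express.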