In a networked anagram game, each team member is given a set of letters, and members collectively form as many words as possible. They can share letters through a communication network to assist their neighbors in forming words. Player behavior varies considerably: there can be large differences among players in the numbers of letter requests, replies to letter requests, and words formed. It is therefore important to understand uncertainty and variability in player behaviors. In this work, we propose versatile uncertainty quantification (VUQ) of behaviors for modeling the networked anagram game. Specifically, the proposed methods build contrastive models of game player behaviors that quantify player actions in terms of worst, average, and best performance. Moreover, we construct agent-based models and run agent-based simulations using these VUQ methods to evaluate the model-building methodology and understand the impact of uncertainty. We believe this approach is applicable to other networked games.
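The contrastive worst/average/best idea can be illustrated with a small sketch: summarize the empirical distribution of a behavior count at a low quantile, the mean, and a high quantile. The per-player counts and quantile choices below are hypothetical, not data or parameters from the paper:

```python
import numpy as np

# Hypothetical per-player counts of words formed (illustrative only).
words_formed = np.array([2, 5, 7, 3, 9, 4, 6, 8, 1, 5])

# One simple contrastive summary: quantify behavior at worst, average,
# and best performance levels via low/high quantiles and the mean.
worst = np.quantile(words_formed, 0.1)   # pessimistic performance level
average = words_formed.mean()            # typical performance level
best = np.quantile(words_formed, 0.9)    # optimistic performance level
print(worst, average, best)
```

An agent-based simulation could then be run three times, once per performance level, to bound the range of group outcomes.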
Generating Levels That Teach Mechanics
The automatic generation of game tutorials is a challenging AI problem. While it is possible to generate annotations and instructions that explain to the player how the game is played, this paper focuses on generating a gameplay experience that introduces the player to a game mechanic. The method evolves small levels for the Mario AI Framework that can only be beaten by an agent that knows how to perform specific in-game actions. It uses variations of a perfect A* agent, each limited in some way (such as being unable to jump high or to see enemies), to test whether the inability to perform certain actions prevents the agent from beating the level.
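The limited-agent test can be sketched abstractly: a level teaches a mechanic if a full-capability agent beats it while an agent missing that single capability cannot. The code below is a toy stand-in, not the Mario AI Framework API; `solve` abstracts an A* playthrough as a set-containment check, and the level and action names are invented:

```python
def solve(level, actions):
    # Toy stand-in for an A* playthrough: the agent wins iff it can
    # perform every action the level requires.
    return level["required"].issubset(set(actions))

def teaches_mechanic(level, solve, all_actions, mechanic):
    # A level teaches `mechanic` if the unrestricted agent beats it but
    # an agent stripped of that one action cannot.
    limited = [a for a in all_actions if a != mechanic]
    return solve(level, all_actions) and not solve(level, limited)

level = {"required": {"run", "high_jump"}}
actions = ["run", "high_jump", "stomp"]
print(teaches_mechanic(level, solve, actions, "high_jump"))  # True
print(teaches_mechanic(level, solve, actions, "stomp"))      # False
```

An evolutionary search over levels could use this boolean (or a graded version of it) as a fitness signal.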
- Award ID(s): 1717324
- PAR ID: 10132610
- Date Published:
- Journal Name: FDG Workshop on Procedural Content Generation
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- When learning in strategic environments, a key question is whether agents can overcome uncertainty about their preferences to achieve outcomes they could have achieved absent any uncertainty. Can they do this solely through interactions with each other? We focus this question on the ability of agents to attain the value of their Stackelberg optimal strategy and study the impact of information asymmetry. We study repeated interactions in fully strategic environments where players' actions are decided based on learning algorithms that take into account their observed histories and knowledge of the game. We study the pure Nash equilibria (PNE) of a meta-game where players choose these algorithms as their actions. We demonstrate that if one player has perfect knowledge about the game, then any initial informational gap persists. That is, while there is always a PNE in which the informed agent achieves her Stackelberg value, there is a game where no PNE of the meta-game allows the partially informed player to achieve her Stackelberg value. On the other hand, if both players start with some uncertainty about the game, the quality of information alone does not determine which agent can achieve her Stackelberg value. In this case, the concept of information asymmetry becomes nuanced and depends on the game's structure. Overall, our findings suggest that repeated strategic interactions alone cannot facilitate learning effectively enough to earn an uninformed player her Stackelberg value.
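For readers unfamiliar with the term, a leader's Stackelberg value under pure-strategy commitment can be computed directly in a small bimatrix game: the leader commits to a row, the follower best-responds, and the leader takes the best committed outcome. The 2x2 payoffs below are invented for illustration, with ties broken in the leader's favor:

```python
import numpy as np

# Hypothetical bimatrix game: leader picks a row, follower picks a column.
leader_payoff = np.array([[3.0, 1.0],
                          [4.0, 0.0]])
follower_payoff = np.array([[2.0, 1.0],
                            [0.0, 3.0]])

def stackelberg_value(L, F):
    # For each committed row, find the follower's best responses, then
    # return the leader's best achievable payoff (leader-favorable ties).
    best = -np.inf
    for i in range(L.shape[0]):
        br = np.flatnonzero(F[i] == F[i].max())      # follower best responses
        best = max(best, max(L[i, j] for j in br))   # leader-favorable tie-break
    return best

print(stackelberg_value(leader_payoff, follower_payoff))  # 3.0
```

Here committing to row 0 yields 3.0 for the leader (the follower best-responds with column 0), whereas row 1 drives the follower to column 1 and only earns the leader 0.0.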
- In group anagram games, players cooperate to form words by sharing letters that they are initially given. The aim is to form as many words as possible as a group within five minutes. Players take several different actions: requesting letters from their neighbors, replying to letter requests, and forming words. Agent-based models (ABMs) for the game compute likelihoods of each player's next action; these likelihoods contain uncertainty because they are estimated from experimental data. We adopt a Bayesian approach as a natural means of quantifying uncertainty and enhancing the ABM for the group anagram game. Specifically, a Bayesian nonparametric clustering method groups player behaviors into clusters without pre-specifying the number of clusters, and Bayesian multinomial regression models the transition probabilities among the players' actions in the ABM. We describe the methodology and its benefits, and perform agent-based simulations of the game.
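As a minimal sketch of the Bayesian ingredient, a conjugate Dirichlet-multinomial update yields smoothed next-action probabilities from observed transition counts; the action names, counts, and prior strength below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical counts of transitions from action "request" to each next
# action (request, reply, form_word, idle) for one player cluster.
counts = np.array([12.0, 30.0, 25.0, 8.0])

# With a symmetric Dirichlet(alpha) prior, the posterior over next-action
# probabilities is Dirichlet(counts + alpha); its mean gives smoothed
# transition probabilities an ABM can sample from.
alpha = 1.0
posterior_mean = (counts + alpha) / (counts.sum() + alpha * len(counts))
print(posterior_mean)
```

The prior keeps rarely observed actions at small but nonzero probability, so simulated agents retain some behavioral variability.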
- Understanding players' mental models is crucial for game designers who wish to successfully integrate player-AI interactions into their games. However, game designers face the difficult challenge of anticipating how players model these AI agents during gameplay and how players' mental models change with experience. In this work, we conduct a qualitative study to examine how pairs of players develop mental models of an adversarial AI player during gameplay in the multiplayer drawing game iNNk. We conducted ten gameplay sessions in which two players (n = 20, 10 pairs) worked together to defeat an AI player. Our analysis uncovered two dominant dimensions that describe players' mental model development: focus and style. Focus refers to what players pay attention to when developing their mental model (top-down vs. bottom-up), while style refers to how players integrate new information into their mental model (systematic vs. reactive). In our preliminary framework, we further note how players process a change when a discrepancy occurs, which we observed occurring through comparisons (to other systems, to gameplay, and to themselves). We offer these results as a preliminary framework for player mental model development, to help game designers anticipate how different players may model adversarial AI players during gameplay.
 An official website of the United States government