
Title: Scoring Integrated Meaningful Play in Story-Driven Games with Q-Learning
Integrated meaningful play is the idea that players' choices should have a long-term effect on the game. In this paper we present the I-score (for "integrated"), a scoring function that measures integrated game play as a function of the game's storylines. I-scores lie in the range [0, 1]. In games with I-scores close to one, players' early choices determine the game's ending; choices made later in the game cannot change it. In contrast, in games with I-scores close to zero, players' choices can change the ending until the very end. Games with scores near 0.5 provide more balanced player choice: the ending can still be changed despite early decisions, but not at any arbitrary point.
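The abstract does not spell out how the I-score is computed (the paper uses Q-learning over the game's storylines). Purely as intuition for what the score captures, here is a toy metric over a choice tree: it measures how early along a playthrough the ending becomes locked in. All names and the scoring rule here are illustrative assumptions, not the paper's algorithm.

```python
# Toy illustration (not the paper's Q-learning method): model a story game
# as a nested dict whose leaves are ending labels, and score how early the
# ending becomes "locked in" along one playthrough.

def endings(node):
    """All endings reachable from a node in the choice tree."""
    if not isinstance(node, dict):      # a leaf is an ending label
        return {node}
    return set().union(*(endings(child) for child in node.values()))

def i_score(tree, path):
    """Fraction of a playthrough during which the ending was already fixed.

    path: the sequence of choices taken from the root. Returns a value in
    [0, 1]: near 1 means the ending was determined early; near 0 means it
    stayed open until the final choice.
    """
    node, locked_at = tree, len(path)
    for step, choice in enumerate(path):
        node = node[choice]
        if len(endings(node)) == 1:     # only one ending remains reachable
            locked_at = step + 1
            break
    return 1 - locked_at / len(path)

# Two-choice game where the first choice fully determines the ending:
early = {"L": {"L": "good", "R": "good"}, "R": {"L": "bad", "R": "bad"}}
# Two-choice game where the ending stays open until the last choice:
late = {"L": {"L": "good", "R": "bad"}, "R": {"L": "bad", "R": "good"}}

print(i_score(early, ["L", "L"]))  # 0.5: ending fixed after choice 1 of 2
print(i_score(late, ["L", "R"]))   # 0.0: ending open until the very end
```

In longer games the early-locked case approaches 1, matching the abstract's description of high I-scores.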
Osborn, Joseph C.
Journal Name: CEUR workshop proceedings
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract We introduce a new model of repeated games in large populations with random matching, overlapping generations, and limited records of past play. We prove that steady-state equilibria exist under general conditions on records. When the updating of a player’s record can depend on the actions of both players in a match, any strictly individually rational action can be supported in a steady-state equilibrium. When record updates can depend only on a player’s own actions, fewer actions can be supported. Here, we focus on the prisoner’s dilemma and restrict attention to strict equilibria that are coordination-proof, meaning that matched partners never play a Pareto-dominated Nash equilibrium in the one-shot game induced by their records and expected continuation payoffs. Such equilibria can support full cooperation if the stage game is either “strictly supermodular and mild” or “strongly supermodular,” and otherwise permit no cooperation at all. The presence of “supercooperator” records, where a player cooperates against any opponent, is crucial for supporting any cooperation when the stage game is “severe.”
  2. Regret minimization has proved to be a versatile tool for tree-form sequential decision making and extensive-form games. In large two-player zero-sum imperfect-information games, modern extensions of counterfactual regret minimization (CFR) are currently the practical state of the art for computing a Nash equilibrium. Most regret-minimization algorithms for tree-form sequential decision making, including CFR, require (i) an exact model of the player's decision nodes, observation nodes, and how they are linked, and (ii) full knowledge, at all times t, about the payoffs, even in parts of the decision space that are not encountered at time t. Recently, there has been growing interest in relaxing some of those restrictions and making regret minimization applicable to settings for which reinforcement learning methods have traditionally been used, for example, those in which only black-box access to the environment is available. We give the first, to our knowledge, regret-minimization algorithm that guarantees sublinear regret with high probability even when requirement (i), and thus also (ii), is dropped. We formalize an online learning setting in which the strategy space is not known to the agent and gets revealed incrementally whenever the agent encounters new decision points. We give an efficient algorithm that achieves O(T^(3/4)) regret with high probability for that setting, even when the agent faces an adversarial environment. Our experiments show it significantly outperforms the prior algorithms for the problem, which do not have such guarantees. It can be used in any application for which regret minimization is useful: approximating Nash equilibrium or quantal response equilibrium, approximating coarse correlated equilibrium in multi-player games, learning a best response, learning safe opponent exploitation, and online play against an unknown opponent/environment.
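The regret-minimization idea underlying CFR can be seen in its simplest textbook form, regret matching (Hart and Mas-Colell), applied to a one-shot matrix game. This is a generic illustration of the technique, not the algorithm from the abstract: two players repeatedly play rock-paper-scissors, each playing in proportion to its positive accumulated regrets, and their time-averaged strategies approach the uniform Nash equilibrium.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [0, 1, -1][(a - b) % 3]

def strategy(regret):
    """Play in proportion to positive regrets; uniform if none are positive."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iters, seed=0):
    """Self-play regret matching; returns both players' average strategies."""
    rng = random.Random(seed)
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy(regret[p]) for p in range(2)]
        acts = [rng.choices(range(ACTIONS), weights=strats[p])[0]
                for p in range(2)]
        for p in range(2):
            for a in range(ACTIONS):
                strat_sum[p][a] += strats[p][a]
            realized = payoff(acts[p], acts[1 - p])
            for a in range(ACTIONS):
                # Regret: how much better action a would have done.
                regret[p][a] += payoff(a, acts[1 - p]) - realized
    return [[s / iters for s in strat_sum[p]] for p in range(2)]

avg = train(20000)
# Average strategies approach the uniform Nash equilibrium (1/3, 1/3, 1/3).
```

CFR extends this same per-decision regret-matching update to every information set of a sequential game; the abstract's contribution is making such guarantees hold even when the tree of decision points is not known in advance.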
  3. Detection and responding to a player’s affect are important for serious games. A method for this purpose was tested within Chem-o-crypt, a game that teaches chemical equation balancing. The game automatically detects boredom, flow, and frustration using the Affdex SDK from Affectiva. The sensed affective state is then used to adapt the game play in an attempt to engage the player in the game. A randomized controlled experiment incorporating a Dynamic Bayesian Network that compared results from groups with the affect-sensitive states vs those without revealed that measuring affect and adapting the game improved learning for low domain-knowledge participants.
  4. In recent years, various mechanisms have been proposed to optimize players’ emotional experience. In this paper, we focus on suspense, one of the key emotions in gameplay. Most previous research on suspense management in games focused on narratives. Instead, we propose a new computational model of Suspense for Non-Narrative Gameplay (SNNG). SNNG is built around a Player Suspense Model (PSM) with three key factors: hope, fear, and uncertainty. These three factors are modeled as three sensors that can be triggered by particular game objects (e.g., NPCs) and game mechanics (e.g., health). A player’s feeling of suspense can be adjusted by altering the level of hope, fear, and uncertainty. Therefore, an SNNG-enhanced game engine could manage a player’s level of suspense by adding or removing game objects, diverting NPCs, adjusting game mechanics, and giving or withholding information. We tested our model by integrating SNNG into a Pacman game. Our preliminary experiment with nine subjects was encouraging.
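The abstract names the three factors of the Player Suspense Model but not how they combine. As a hedged sketch only, the structure might look like the following; the product combination rule and all field names are illustrative assumptions, not the SNNG paper's actual formula.

```python
from dataclasses import dataclass

@dataclass
class PlayerSuspenseModel:
    # The three PSM factors from the abstract, each assumed to lie in [0, 1].
    hope: float = 0.0         # raised by e.g. a reachable reward
    fear: float = 0.0         # raised by e.g. an approaching NPC
    uncertainty: float = 0.0  # raised by e.g. withheld information

    def suspense(self) -> float:
        # Illustrative assumption: suspense needs both stakes (hope, fear)
        # and an unresolved outcome (uncertainty), so a product captures
        # that all three must be present for suspense to register.
        return self.hope * self.fear * self.uncertainty

psm = PlayerSuspenseModel(hope=0.8, fear=0.9, uncertainty=0.5)
print(round(psm.suspense(), 2))  # 0.36
```

An SNNG-style engine would then raise or lower these factors by adding or removing game objects and adjusting mechanics, as the abstract describes.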
  5. Choice poetics is a formalist framework that seeks to concretely describe the impacts choices have on player experiences within narrative games. Developed in part to support algorithmic generation of narrative choices, the theory includes a detailed analytical framework for understanding the impressions choice structures make by analyzing the relationships among options, outcomes, and player goals. The theory also emphasizes the need to account for players' various modes of engagement, which vary both during play and between players. In this work, we illustrate the non-computational application of choice poetics to the analysis of two different games to further develop the theory and make it more accessible to others. We focus first on using choice poetics to examine the central repeated choice in "Undertale," and show how it can be used to contrast two different player types that will approach a choice differently. Finally, we give an example of fine-grained analysis using a choice from the game "Papers, Please," which breaks down options and their outcomes to illustrate exactly how the choice pushes players towards complicity via the introduction of uncertainty. Through all of these examples, we hope to show the usefulness of choice poetics as a framework for understanding narrative choices, and to demonstrate concretely how one could productively apply it to choices "in the wild."