This paper introduces a combination of regression and belief revision to allow agents to deal with inconsistencies while executing plans. Starting from an inconsistent history consisting of actions and observations, the proposed framework (1) computes the initial belief states that support the actions and observations and (2) uses a belief revision operator to repair the false initial belief state. The framework operates on domains with static causal laws and supports arbitrary sequences of actions. The paper illustrates how logic programming can be effectively used to support these processes.
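The abstract describes two steps: regressing over the history to find initial belief states that support it, and revising the agent's false initial belief. Below is a minimal Python sketch of that regress-then-revise loop; the STRIPS-style domain, all names, and the symmetric-difference revision operator are invented for illustration, and the paper's static causal laws and logic-programming encoding are omitted.

```python
from itertools import combinations

# Toy setting: fluents are atoms, a state is a frozenset of true fluents,
# and actions are STRIPS-like (preconditions, adds, deletes) triples.
FLUENTS = ["door_open", "has_key", "inside"]

ACTIONS = {
    "unlock": (frozenset({"has_key"}), frozenset({"door_open"}), frozenset()),
    "enter":  (frozenset({"door_open"}), frozenset({"inside"}), frozenset()),
}

def apply(state, name):
    pre, add, dele = ACTIONS[name]
    if not pre <= state:        # action not executable: this history branch fails
        return None
    return (state - dele) | add

def supports(init, history):
    """Check that `init` makes every action executable and every observation true.
    `history` mixes ('act', name) and ('obs', fluent, truth) entries."""
    state = init
    for entry in history:
        if entry[0] == "act":
            state = apply(state, entry[1])
            if state is None:
                return False
        else:
            _, fluent, truth = entry
            if (fluent in state) != truth:
                return False
    return True

def consistent_initial_states(history):
    """Regression step: all initial states that support the whole history."""
    all_states = [frozenset(c) for r in range(len(FLUENTS) + 1)
                  for c in combinations(FLUENTS, r)]
    return [s for s in all_states if supports(s, history)]

def revise(belief, history):
    """Revision step: keep the supporting states closest (by symmetric
    difference) to the agent's possibly false believed initial state."""
    candidates = consistent_initial_states(history)
    d = min(len(belief ^ s) for s in candidates)
    return [s for s in candidates if len(belief ^ s) == d]

# The agent falsely believed it held no key, yet the history succeeded.
history = [("act", "unlock"), ("act", "enter"), ("obs", "inside", True)]
print(revise(frozenset(), history))   # -> [frozenset({'has_key'})]
```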
Combining Intentionality and Belief: Revisiting Believable Character Plans
In this paper we present two studies supporting a plan-based model of narrative generation that reasons about both intentionality and belief. First, we compare the believability of agent plans taken from the spaces of valid classical plans, intentional plans, and belief plans. We show that the plans that make the most sense to humans are those in the overlapping regions of the intentionality and belief spaces. Second, we validate the model’s approach to representing anticipation, where characters form plans that involve actions they expect other characters to take. Using a short interactive scenario, we demonstrate that players not only find it believable when NPCs anticipate their actions, but sometimes actively anticipate the actions of NPCs in a way that is consistent with the model.
- Award ID(s):
- 1647427
- Publication Date:
- NSF-PAR ID:
- 10099254
- Journal Name:
- Proceedings of the 14th AAAI International Conference on Artificial Intelligence and Interactive Digital Entertainment
- Page Range or eLocation-ID:
- 222 - 228
- Sponsoring Org:
- National Science Foundation
More Like this
-
The Mixed-Reality Integrated Learning Environment (MILE) developed at Florida State University is a virtual-reality-based, inclusive, and immersive e-learning environment that promotes engaging and effective learning interactions for a diversified learner population. MILE uses a large number of interactive Non-Player Characters (NPCs) to represent diverse research-based learner archetypes and groups, and to prompt and provide feedback for in situ teaching practice. The NPC scripts in MILE are written in Linden Scripting Language (LSL) and can be quite complex, creating a significant challenge in the development and maintenance of the system. To address this challenge, we develop NPC_GEN, an automatic NPC script generation tool that takes high-level NPC descriptions as input and automatically produces LSL scripts for NPCs. In this work, we introduce NPCDL, a language that we design for NPC_GEN to give high-level descriptions of NPCs, describe how NPC_GEN translates an NPCDL description into an LSL script, and report a user study of NPC_GEN. The results of our user study indicate that, with minimal training, non-technical people are able to write and modify NPCDL descriptions, which can then be used to generate LSL scripts for the NPCs: the development and maintenance of NPCs are greatly simplified with NPC_GEN.
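The abstract gives NPCDL's role but not its syntax, so the sketch below invents a flat dict-based description and emits a skeletal LSL script in the same spirit; real NPCDL descriptions, and the scripts NPC_GEN produces, are presumably far richer.

```python
# Template for a minimal LSL script (doubled braces are literal LSL braces).
# llSetText, llSay, state_entry, and touch_start are standard LSL constructs.
LSL_TEMPLATE = """\
default
{{
    state_entry()
    {{
        llSetText("{name}", <1.0, 1.0, 1.0>, 1.0);
    }}
    touch_start(integer total_number)
    {{
        llSay(0, "{greeting}");
    }}
}}
"""

def generate_lsl(npc):
    """Translate a high-level NPC description into an LSL script string."""
    return LSL_TEMPLATE.format(name=npc["name"], greeting=npc["greeting"])

# Hypothetical archetype description; actual NPCDL is not shown in the abstract.
print(generate_lsl({"name": "Confused Student",
                    "greeting": "Could you explain that example again?"}))
```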
-
Decision-making under uncertainty (DMU) is present in many important problems. An open challenge is DMU in non-stationary environments, where the dynamics of the environment can change over time. Reinforcement Learning (RL), a popular approach for DMU problems, learns a policy by interacting with a model of the environment offline. Unfortunately, if the environment changes, the policy can become stale and take sub-optimal actions, and relearning the policy for the updated environment takes time and computational effort. An alternative is online planning approaches such as Monte Carlo Tree Search (MCTS), which perform their computation at decision time. Given the current environment, MCTS plans using high-fidelity models to determine promising action trajectories. These models can be updated as soon as environmental changes are detected to immediately incorporate them into decision making. However, MCTS’s convergence can be slow for domains with large state-action spaces. In this paper, we present a novel hybrid decision-making approach that combines the strengths of RL and planning while mitigating their weaknesses. Our approach, called Policy Augmented MCTS (PA-MCTS), integrates a policy’s action-value estimates into MCTS, using the estimates to seed the action trajectories favored by the search. We hypothesize that PA-MCTS will converge more quickly than standard MCTS.
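The abstract names the key mechanism, seeding MCTS with a policy's action-value estimates, but not the combination rule, so the sketch below assumes a simple convex blend of policy and tree values inside the UCB score; the chain MDP, the alpha weight, and all constants are illustrative only, not the paper's method.

```python
import math

N = 6                                     # chain MDP: reach state N for reward 1
def step(s, a):                           # actions are -1 (left) and +1 (right)
    s2 = max(0, min(N, s + a))
    return s2, (1.0 if s2 == N else 0.0), s2 == N

# A (possibly stale) RL policy's action-value table, favoring "right".
POLICY_Q = {(s, a): (0.5 if a == 1 else 0.1) for s in range(N + 1) for a in (-1, 1)}

class Node:
    def __init__(self):
        self.n = 0
        self.q = {a: 0.0 for a in (-1, 1)}
        self.cnt = {a: 0 for a in (-1, 1)}

def pa_ucb(node, s, a, alpha=0.5, c=1.4):
    # Blend the policy's estimate with the tree's running estimate,
    # then add the usual exploration bonus.
    blended = alpha * POLICY_Q[(s, a)] + (1 - alpha) * node.q[a]
    return blended + c * math.sqrt(math.log(node.n + 1) / (node.cnt[a] + 1))

def search(root_s, iters=2000, horizon=20):
    tree = {}                             # transposition table: state -> Node
    for _ in range(iters):
        s, path = root_s, []
        for _ in range(horizon):
            node = tree.setdefault(s, Node())
            a = max((-1, 1), key=lambda act: pa_ucb(node, s, act))
            s2, r, done = step(s, a)
            path.append((node, a, r))
            s = s2
            if done:
                break
        ret = 0.0
        for node, a, r in reversed(path): # backpropagate discounted returns
            ret = r + 0.95 * ret
            node.n += 1
            node.cnt[a] += 1
            node.q[a] += (ret - node.q[a]) / node.cnt[a]
    return max((-1, 1), key=lambda act: tree[root_s].q[act])

print(search(0))                          # -> 1 (move toward the goal)
```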
-
Successfully navigating the social world requires reasoning about both high-level strategic goals, such as whether to cooperate or compete, as well as the low-level actions needed to achieve those goals. While previous work in experimental game theory has examined the former and work on multi-agent systems has examined the latter, there has been little work investigating behavior in environments that require simultaneous planning and inference across both levels. We develop a hierarchical model of social agency that infers the intentions of other agents, strategically decides whether to cooperate or compete with them, and then executes either a cooperative or competitive planning program. Learning occurs across both high-level strategic decisions and low-level actions, leading to the emergence of social norms. We test predictions of this model in multi-agent behavioral experiments using rich, video-game-like environments. By grounding strategic behavior in a formal model of planning, we develop abstract notions of both cooperation and competition and shed light on the computational nature of joint intentionality.
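As a rough illustration of the described hierarchy, the sketch below wires together Bayesian intent inference, an expected-payoff strategy choice, and a stubbed low-level planner; the likelihoods, payoff table, and action names are invented for the example, not taken from the paper.

```python
# Likelihood of observed low-level actions under each high-level intent.
LIKELIHOOD = {"cooperative": {"share": 0.8, "grab": 0.2},
              "competitive": {"share": 0.1, "grab": 0.9}}

def infer_intent(observations, prior=0.5):
    """Posterior probability that the other agent is cooperative."""
    p_coop, p_comp = prior, 1 - prior
    for act in observations:
        p_coop *= LIKELIHOOD["cooperative"][act]
        p_comp *= LIKELIHOOD["competitive"][act]
    return p_coop / (p_coop + p_comp)

# Payoffs for (my_strategy, their_intent): a tiny stag-hunt-like table.
PAYOFF = {("cooperate", "cooperative"): 3.0, ("cooperate", "competitive"): 0.0,
          ("compete", "cooperative"): 2.0, ("compete", "competitive"): 1.0}

def choose_strategy(p_coop):
    """High level: pick the strategy with the best expected payoff."""
    def ev(s):
        return (p_coop * PAYOFF[(s, "cooperative")]
                + (1 - p_coop) * PAYOFF[(s, "competitive")])
    return max(("cooperate", "compete"), key=ev)

def plan(strategy):
    """Low level: dispatch to a strategy-specific planning routine (stubbed)."""
    return ["approach", "share"] if strategy == "cooperate" else ["approach", "grab"]

p = infer_intent(["share", "share"])
s = choose_strategy(p)
print(round(p, 3), s, plan(s))   # -> 0.985 cooperate ['approach', 'share']
```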
-
Intelligent interactive narrative systems coordinate a cast of non-player characters to make the overall story experience meaningful for the player. Narrative generation involves a tradeoff between plot-structure requirements and quality of character behavior, as well as computational efficiency. We study this tradeoff using the example of benchmark problems for narrative planning algorithms. A typical narrative planning problem calls for a sequence of actions that leads to an overall plot goal being met, while also requiring each action to respect constraints that create the appearance of character autonomy. We consider simplified solution definitions that enforce only plot requirements or only character requirements, and we measure how often each of these definitions leads to a solution that happens to meet both types of requirements, i.e., the density with which narrative plans occur among plot- or character-requirement-satisfying sequences. We then investigate whether solution densities can guide the selection of narrative planning algorithms. We compare the performance of two search strategies: one that satisfies plot requirements first and checks character requirements afterward, and one that continuously verifies character requirements. Our results show that comparing solution densities does not by itself predict which of these search strategies will be more efficient in terms of search nodes.
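To make the two search strategies concrete, the toy sketch below runs the same breadth-first search over action sequences twice: once checking character requirements only when a plot-satisfying sequence is found, and once pruning out-of-character branches inline, reporting the number of nodes each expands. The domain and both requirement predicates are invented for illustration.

```python
from collections import deque

ACTS = ["steal", "buy", "give"]

def plot_goal(plan):      # plot requirement: the gift is given, after obtaining it
    return bool(plan) and plan[-1] == "give" and ("steal" in plan or "buy" in plan)

def in_character(plan):   # character requirement: an honest hero never steals
    return "steal" not in plan

def search(check_inline, max_len=4):
    frontier, nodes = deque([()]), 0
    while frontier:
        plan = frontier.popleft()
        nodes += 1
        # A narrative plan must meet both requirement types; the post-hoc
        # strategy only tests character constraints here, at the goal test.
        if plot_goal(plan) and in_character(plan):
            return plan, nodes
        if len(plan) < max_len:
            for a in ACTS:
                child = plan + (a,)
                if check_inline and not in_character(child):
                    continue          # prune out-of-character branches early
                frontier.append(child)
    return None, nodes

for inline in (False, True):
    plan, nodes = search(inline)
    print("inline" if inline else "post-hoc", plan, nodes)
```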