Narrative data spans all disciplines and provides a coherent model of the world to the reader or viewer. Recent advances in machine learning and Large Language Models (LLMs) have enabled great strides in analyzing natural language. However, LLMs still struggle with complex narrative arcs and with narratives containing conflicting information. Recent work indicates that LLMs augmented with external knowledge bases can improve the accuracy and interpretability of the resulting models. In this work, we analyze the effectiveness of applying knowledge graphs (KGs) to understanding true-crime podcast data using both classical Natural Language Processing (NLP) and LLM approaches. We directly compare KG-augmented LLMs (KGLLMs) with classical methods for KG construction, topic modeling, and sentiment analysis. Additionally, the KGLLM allows us to query the knowledge base in natural language and to test its ability to answer questions factually. We examine the robustness of the model to adversarial prompting in order to test its ability to deal with conflicting information. Finally, we apply classical methods to understand more subtle aspects of the text, such as the use of hearsay and sentiment in narrative construction, and propose future directions. Our results indicate that KGLLMs outperform LLMs on a variety of metrics, are more robust to adversarial prompts, and are more capable of summarizing the text into topics.
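To make the KG-augmentation idea concrete, here is a minimal sketch of grounding an LLM prompt in triples retrieved from a small knowledge graph. The triples, the keyword-overlap retriever, and the prompt wording are illustrative assumptions, not the pipeline evaluated in the paper.

```python
# Minimal sketch: retrieve KG triples relevant to a question and ground the
# LLM prompt in them. Triples and retrieval logic are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

# A toy KG of claims extracted from (hypothetical) podcast transcripts.
KG = [
    Triple("Episode 12", "discusses", "the 1994 cold case"),
    Triple("Witness A", "claims", "the suspect left town in March"),
    Triple("Witness B", "claims", "the suspect never left town"),
]

def retrieve(question: str, kg: list, k: int = 3) -> list:
    """Rank triples by naive keyword overlap with the question."""
    q_tokens = set(question.lower().split())
    def overlap(t: Triple) -> int:
        return len(q_tokens & set(f"{t.subject} {t.relation} {t.obj}".lower().split()))
    return sorted(kg, key=overlap, reverse=True)[:k]

def build_prompt(question: str, facts: list) -> str:
    """Ground the LLM on retrieved triples so conflicting claims are explicit."""
    lines = [f"- {t.subject} {t.relation} {t.obj}" for t in facts]
    return "Answer using only these facts:\n" + "\n".join(lines) + f"\n\nQuestion: {question}"

print(build_prompt("Did the suspect leave town?", retrieve("Did the suspect leave town?", KG)))
```

Because the retrieved triples surface both witnesses' conflicting claims, the prompt makes the contradiction explicit rather than leaving the model to resolve it implicitly.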
This content will become publicly available on April 15, 2026
Language Models as Narrative Planning Heuristics
Narrative planning is the process of generating sequences of actions that form coherent and goal-oriented narratives. Classical implementations of narrative planning rely on heuristic search techniques to offer structured story generation but face challenges with scalability due to large branching factors and deep search requirements. Large Language Models (LLMs), with their extensive training on diverse linguistic datasets, excel in understanding and generating coherent narratives. However, their planning ability lacks the precision and structure needed for effective narrative planning. This paper explores a hybrid approach that uses LLMs as heuristic guides within classical search frameworks for narrative planning. We compare various prompt designs to generate LLM heuristic predictions and evaluate their performance against h+, hmax, and relaxed plan heuristics. Additionally, we analyze the ability of relaxed plans to predict the next action correctly, comparing it to the LLMs’ ability to make the same prediction. Our findings indicate that LLMs rarely exceed the accuracy of classical planning heuristics.
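To illustrate the hybrid approach, the following is a minimal sketch of classical best-first search with a pluggable heuristic, where the llm_heuristic stub stands in for a prompted language model. The prompt design, response parsing, and planning domain are assumptions, not this paper's exact setup.

```python
# Sketch: plug a model-provided score into classical best-first search as h(n).
import heapq
from itertools import count
from typing import Callable, Hashable, Iterable, Optional

def best_first_search(
    start: Hashable,
    is_goal: Callable[[Hashable], bool],
    successors: Callable[[Hashable], Iterable[tuple]],
    h: Callable[[Hashable], float],
) -> Optional[list]:
    """A*-style search; h may come from an LLM instead of h+/hmax/relaxed plans."""
    tie = count()                      # tie-breaker so states are never compared directly
    frontier = [(h(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None

def llm_heuristic(state: Hashable) -> float:
    # Placeholder: a real implementation would prompt an LLM with the partial
    # plan and parse its estimate of how many story actions remain to the goal.
    return 0.0
```

Only the heuristic callback changes between the classical and LLM-guided variants, which is what makes the head-to-head comparison against h+, hmax, and relaxed-plan heuristics possible.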
- Award ID(s): 2145153
- PAR ID: 10589031
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400718564
- Page Range / eLocation ID: 1 to 9
- Subject(s) / Keyword(s): Narrative Planning, Heuristic Search, Large Language Models, Neuro-Symbolic
- Format(s): Medium: X
- Location: Vienna & Graz, Austria
- Sponsoring Org: National Science Foundation
More Like this
-
Interactive narratives in games utilize a combination of dynamic adaptability and predefined story elements to support player agency and enhance player engagement. However, crafting such narratives requires significant manual authoring and coding effort to translate scripts into playable game levels. Advances in pretrained large language models (LLMs) have introduced the opportunity to procedurally generate narratives. This paper presents NarrativeGenie, a framework to generate narrative beats as a cohesive, partially ordered sequence of events that shapes narrative progression from brief natural language instructions. By leveraging LLMs for reasoning and generation, NarrativeGenie translates a designer’s story overview into a partially ordered event graph to enable player-driven narrative beat sequencing (a minimal event-graph sketch follows this list). Our findings indicate that NarrativeGenie can provide an easy and effective way for designers to generate an interactive game episode with narrative events that align with the intended story arc while granting players agency in their game experience. We extend our framework to dynamically direct the narrative flow by adapting real-time narrative interactions based on the current game state and player actions. Results demonstrate that NarrativeGenie generates narratives that are coherent and aligned with the designer’s vision.
-
The events in a narrative are understood as a coherent whole via the underlying states of their participants. Often, these participant states are not explicitly mentioned, instead left to be inferred by the reader. A model that understands narratives should likewise infer these implicit states, and even reason about the impact of changes to these states on the narrative. To facilitate this goal, we introduce a new crowdsourced English-language Participant States dataset, PASTA. This dataset contains inferable participant states; a counterfactual perturbation to each state; and the changes to the story that would be necessary if the counterfactual were true. We introduce three state-based reasoning tasks that test for the ability to infer when a state is entailed by a story, to revise a story conditioned on a counterfactual state, and to explain the most likely state change given a revised story. Experiments show that today’s LLMs can reason about states to some degree, but there is large room for improvement, especially in problems requiring access to and the ability to reason with diverse types of knowledge (e.g., physical, numerical, factual).
-
Navigation among dynamic obstacles is a fundamental task in robotics that has been modeled in various ways. In Safe Interval Path Planning, location is discretized to a grid, time is continuous, future trajectories of obstacles are assumed known, and planning takes place offline (a toy safe-interval sketch follows this list). In this work, we define the Real-time Safe Interval Path Planning problem setting, in which the agent plans online and must issue its next action within a strict time bound. Unlike in classical real-time heuristic search, the cost-to-go in Real-time Safe Interval Path Planning is a function of time rather than a scalar. We present several algorithms for this setting and prove that they learn admissible heuristics. Empirical evaluation shows that the new methods perform better than classical approaches under a variety of conditions.
-
Zhang, Jie; Chen, Li; Berkovsky, Shlomo; Zhang, Min; Noia, Tommaso di; Basilico, Justin; Pizzato, Luiz; Song, Yang (Eds.)
Narrative-driven recommendation (NDR) presents an information access problem where users solicit recommendations with verbose descriptions of their preferences and context, for example, travelers soliciting recommendations for points of interest while describing their likes/dislikes and travel circumstances. These requests are increasingly important with the rise of natural language-based conversational interfaces for search and recommendation systems. However, NDR lacks abundant training data, and current platforms commonly do not support these requests. Fortunately, classical user-item interaction datasets contain rich textual data, e.g., reviews, which often describe user preferences and context; this may be used to bootstrap training for NDR models. In this work, we explore using large language models (LLMs) for data augmentation to train NDR models. We use LLMs to author synthetic narrative queries from user-item interactions with few-shot prompting (a prompt-assembly sketch follows this list) and train retrieval models for NDR on the synthetic queries and user-item interaction data. Our experiments demonstrate that this is an effective strategy for training small-parameter retrieval models that outperform other retrieval and LLM baselines for narrative-driven recommendation.
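For the NarrativeGenie entry above, here is a minimal sketch of the partially ordered event-graph representation it describes: narrative beats as a DAG whose prerequisites gate player-driven sequencing. The beats and dependencies are invented for illustration and are not NarrativeGenie's actual output format.

```python
# Sketch: narrative beats as a partially ordered event graph. Any ordering that
# respects prerequisites is a valid playthrough.
from graphlib import TopologicalSorter

# beat -> set of beats that must happen first (toy example)
beats = {
    "meet_mentor": set(),
    "find_map": {"meet_mentor"},
    "recruit_ally": {"meet_mentor"},
    "enter_ruins": {"find_map", "recruit_ally"},
}

# One valid linearization; players could realize other valid orderings at runtime.
print(list(TopologicalSorter(beats).static_order()))

def is_unlocked(beat: str, completed: set) -> bool:
    """A beat becomes playable once all of its prerequisite beats are done."""
    return beats[beat] <= completed

print(is_unlocked("enter_ruins", {"meet_mentor", "find_map"}))  # False: ally still missing
```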
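For the Real-time Safe Interval Path Planning entry above, the following toy sketch shows the safe-interval representation the problem setting builds on: each grid cell stores the continuous-time intervals during which no known obstacle blocks it. The intervals are invented; this is not the paper's algorithm.

```python
# Sketch: safe intervals per grid cell and the earliest safe arrival time,
# illustrating why cost-to-go depends on time rather than being a scalar.

# cell -> sorted list of (start, end) safe intervals in continuous time
safe_intervals = {
    (2, 3): [(0.0, 4.5), (6.0, float("inf"))],   # blocked from t=4.5 to t=6.0
    (2, 4): [(0.0, float("inf"))],               # never blocked
}

def earliest_safe_arrival(cell, t):
    """Earliest time >= t at which `cell` can be occupied, or None if never."""
    for start, end in safe_intervals.get(cell, []):
        if end >= t:                 # this interval has not ended yet
            return max(start, t)     # wait until it opens if necessary
    return None

print(earliest_safe_arrival((2, 3), 5.0))  # 6.0: must wait for the obstacle to pass
print(earliest_safe_arrival((2, 4), 5.0))  # 5.0: cell is always free
```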
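For the narrative-driven recommendation entry above, here is a sketch of assembling a few-shot prompt that turns an item review into a synthetic narrative query for training data augmentation. The example reviews, the prompt wording, and the generate_with_llm placeholder are assumptions, not the paper's prompts or models.

```python
# Sketch: few-shot prompt assembly for synthetic narrative queries from reviews.
FEW_SHOT_EXAMPLES = [
    {
        "review": "Quiet trails, great for birdwatching, easy parking.",
        "query": "I'm visiting with my parents and want calm nature walks "
                 "where we can watch birds without a long hike.",
    },
]

def build_fewshot_prompt(target_review: str) -> str:
    """Assemble review -> narrative-request pairs, then append the target review."""
    parts = ["Write the kind of verbose, first-person request a user might "
             "post when looking for the item described in each review.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Review: {ex['review']}\nRequest: {ex['query']}\n")
    parts.append(f"Review: {target_review}\nRequest:")
    return "\n".join(parts)

def generate_with_llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")  # placeholder, not a real API

print(build_fewshot_prompt("Rooftop bar with live jazz, pricey cocktails, great skyline views."))
```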
