Title: Automatic generation of interactive NPC scripts for a mixed-reality integrated learning environment
The Mixed-Reality Integrated Learning Environment (MILE) developed at Florida State University is a virtual-reality-based, inclusive, and immersive e-learning environment that promotes engaging and effective learning interactions for a diverse learner population. MILE uses a large number of interactive Non-Player Characters (NPCs) to represent diverse, research-based learner archetypes and groups, and to prompt and provide feedback for in situ teaching practice. The NPC scripts in MILE are written in Linden Scripting Language (LSL) and can be quite complex, which makes developing and maintaining the system a significant challenge. To address this challenge, we develop NPC_GEN, an automatic NPC script generation tool that takes high-level NPC descriptions as input and automatically produces LSL scripts for NPCs. In this work, we introduce NPCDL, a language that we design for NPC_GEN to describe NPCs at a high level, describe how NPC_GEN translates an NPCDL description into an LSL script, and report a user study of NPC_GEN. The results of the user study indicate that, with minimal training, non-technical users are able to write and modify NPCDL descriptions, which can then be used to generate LSL scripts for the NPCs; NPC_GEN thus greatly simplifies the development and maintenance of NPCs.
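To make the translation step concrete, the Python sketch below shows a toy, template-based generator that turns a high-level NPC description into a minimal Linden Scripting Language greeter script. The dictionary-based description format and the generated script are illustrative assumptions only; they are not the authors' NPCDL syntax or actual NPC_GEN output, although llListen, llSay, PUBLIC_CHANNEL, and NULL_KEY are standard LSL names.

# Illustrative sketch only: a toy template-based generator that turns a
# hypothetical high-level NPC description (a plain dict, NOT the authors'
# NPCDL syntax) into a minimal LSL script for a greeter NPC.
LSL_TEMPLATE = """\
// Auto-generated greeter NPC: {name}
default
{{
    state_entry()
    {{
        // Listen on the public chat channel for nearby avatars.
        llListen(PUBLIC_CHANNEL, "", NULL_KEY, "");
    }}
    listen(integer channel, string speaker, key id, string message)
    {{
        // Reply with the scripted prompt whenever someone speaks.
        llSay(PUBLIC_CHANNEL, "{prompt}");
    }}
}}
"""

def generate_lsl(npc_description: dict) -> str:
    """Render an LSL script from a high-level NPC description (hypothetical format)."""
    return LSL_TEMPLATE.format(
        name=npc_description["name"],
        prompt=npc_description["prompt"].replace('"', '\\"'),
    )

if __name__ == "__main__":
    npc = {"name": "Avery", "prompt": "Hi! Could you explain today's topic to me?"}
    print(generate_lsl(npc))

A real generator would, of course, map richer description constructs (archetypes, prompts, feedback rules) onto correspondingly richer LSL state machines; the point here is only the description-to-script translation pattern.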
Award ID(s):
1632965
NSF-PAR ID:
10097029
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2018 10th International Conference on Education Technology and Computers (ICETC 2018), International Conference Proceedings Series by ACM (ISBN: 978-1-4503-6517-8)
Page Range / eLocation ID:
74-79
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Creating engaging, interactive, story-based experiences that respond dynamically to individual player choices poses significant challenges for narrative-centered games. Recent advances in pre-trained large language models (LLMs) have the potential to revolutionize procedural content generation for such games. Historically, interactive narrative generation has focused on specifying pivotal events in the storyline, often using planning-based approaches to achieve narrative coherence and maintain the story arc, while manual authorship is typically used to create the detail and variety in non-player character (NPC) interactions that specify and instantiate plot events. This paper proposes SCENECRAFT, a narrative scene generation framework that automates the NPC interaction crucial to unfolding plot events. SCENECRAFT interprets natural language instructions about scene objectives, NPC traits, location, and narrative variations, and then employs large language models to generate game scenes aligned with authorial intent. It generates branching conversation paths that adapt to player choices while adhering to the author's interaction goals. LLMs generate interaction scripts, semantically extract character emotions and gestures to align with the script, and convert dialogues into a game scripting language; the generated script can then be played using an existing narrative-centered game framework. Through empirical evaluation using automated and human assessments, we demonstrate SCENECRAFT's effectiveness in creating narrative experiences in terms of creativity, adaptability, and alignment with intended author instructions.
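As a rough illustration of this kind of pipeline, the Python sketch below assembles a scene specification into an LLM prompt and flattens a branching-dialogue structure into simple script lines. The prompt wording, the JSON node shape, the hard-coded stand-in for the LLM response, and the toy script format are assumptions for illustration, not the SCENECRAFT implementation.

# Illustrative sketch: (1) assemble a natural-language scene specification into
# an LLM prompt, (2) take a branching-dialogue structure (here a hard-coded
# stand-in for the LLM's response), and (3) flatten it into toy script lines.
import json

def build_prompt(objective: str, npc_traits: dict, location: str) -> str:
    return (
        f"Write a branching dialogue scene set in {location}.\n"
        f"Scene objective: {objective}\n"
        f"NPC traits: {json.dumps(npc_traits)}\n"
        "Return JSON nodes of the form {id, speaker, line, choices:[{text, next}]}."
    )

def to_game_script(nodes: list) -> str:
    """Convert branching dialogue nodes into toy 'script language' lines."""
    lines = []
    for node in nodes:
        lines.append(f'SAY {node["speaker"]} "{node["line"]}"')
        for choice in node.get("choices", []):
            lines.append(f'CHOICE "{choice["text"]}" GOTO {choice["next"]}')
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("convince the player to visit the library",
                       {"name": "Mira", "mood": "curious"}, "the village square"))
    # Stand-in for an LLM response:
    reply = [{"id": "n1", "speaker": "Mira",
              "line": "Have you seen the old library yet?",
              "choices": [{"text": "Not yet.", "next": "n2"},
                          {"text": "I am busy.", "next": "n3"}]}]
    print(to_game_script(reply))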

     
  2. This paper presents a holistic system to scale up the teaching and learning of vocabulary words of American Sign Language (ASL). The system leverages the most recent mixed-reality technology to allow the user to perceive her own hands in an immersive learning environment with first- and third-person views for motion demonstration and practice. Precise motion sensing is used to record and evaluate motion, providing real-time feedback tailored to the specific learner. As part of this evaluation, learner motions are matched to features derived from the Hamburg Notation System (HNS) developed by sign-language linguists. We develop a prototype to evaluate the efficacy of mixed-reality-based interactive motion teaching. Results with 60 participants show a statistically significant improvement in learning ASL signs when using our system, in comparison to traditional desktop-based, non-interactive learning. We expect this approach to ultimately allow teaching and guided practice of thousands of signs.
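As a hedged illustration of matching learner motion to sign features, the Python sketch below scores an attempted sign against a small set of categorical features loosely inspired by HNS. The feature names, reference values, and scoring rule are assumptions; the paper's actual feature extraction and real-time feedback mechanism are not specified in this abstract.

# Illustrative sketch only: score a learner's sign attempt against a reference
# sign using a few categorical features loosely inspired by the Hamburg
# Notation System. Feature names, values, and scoring are assumptions.
REFERENCE_SIGNS = {
    "BOOK": {"handshape": "flat", "location": "chest", "movement": "open-outward"},
}

def score_attempt(sign: str, observed: dict) -> float:
    """Fraction of the reference sign's features that the observed attempt matches."""
    reference = REFERENCE_SIGNS[sign]
    matches = sum(observed.get(k) == v for k, v in reference.items())
    return matches / len(reference)

if __name__ == "__main__":
    attempt = {"handshape": "flat", "location": "chest", "movement": "close-inward"}
    print(f"Match score for BOOK: {score_attempt('BOOK', attempt):.2f}")  # 0.67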
  3. Game-based learning environments (GBLEs) are increasingly used in education and training to encourage engagement and enhance learning. This study investigated how students, who were afforded varying levels of autonomy, interacted with two types of informational text presentation (i.e., non-player character (NPC) conversations and traditional informational text) while problem solving with CRYSTAL ISLAND (CI), a GBLE, and examined the effect on overall learning using eye-tracking and performance data. Ninety undergraduate students were randomly assigned to two conditions, full and partial agency, which varied in the amount of autonomy students were granted to explore CI and its interactive game elements (i.e., reading informational text, scanning food items). Within CI, informational text is presented both in a traditional format, where large chunks of text are presented at a single time as books and research articles, and in the form of participant conversations with NPCs throughout the environment. Results indicated significantly greater proportional learning gain (PLG) for participants in the partial agency condition than in the full agency condition. Additionally, longer participant fixations on traditionally presented informational text positively predicted participant PLG. Fixation durations were significantly longer in the partial agency condition than in the full agency condition. However, the combination of visual and verbal text represented by NPCs was not a significant predictor of PLG and did not differ across conditions.
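The abstract does not define proportional learning gain; one commonly used definition normalizes the pre-to-post gain by the maximum gain still available. The Python sketch below computes PLG that way on made-up numbers and fits a simple least-squares line to illustrate the idea that fixation duration predicts PLG; it is not the study's actual measure or analysis.

# Illustrative sketch only. Assumed definition: PLG = (post - pre) / (max_score - pre).
# The numbers are made up; the least-squares fit merely illustrates the reported
# relationship between fixation duration and PLG.
import numpy as np

def plg(pre: float, post: float, max_score: float) -> float:
    return (post - pre) / (max_score - pre)

if __name__ == "__main__":
    # Hypothetical data: (pretest, posttest) on a 20-point test, plus fixation seconds.
    pre_post = np.array([[8, 14], [10, 12], [6, 15], [12, 16]])
    fixation_seconds = np.array([95.0, 60.0, 130.0, 110.0])
    gains = np.array([plg(p, q, 20) for p, q in pre_post])
    slope, intercept = np.polyfit(fixation_seconds, gains, 1)
    print(f"PLGs: {np.round(gains, 2)}, slope per second of fixation: {slope:.4f}")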
  4. In recent years, we have seen the success of network representation learning (NRL) methods in diverse domains ranging from computational chemistry to drug discovery and from social network analysis to bioinformatics algorithms. However, each such NRL method is typically prototyped in a programming environment familiar to the developer. Moreover, such methods rarely scale out to large-scale networks or graphs. Such restrictions are problematic for domain scientists or end-users who want to scale a particular NRL method-of-interest on large graphs from their specific domain. In this work, we present a novel system, WebMILE, to democratize this process. WebMILE can scale an unsupervised network embedding method written in the user's preferred programming language on large graphs. It provides an easy-to-use Graphical User Interface (GUI) for the end-user. The user provides the necessary input (embedding method file, graph, required packages information) through a simple GUI, and WebMILE executes the input network embedding method on the given input graph. WebMILE leverages a pioneering multi-level method, MILE (alternatively DistMILE if the user has access to a cluster), that can scale a network embedding method on large graphs. Language agnosticity is achieved through a simple Docker interface. In this demonstration, we will showcase how a domain scientist or end-user can utilize WebMILE to rapidly prototype and learn node embeddings of a large graph in a flexible and efficient manner, ensuring the twin goals of high productivity and high performance.
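As an illustration of Docker-based language agnosticity, the Python sketch below runs a user-supplied embedding script inside a container so the host need not know what language or packages the method uses. The image name, mount layout, and file names are hypothetical; this shows the general pattern only, not WebMILE's actual interface.

# Illustrative sketch only: run a user-supplied embedding script inside a
# Docker container so the host does not care what language it is written in.
# Image name, mount paths, and script arguments are hypothetical.
import subprocess
from pathlib import Path

def run_embedding_in_docker(method_file: str, graph_file: str,
                            image: str = "python:3.11-slim") -> None:
    workdir = Path(method_file).resolve().parent
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{workdir}:/job",   # mount the user's files into the container
        "-w", "/job",              # run from the mounted directory
        image,
        "python", Path(method_file).name, Path(graph_file).name,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical files: embed.py reads edges.txt and writes embeddings alongside it.
    run_embedding_in_docker("embed.py", "edges.txt")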
  5. Given the strategic importance of the semiconductor manufacturing sector and the impact of the CHIPS Act on microelectronics, it is more imperative than ever to train the next generation of scientists and engineers in the field. However, this is challenging because nanofabrication education relies on hands-on cleanroom facilities. Since cleanrooms are expensive, have access constraints due to safety concerns, and offer limited instructional space, class sizes and outreach events are limited. Some open- or educational-access software exists to complement nanotechnology instruction, but it is computer-based and focuses only on training for individual pieces of equipment, not on the typical workflow for device fabrication. The objective of this work was to develop an accessible virtual reality ecosystem that provides immersive, user-friendly education and outreach on device nanofabrication for a broad range of audiences. At the George Washington University (GWU), a virtual reality cleanroom prototype has been developed. It consists of a 45-minute gameplay module that covers the process flow for the fabrication of micro-scale resistors, from sample preparation to electrical characterization. We also performed a mixed-methods study to investigate how 5 students in a nanoelectronics course utilized this virtual reality cleanroom prototype and what changes they recommended to improve its user interface and learner experience. The study population for this work-in-progress consisted of students enrolled in a nanoelectronics course at GWU during the 2022-2023 school year. Students taking this course can be undergraduate (junior or senior) or graduate (master's or PhD). The research questions for this study were: 1) what is the user experience with the virtual reality cleanroom prototype, 2) what challenges, if any, did students experience, and 3) what changes did students recommend to improve the virtual reality cleanroom prototype learner experience? Preliminary results indicate that the students found the virtual reality cleanroom simulator helpful for repeatedly exploring the cleanroom space and the nanofabrication process flow in a safe way, thus developing more confidence in utilizing the actual cleanroom facility. The results of this study will provide insight into the design of future modules with more complicated levels and device process flows. Moreover, the study could inform the development of other virtual reality simulators for other lab activities. The improved usability of the proposed software could provide students in large classes or attending online programs in electrical and computer engineering, as well as K-12 students participating in nanotechnology-related outreach events, the opportunity to conduct realistic process workflows, learn first-hand about nanofabrication, and practice using a nanofabrication lab via trial and error in a safe virtual environment.