Title: Multi-Context Generation in Virtual Reality Environments using Deep Reinforcement Learning
In this work, a Deep Reinforcement Learning (RL) approach is proposed for Procedural Content Generation (PCG) that seeks to automate the generation of multiple related virtual reality (VR) environments for enhanced personalized learning. This exposes the user to multiple virtual scenarios that share a consistent theme, which is especially valuable in an educational context. RL approaches to PCG offer the advantage of not requiring training data, unlike PCG approaches that rely on supervised learning. This work advances the state of the art in RL-based PCG by demonstrating the ability to generate a diversity of contexts that teach the same underlying concept. A case study demonstrates the feasibility of the proposed RL-based PCG method using examples of probability distributions in both manufacturing facility and grocery store virtual environments. The method demonstrated in this paper has the potential to enable the automatic generation of a variety of virtual environments that are connected by a common concept or theme.
Award ID(s):
1834465
NSF-PAR ID:
10186601
Journal Name:
ASME IDETC-CIE
Sponsoring Org:
National Science Foundation
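As a rough illustration of the RL-driven PCG loop described in the abstract above, the sketch below frames scene generation as an episodic decision problem: an agent places themed objects (factory or grocery) so that the finished layout matches a target probability distribution. All class names, the action space, and the reward are invented for illustration; this is not the paper's implementation.

```python
# Hypothetical sketch only, not the paper's implementation: the agent builds
# a themed scene one object at a time, and is rewarded when the finished
# layout's object frequencies match a target probability distribution.
import random

THEMES = {
    "factory": ["machine", "conveyor", "pallet"],
    "grocery": ["shelf", "checkout", "cart"],
}

class ScenePCGEnv:
    """Toy episodic environment: each step places one object in the scene."""

    def __init__(self, theme, target_freqs, n_slots=10):
        self.objects = THEMES[theme]
        self.target = target_freqs        # desired object-type frequencies
        self.n_slots = n_slots
        self.reset()

    def reset(self):
        self.layout = []
        return tuple(self.layout)

    def step(self, action):
        # action: index into the theme's object list
        self.layout.append(self.objects[action])
        done = len(self.layout) == self.n_slots
        reward = self._reward() if done else 0.0
        return tuple(self.layout), reward, done

    def _reward(self):
        # negative L1 distance between realized and target frequencies
        freqs = [self.layout.count(o) / self.n_slots for o in self.objects]
        return -sum(abs(f - t) for f, t in zip(freqs, self.target))

# Random-policy rollout; a real system would train an RL agent (e.g., PPO).
env = ScenePCGEnv("grocery", target_freqs=[0.5, 0.3, 0.2])
state, done = env.reset(), False
while not done:
    state, reward, done = env.step(random.randrange(3))
print(state, reward)
```

Swapping the theme key changes the surface context while the reward keeps the underlying distribution fixed, which mirrors the paper's idea of one concept rendered in multiple environments.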
More Like this
  1. This work presents a deep reinforcement learning (DRL) approach for procedural content generation (PCG) to automatically generate three-dimensional (3D) virtual environments that users can interact with. The primary objective of PCG methods is to algorithmically generate new content in order to improve user experience. Researchers have started exploring the use of machine learning (ML) methods to generate content. However, these approaches frequently implement supervised ML algorithms that require initial datasets to train their generative models. In contrast, RL algorithms do not require training data to be collected a priori, since they take advantage of simulation to train their models. Considering the advantages of RL algorithms, this work presents a method that generates new 3D virtual environments by training an RL agent using a 3D simulation platform. This work extends the authors' previous work and presents the results of a case study that supports the capability of the proposed method to generate new 3D virtual environments. The ability to automatically generate new content has the potential to maintain users' engagement in a wide variety of applications, such as virtual reality applications for education and training, and engineering conceptual design.
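Since the reward signal comes from simulation, no dataset needs to exist before training begins. A minimal, self-contained sketch of that property, using tabular Q-learning on a toy content-generation MDP (the state encoding, reward, and hyperparameters are all invented for illustration):

```python
# Toy illustration: the only training signal is a simulated reward computed
# at the end of each generated "level"; no data is collected beforehand.
import random
from collections import defaultdict

N_STEPS, N_ACTIONS = 5, 3          # level length and object choices
TARGET = [2, 2, 1]                 # desired count of each object type

def final_reward(counts):
    # negative total deviation from the target object counts
    return -sum(abs(c - t) for c, t in zip(counts, TARGET))

Q = defaultdict(lambda: [0.0] * N_ACTIONS)   # state -> action values
alpha, gamma, eps = 0.1, 1.0, 0.2

for episode in range(5000):
    counts, state = [0] * N_ACTIONS, ()
    for step in range(N_STEPS):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[state][i])
        counts[a] += 1
        nxt = state + (a,)
        r = final_reward(counts) if step == N_STEPS - 1 else 0.0
        Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy rollout after training: the learned object sequence for a level.
state, level = (), []
for _ in range(N_STEPS):
    a = max(range(N_ACTIONS), key=lambda i: Q[state][i])
    level.append(a)
    state = state + (a,)
print(level)
```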
  2. This work presents a Procedural Content Generation (PCG) method based on a neural-network Reinforcement Learning (RL) approach that generates new environments for Virtual Reality (VR) learning applications. The primary objective of PCG methods is to algorithmically generate new content (e.g., environments, levels) in order to improve user experience. Researchers have started exploring the integration of Machine Learning (ML) algorithms into their PCG methods. These ML approaches help explore the design space and generate new content more efficiently. The capability to provide users with new content has great potential for learning applications. However, these ML algorithms require large datasets to train their generative models. In contrast, RL-based methods do not require any training data to be collected a priori, since they take advantage of simulation to train their models. Moreover, even though VR has become an emerging technology for engaging users, few studies have explored PCG for learning purposes, and fewer still in the context of VR. Considering these limitations, this work presents a method that generates new VR environments by training an RL agent in a simulation platform. This PCG method has the potential to maintain users' engagement over time by presenting them with new environments in VR learning applications.
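Once a generation policy has been trained, presenting users with fresh content can be as simple as sampling repeatedly from a stochastic policy, since each rollout yields a different layout. A toy sketch, with placeholder object names and probabilities standing in for a learned policy:

```python
# Placeholder probabilities stand in for a learned stochastic policy.
import random

OBJECTS = ("desk", "screen", "prop")

def sample_layout(policy_probs, n_slots=8):
    """Draw one environment layout from per-object placement probabilities."""
    return [random.choices(OBJECTS, weights=policy_probs)[0]
            for _ in range(n_slots)]

trained_probs = [0.5, 0.3, 0.2]
for i in range(3):              # three distinct VR environments for the user
    print(f"environment {i}: {sample_layout(trained_probs)}")
```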
  3. Due to repetitive trial-and-error interactions between agents and a fixed traffic environment during policy learning, existing Reinforcement Learning (RL)-based Traffic Signal Control (TSC) methods suffer from long RL training times and poor adaptability of RL agents to other complex traffic environments. To address these problems, we propose a novel Adversarial Inverse Reinforcement Learning (AIRL)-based pre-training method named InitLight, which enables effective initial model generation for TSC agents. Unlike traditional RL-based TSC approaches that train a large number of agents simultaneously for a specific multi-intersection environment, InitLight pre-trains only one single initial model based on multiple single-intersection environments together with their expert trajectories. Since the reward function learned by InitLight can recover ground-truth TSC rewards for different intersections at optimality, the pre-trained agent can be deployed at intersections of any traffic environment as an initial model to accelerate subsequent overall global RL training. Comprehensive experimental results show that the initial model generated by InitLight not only significantly accelerates convergence with far fewer episodes, but also exhibits superior generalization ability to accommodate various kinds of complex traffic environments.
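The pretrain-once, initialize-everywhere pattern that InitLight describes can be sketched as follows. The adversarial (AIRL) reward learning is deliberately elided here; toy single-intersection simulators with hand-written rewards stand in for the reward InitLight recovers from expert trajectories, and every name and number is invented:

```python
# Sketch of the pretrain-once, initialize-everywhere pattern. The AIRL
# reward learning is elided: toy environments with hand-written rewards
# stand in for the reward recovered from expert trajectories.
import random
from collections import defaultdict

class ToyIntersection:
    """One intersection: state = two queue lengths, action = green phase."""

    def __init__(self, arrival_rate):
        self.rate = arrival_rate

    def reset(self):
        self.q = [0, 0]
        return tuple(self.q)

    def step(self, phase):
        self.q[phase] = max(0, self.q[phase] - 3)     # serve chosen approach
        for i in range(2):                            # stochastic arrivals
            self.q[i] = min(9, self.q[i] + (random.random() < self.rate))
        return tuple(self.q), -sum(self.q)            # reward = -total queue

def q_learn(env, Q, episodes, alpha=0.1, gamma=0.9, eps=0.2):
    for _ in range(episodes):
        s = env.reset()
        for _ in range(50):
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda i: Q[s][i])
            s2, r = env.step(a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Pretrain ONE shared model across several single-intersection environments.
shared = defaultdict(lambda: [0.0, 0.0])
for rate in (0.3, 0.5, 0.7):
    q_learn(ToyIntersection(rate), shared, episodes=200)

# Deploy the shared model as the initial policy at an unseen intersection;
# fine-tuning should now need far fewer episodes than learning from scratch.
Q_init = defaultdict(lambda: [0.0, 0.0], {s: v[:] for s, v in shared.items()})
q_learn(ToyIntersection(arrival_rate=0.6), Q_init, episodes=50)
```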
  4. Circuit linearity calibration can represent a set of high-dimensional search problems if observability is limited. For example, linearity calibration of digital-to-time converters (DTCs), an essential building block of modern digital phase-locked loops (DPLLs), is a high-dimensional search problem because the difficulty of measuring picosecond-scale delays hinders prior methods that calibrate stage by stage. Moreover, a calibrated DTC can become nonlinear again due to changes in temperature (T) and power supply voltage (V). Prior work reports a deep reinforcement learning framework capable of performing DTC linearity calibration with nonlinear calibration banks; however, that work does not address maintaining calibration in the face of temperature and supply voltage variations. In this paper, we present a meta-reinforcement learning (RL) method that enables the RL agent to quickly adapt to a new environment when the temperature and/or voltage change. Inspired by Style Generative Adversarial Networks (StyleGANs), we propose to treat temperature and voltage changes as the styles of the circuits. In contrast to traditional methods that employ circuit sensors to detect changes in T and V, we utilize a machine learning (ML) sensor to implicitly infer a wide range of environmental changes. The style information from the ML sensor is then injected into a small portion of the policy network, modulating its weights. As a proof of concept, we first designed a 5-bit DTC at the normal voltage (1 V) and normal temperature (27°C) corner (NVNT) as the environment. The RL agent begins its training in the NVNT environment and is then tasked with adapting to environments with different temperatures and supply voltages. Our results show that the proposed technique can reduce the Integral Non-Linearity (INL) to less than 0.5 LSB within 10,000 search steps in a changed environment. Compared to starting from a randomly initialized policy and from a trained policy, the proposed meta-RL approach takes 63% and 47% fewer steps, respectively, to complete the linearity calibration. Our method is also applicable to the calibration of many other kinds of analog and RF circuits.
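A hypothetical PyTorch sketch of the weight-modulation idea: a style code, inferred from raw circuit measurements by a small "ML sensor" network, scales and shifts the activations of one layer of the policy network (FiLM-style conditioning, in the spirit of StyleGAN's modulation). Layer sizes, interfaces, and the sensor's inputs are assumptions, not details from the paper:

```python
# Hypothetical PyTorch sketch; all sizes and interfaces are assumptions.
import torch
import torch.nn as nn

class StyleModulatedPolicy(nn.Module):
    """Policy net in which a style code modulates one small layer."""

    def __init__(self, obs_dim=16, style_dim=8, hidden=64, n_actions=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Only this portion of the network is conditioned on the style code,
        # via per-channel scale and shift (FiLM-style modulation).
        self.mod_layer = nn.Linear(hidden, hidden)
        self.style_scale = nn.Linear(style_dim, hidden)
        self.style_shift = nn.Linear(style_dim, hidden)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs, style):
        h = self.mod_layer(self.backbone(obs))
        h = h * (1 + self.style_scale(style)) + self.style_shift(style)
        return self.head(torch.relu(h))

# The "ML sensor": a small net that implicitly infers the T/V style code
# from raw circuit measurements instead of dedicated hardware sensors.
ml_sensor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

obs = torch.randn(1, 16)        # e.g., measured DTC delay observations
style = ml_sensor(obs)          # inferred temperature/voltage "style"
action_logits = StyleModulatedPolicy()(obs, style)
```

Modulating only a small part of the policy keeps most of the pretrained weights intact, which is one plausible reason adaptation to a new T/V corner can be fast.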
  5. Introduction: The work reported here subscribes to the idea that the best way to learn, and thus improve student educational outcomes, is through solving problems, yet it recognizes that engineering students are generally given insufficient opportunities to engage with problems as they will encounter them in practice. Attempts to incorporate more open-ended, ill-structured experiences have increased but are challenging for faculty to implement because there are no systematic methods or approaches that support the educator in designing these learning experiences. Instead, faculty often start from the anchor of domain-specific concepts, an anchoring that is further reinforced by available textbook problems that are rarely open in nature. Open-ended problems are then created in ad hoc ways, and as a result, the problem-solving experience is often not realized as the instructor intended. Approach: The focus of this work is the development and preliminary implementation of a reflective approach to support instructors in examining the design intent of problem experiences. The reflective method combines concept mapping as developed by Joseph Novak with the work of David Jonassen and his characterization of problems and the forms of knowledge required to solve them. Results: We report on the development of a standard approach (a template) for concept mapping of problems. As a demonstration, we applied the approach to a relatively simple, well-structured problem used in an introductory aerospace engineering course. Educator-created concept maps provided a visual medium for examining the connectivity of problem elements and forms of knowledge. Educator reflection after examining and discussing the concept map revealed ways in which problem engagement may differ from the perceived design intent. Implications: We consider the potential for the proposed method to support design and facilitation activities in problem-based learning (PBL) environments. We explore broader implications of the approach as it relates to 1) facilitating a priori faculty insights regarding student navigation of problem solving, 2) instructor reflection on problem design and facilitation, and 3) supporting problem design and facilitation. Additionally, we highlight important issues to be further investigated toward quantifying the value and limitations of the proposed approach.
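One lightweight way to operationalize such a concept-mapping template is as a labeled directed graph, with concepts and forms of knowledge as nodes and linking phrases on the edges. A brief sketch using networkx; the nodes and labels below are invented examples, not content from the study:

```python
# Invented example nodes and linking phrases; illustrative only.
import networkx as nx

cmap = nx.DiGraph()
cmap.add_edge("lift equation", "dynamic pressure", label="depends on")
cmap.add_edge("dynamic pressure", "airspeed", label="is computed from")
cmap.add_edge("lift equation", "procedural knowledge", label="requires")

# Print the map as "concept --linking phrase--> concept" propositions.
for u, v, data in cmap.edges(data=True):
    print(f"{u} --{data['label']}--> {v}")
```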