

Title: Intentional computational level design
The procedural generation of levels and content in video games is a challenging AI problem. Such generation often relies on an intelligent way of evaluating the content being generated so that constraints are satisfied and/or objectives are maximized. In this work, we address the problem of creating levels that are not only playable but also revolve around specific in-game mechanics. We use constrained evolutionary algorithms and quality-diversity algorithms to generate small sections of Super Mario Bros levels, called scenes, using three different simulation approaches: Limited Agents, Punishing Model, and Mechanics Dimensions. All three approaches are able to create scenes that give the player an opportunity to encounter or use the targeted mechanics, each with different properties. We conclude by discussing the advantages and disadvantages of each approach and comparing them to one another.
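To make the quality-diversity idea concrete, below is a minimal MAP-Elites-style sketch in the spirit of the Mechanics Dimensions approach, where an archive is indexed by how often target mechanics occur in a simulated playthrough of a scene. Everything here (the tile alphabet, the `simulate` stand-in for an agent playthrough, and the fitness definition) is an illustrative assumption, not the paper's implementation.

```python
import random

# Illustrative MAP-Elites sketch: archive cells are indexed by how often
# two target mechanics occur in a simulated playthrough of a scene.
SCENE_LEN = 20                      # tiles per scene (assumed)
TILES = ["-", "X", "E", "?"]        # air, ground, enemy, question block (assumed)

def random_scene():
    return [random.choice(TILES) for _ in range(SCENE_LEN)]

def mutate(scene):
    child = scene[:]
    child[random.randrange(SCENE_LEN)] = random.choice(TILES)
    return child

def simulate(scene):
    """Stand-in for an agent playthrough: returns (playability_fitness,
    (jump_count, stomp_count)). A real system would run a game-playing agent."""
    jumps = sum(1 for t in scene if t == "X")    # placeholder signals, not
    stomps = sum(1 for t in scene if t == "E")   # real mechanic detection
    fitness = 1.0 - abs(jumps - stomps) / SCENE_LEN
    return fitness, (min(jumps, 9), min(stomps, 9))

archive = {}                         # behavior cell -> (fitness, scene)
for _ in range(5000):
    parent = random.choice(list(archive.values()))[1] if archive else random_scene()
    child = mutate(parent)
    fit, cell = simulate(child)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, child)

print(f"{len(archive)} behavior cells filled with elite scenes")
```

In the paper's setting, the behavior descriptor would come from an actual agent playthrough of the scene rather than the tile-count proxy used here.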
Award ID(s):
1717324
NSF-PAR ID:
10132527
Author(s) / Creator(s):
Date Published:
Journal Name:
GECCO '19: Proceedings of the Genetic and Evolutionary Computation Conference
Page Range / eLocation ID:
796 to 803
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Video game tutorials allow players to gain mastery over game skills and mechanics. To hone players' skills, it is beneficial to practice in environments that promote individual player skill sets. However, automatically generating environments which are mechanically similar to one another is a non-trivial problem. This paper presents a level generation method for Super Mario that stitches together pre-generated "scenes" containing specific mechanics, using mechanic sequences from agent playthroughs as input specifications. Given a sequence of mechanics, the proposed system uses an FI-2Pop algorithm and a corpus of scenes to perform automated level authoring. The system outputs levels that can be beaten using a mechanical sequence similar to the target sequence, but with a different playthrough experience. We compare the proposed system to a greedy method that selects scenes to maximize the number of matched mechanics. Unlike the greedy approach, the proposed system is able to maximize the number of matched mechanics while reducing emergent mechanics through the stitching process.
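    The sketch below illustrates the FI-2Pop mechanism the system relies on: two populations evolve side by side, one optimizing an objective over feasible levels and one minimizing constraint violations, with offspring migrating between them. The scene corpus, mechanic tags, constraint, and objective here are hypothetical stand-ins, not the authors' actual encoding.

```python
import random

# Minimal FI-2Pop sketch for scene stitching (illustrative assumptions only).
CORPUS = {f"s{i}": random.sample(["jump", "stomp", "coin", "break"],
                                 k=random.randint(1, 3)) for i in range(30)}
TARGET = ["jump", "stomp", "coin"]   # target mechanic sequence (assumed)
LEVEL_SCENES = len(TARGET)

def random_level():
    return random.sample(list(CORPUS), LEVEL_SCENES)

def mutate(level):
    child = level[:]
    child[random.randrange(LEVEL_SCENES)] = random.choice(list(CORPUS))
    return child

def violations(level):
    # Constraint: scene i must contain the i-th target mechanic.
    return sum(1 for i, s in enumerate(level) if TARGET[i] not in CORPUS[s])

def fitness(level):
    # Objective on feasible levels: few emergent (untargeted) mechanics.
    return -sum(len(CORPUS[s]) - 1 for s in level)

feasible, infeasible = [], [random_level() for _ in range(20)]
for _ in range(200):
    children = [mutate(random.choice(feasible + infeasible)) for _ in range(20)]
    for c in children:
        # Offspring migrate to whichever population they qualify for.
        (feasible if violations(c) == 0 else infeasible).append(c)
    feasible = sorted(feasible, key=fitness, reverse=True)[:20]
    infeasible = sorted(infeasible, key=violations)[:20]

if feasible:
    print("best level:", feasible[0], "emergent mechanics:", -fitness(feasible[0]))
```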
  2. Abstract

    Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state-of-the-art report is focused on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
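    The core trick the report centers on, differentiable rendering inside a training loop, can be shown in toy form: if the renderer is built from autograd operations, an image-space loss can update scene parameters directly. The one-dimensional Gaussian-blob "renderer" below is purely an illustrative stand-in, not any method surveyed in the report.

```python
import torch

# Toy illustration of differentiable rendering: gradients of an image-space
# loss flow back through the renderer to the scene parameters.
H = 64
xs = torch.linspace(0.0, 1.0, H)

def render(center, width, brightness):
    """Differentiable stand-in renderer: a Gaussian blob on a 1-D 'image'."""
    return brightness * torch.exp(-((xs - center) ** 2) / (2 * width ** 2))

# Ground-truth image produced by known scene parameters.
target = render(torch.tensor(0.7), torch.tensor(0.05), torch.tensor(0.9))

# Scene parameters we want to recover from the image alone.
params = {name: torch.tensor(v, requires_grad=True)
          for name, v in [("center", 0.3), ("width", 0.1), ("brightness", 0.5)]}
opt = torch.optim.Adam(params.values(), lr=0.05)

for step in range(300):
    opt.zero_grad()
    image = render(params["center"], params["width"], params["brightness"])
    loss = torch.mean((image - target) ** 2)   # image-space loss
    loss.backward()                            # gradients flow through rendering
    opt.step()

print({k: round(v.item(), 3) for k, v in params.items()})
```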

     
  3. Abstract

    Realistic 3D indoor scene datasets have enabled significant recent progress in computer vision, scene understanding, autonomous navigation, and 3D reconstruction. But the scale, diversity, and customizability of existing datasets are limited, and it is time-consuming and expensive to scan and annotate more. Fortunately, combinatorics is on our side: there are enough individual rooms in existing 3D scene datasets, if there were but a way to recombine them into new layouts. In this paper, we propose the task of generating novel 3D floor plans from existing 3D rooms. We identify three sub-tasks of this problem: generation of a 2D layout, retrieval of compatible 3D rooms, and deformation of 3D rooms to fit the layout. We then discuss different strategies for solving the problem and design two representative pipelines: one uses available 2D floor plans to guide selection and deformation of 3D rooms; the other learns to retrieve a set of compatible 3D rooms and combine them into novel layouts. We design a set of metrics that evaluate the generated results with respect to each of the three sub-tasks and show that different methods trade off performance on these sub-tasks. Finally, we survey downstream tasks that benefit from generated 3D scenes and discuss strategies for selecting the methods most appropriate to the demands of these tasks.
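    The retrieval-and-deformation stages can be sketched as follows. The room records, corpus, compatibility score (aspect-ratio distance), and non-uniform scaling are all simplifying assumptions for illustration, not the paper's learned components.

```python
from dataclasses import dataclass

# Skeleton of a retrieve-and-deform pipeline (illustrative stand-ins only).
@dataclass
class Room3D:
    kind: str          # e.g. "bedroom"
    width: float       # footprint in meters
    depth: float

CORPUS = [Room3D("bedroom", 4.0, 3.0), Room3D("bedroom", 3.0, 5.0),
          Room3D("kitchen", 3.0, 3.0), Room3D("living", 6.0, 4.0)]

def retrieve(kind, slot_w, slot_d):
    """Pick the corpus room of the right type whose aspect ratio is closest
    to the target layout slot, so the later deformation is as mild as possible."""
    candidates = [r for r in CORPUS if r.kind == kind]
    target = slot_w / slot_d
    return min(candidates, key=lambda r: abs(r.width / r.depth - target))

def deform(room, slot_w, slot_d):
    """Non-uniform scale factors that fit the room into the layout slot."""
    return (slot_w / room.width, slot_d / room.depth)

# 2D layout: (room type, slot width, slot depth) per slot.
layout = [("bedroom", 3.5, 4.5), ("kitchen", 3.0, 2.5), ("living", 5.0, 4.0)]
for kind, w, d in layout:
    room = retrieve(kind, w, d)
    sx, sy = deform(room, w, d)
    print(f"{kind}: picked {room.width}x{room.depth}, scale ({sx:.2f}, {sy:.2f})")
```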

     
  4. This WIP presentation is intended to share and gather feedback on the development of an observation protocol for K-12 integrated STEM instruction, the STEM-OP. Specifically, the STEM-OP is being developed for use in K-12 science and/or engineering settings where integrated STEM instruction takes place. While the importance of integrated STEM education is established through national policy documents, there remains disagreement on models and effective approaches for integrated STEM instruction. Our broad definition of integrated STEM includes the use of two or more STEM disciplines to solve a real-world problem or design challenge that supports student development of 21st-century skills. This disagreement is compounded by the lack of observation protocols sensitive to integrated STEM teaching and learning that can be used to inform research on the effectiveness of new models and strategies. Existing instruments most commonly used by researchers, such as the Reformed Teaching Observation Protocol (RTOP), were designed prior to the development of the Next Generation Science Standards and the integration of engineering into science standards. These instruments were also designed for use in reform-based science classrooms, not engineering or integrated STEM learning environments. While engineering-focused observation protocols do exist for K-12 classrooms, they do not evaluate beyond an engineering focus, making them limited tools for evaluating integrated STEM instruction. In order to facilitate the implementation of integrated STEM in K-12 classrooms and the development of the nascent integrated STEM education literature, our research team is developing a new integrated STEM observation protocol for use in K-12 science and engineering classrooms. This instrument will be designed to be valid and reliable across a variety of educational contexts and usable by different education stakeholders to increase the quality of K-12 STEM education. At the end of this project, the STEM-OP will be made available through an online platform that will include an embedded training program to facilitate its broad use. In the first year of this four-year project, we are working on the initial development of the STEM-OP through video analysis and exploratory factor analysis. We are utilizing existing classroom video from a previous project, with approximately 2,000 unique classroom videos representing a variety of grade levels (4-9), science content (life, earth, and physical science), engineering design challenges, and school demographics (urban, suburban). The development of the STEM-OP is guided by published frameworks that focus on providing quality K-12 integrated STEM and engineering education, such as the Framework for Quality K-12 Engineering Education. Our anticipated results at the time of the ASEE meeting will include a review of our item development process and the finalized items included on the draft STEM-OP. Additionally, we anticipate being able to share findings from the exploratory factor analysis (EFA) of our video-coded data, which will identify distinct instructional dimensions responsible for integrated STEM instruction. We value the opportunity to gather feedback from the engineering education community, as the integration of engineering design and practices is integral to quality integrated STEM instruction.
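    As a concrete illustration of the planned exploratory factor analysis step, the sketch below recovers latent dimensions from synthetic rubric-style ratings. The item count, factor count, and data are fabricated stand-ins; the actual STEM-OP items and dimensions are still under development.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hedged EFA sketch: rows are video-coded lesson segments, columns are rubric
# items (all synthetic; not the real STEM-OP coding scheme).
rng = np.random.default_rng(0)
n_segments, n_items, n_factors = 500, 12, 3

# Synthetic ratings with a hidden 3-factor structure to recover.
loadings = rng.normal(size=(n_factors, n_items))
scores = rng.normal(size=(n_segments, n_factors))
ratings = scores @ loadings + 0.3 * rng.normal(size=(n_segments, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(ratings)

# Inspect which items load on which candidate instructional dimension.
for i, row in enumerate(fa.components_):
    top = np.argsort(-np.abs(row))[:4]
    print(f"factor {i}: strongest items {top.tolist()}")
```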
  5. Nonoverlapping sequential pattern mining is an important type of sequential pattern mining (SPM) with gap constraints, which not only can reveal interesting patterns to users but can also effectively reduce the search space using the Apriori (anti-monotonicity) property. However, existing algorithms do not focus on attributes of interest to users, meaning that existing methods may discover many frequent patterns that are redundant. To solve this problem, this article proposes a task called nonoverlapping three-way sequential pattern (NTP) mining, where attributes are categorized according to three levels of interest: strong, medium, and weak. NTP mining can effectively avoid mining redundant patterns, since NTPs are composed of strong- and medium-interest items. Moreover, NTPs can avoid serious deviations (occurrences that differ significantly from their pattern), since gap constraints cannot match strong-interest patterns. To mine NTPs, an effective algorithm called NTP-Miner is put forward, which applies two main steps: support (occurrence frequency) calculation and candidate pattern generation. To calculate the support of an NTP, depth-first and backtracking strategies are adopted, which do not require creating a whole Nettree structure, meaning that many redundant nodes and parent-child relationships do not need to be created; hence, time and space efficiency is improved. To generate candidate patterns while reducing their number, NTP-Miner employs a pattern join strategy and only mines patterns of strong and medium interest. Experimental results on stock market and protein datasets show that NTP-Miner not only is more efficient than other competitive approaches but can also help users find more valuable patterns. More importantly, NTP mining achieves better performance than other competitive methods in clustering tasks. Algorithms and data are available at: https://github.com/wuc567/Pattern-Mining/tree/master/NTP-Miner.
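    To make the support computation concrete, here is a toy counter of nonoverlapping occurrences under a gap constraint, using the same depth-first, backtracking flavor the abstract describes. The gap bounds, matching routine, and example are illustrative assumptions, and the sketch deliberately ignores the Nettree machinery and the three-way interest levels.

```python
# Illustrative support counter: nonoverlapping occurrences of a pattern where
# consecutive elements may skip between MIN_GAP and MAX_GAP items (assumed
# semantics for the sketch; not the paper's NTP-Miner code).
MIN_GAP, MAX_GAP = 0, 2

def match_once(seq, pattern, used):
    """Depth-first search for one occurrence whose positions are all unused."""
    def dfs(p_idx, last_pos):
        if p_idx == len(pattern):
            return []
        lo = last_pos + 1 + MIN_GAP
        hi = min(len(seq) - 1, last_pos + 1 + MAX_GAP)
        for pos in range(lo, hi + 1):
            if pos not in used and seq[pos] == pattern[p_idx]:
                rest = dfs(p_idx + 1, pos)
                if rest is not None:
                    return [pos] + rest
        return None  # backtrack
    for first in range(len(seq)):
        if first not in used and seq[first] == pattern[0]:
            rest = dfs(1, first)
            if rest is not None:
                return [first] + rest
    return None

def nonoverlapping_support(seq, pattern):
    """Count occurrences; 'nonoverlapping' means no position is reused."""
    used, count = set(), 0
    while True:
        occ = match_once(seq, pattern, used)
        if occ is None:
            return count
        used.update(occ)
        count += 1

print(nonoverlapping_support("abcabcaabc", "abc"))   # prints 3
```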