We introduce the General Video Game Rule Generation problem, and the eponymous software framework which will be used in a new track of the General Video Game AI (GVGAI) competition. The problem is, given a game level as input, to generate the rules of a game that fits that level. This can be seen as the inverse of the General Video Game Level Generation problem. Conceptualizing these two problems as separate helps break the very hard problem of generating complete games into smaller, more manageable subproblems. The proposed framework builds on the GVGAI software and thus asks the rule generator for rules defined in the Video Game Description Language. We describe the API, and three different rule generators: a random, a constructive, and a search-based generator. Early results indicate that the constructive generator generates playable and somewhat interesting game rules but has a limited expressive range, whereas the search-based generator generates remarkably diverse rulesets, but of uneven quality.
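The framework itself builds on the Java-based GVGAI software; purely for orientation, here is a minimal Python sketch of what the simplest of the three generators, the random one, might look like. The interaction verbs and termination classes follow publicly documented VGDL conventions, but the function and its interface are our invention, not the framework's actual rule-generation API.

```python
import random

# Hypothetical sketch of a random rule generator in the GVGAI setting:
# given the sprite names appearing in an input level, emit a VGDL-style
# interaction set and termination set. The verbs and termination classes
# are standard VGDL; the interface itself is illustrative only.

VERBS = ["killSprite", "stepBack", "bounceForward", "transformTo"]

def random_rules(sprites, seed=0):
    """Return (interactions, terminations) as lists of VGDL-style lines."""
    rng = random.Random(seed)
    interactions = []
    for _ in range(rng.randint(3, 8)):
        first, second = rng.sample(sprites, 2)
        verb = rng.choice(VERBS)
        line = f"{first} {second} > {verb}"
        if verb == "transformTo":
            line += f" stype={rng.choice(sprites)}"
        interactions.append(line)
    # One winning and one losing condition keep every generated game endable.
    target = rng.choice(sprites)
    terminations = [
        f"SpriteCounter stype={target} limit=0 win=True",
        "Timeout limit=1000 win=False",
    ]
    return interactions, terminations

if __name__ == "__main__":
    ints, terms = random_rules(["avatar", "wall", "monster", "goal"])
    print("\n".join(ints + terms))
```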
Multi-Objective level generator generation with Marahel
This paper introduces a new system for designing constructive level generators by searching the space of constructive level generators defined by the Marahel language. We use NSGA-II, a multi-objective optimization algorithm, to search for generators for three different problems (Binary, Zelda, and Sokoban). We restrict the representation to a subset of the Marahel language to push the evolution to find more efficient generators. The results show that the generated generators were able to achieve good performance on most of the fitness functions over these three problems. However, on Zelda and Sokoban they tend to depend on the initial state rather than modifying the map.
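As a loose sketch of the overall search setup (not the authors' code: the candidate representation, both objectives, and all names below are simplified stand-ins for Marahel programs and the paper's per-problem fitness functions), a multi-objective loop of this shape could look as follows.

```python
import random

# Minimal sketch of multi-objective search over a space of constructive
# level generators. A candidate is a short list of rule tokens standing in
# for a restricted Marahel program, and the two objectives are toy
# stand-ins for the paper's per-problem fitness functions.

RULES = ["fill_floor", "scatter_walls", "connect_rooms", "place_goal"]

def random_candidate(rng, length=4):
    return [rng.choice(RULES) for _ in range(length)]

def evaluate(cand):
    # Objective 1: rule variety; objective 2: reward generators that
    # actually modify the map (cf. the paper's observation that evolved
    # generators can lean on the initial state instead).
    variety = len(set(cand)) / len(RULES)
    modifies = 1.0 if "place_goal" in cand else 0.0
    return (variety, modifies)

def dominates(f, g):
    # Pareto dominance for maximization: at least as good everywhere,
    # strictly better somewhere.
    return all(a >= b for a, b in zip(f, g)) and any(a > b for a, b in zip(f, g))

def search(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [random_candidate(rng) for _ in range(pop_size)]
    front = pop[:]
    for _ in range(generations):
        fits = [evaluate(c) for c in pop]
        # Keep the non-dominated candidates, then refill by mutation.
        front = [c for c, f in zip(pop, fits)
                 if not any(dominates(g, f) for g in fits)]
        pop = [c[:] for c in front]
        while len(pop) < pop_size:
            child = rng.choice(front)[:]
            child[rng.randrange(len(child))] = rng.choice(RULES)
            pop.append(child)
    return front

if __name__ == "__main__":
    for cand in search():
        print(cand, evaluate(cand))
```

A faithful NSGA-II implementation would replace the bare Pareto filter above with non-dominated sorting ranks plus crowding-distance selection; the sketch only shows the shape of the loop.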
- Award ID(s): 1717324
- PAR ID: 10231888
- Date Published:
- Journal Name: International Conference on the Foundations of Digital Games
- Page Range / eLocation ID: 1 to 8
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- For a complexity class $$C$$ and language $$L$$, a constructive separation of $$L \notin C$$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $$C$$-algorithm attempting to decide $$L$$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that constructiveness serves as a dividing line between many weak lower bounds we know how to prove and strong lower bounds against $$P$$, $$ZPP$$, and $$BPP$$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $$EXP \neq BPP$$ to even $$P \neq NP$$. 2. Our second set of results shows that for most major open problems in lower bounds against $$P$$, $$ZPP$$, and $$BPP$$, including $$P \neq NP$$, $$P \neq PSPACE$$, $$P \neq PP$$, $$ZPP \neq EXP$$, and $$BPP \neq NEXP$$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $$P \neq NP$$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $$BPP \neq NEXP$$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $$t$$, there are no constructive separations for detecting high $$t$$-time Kolmogorov complexity (a task which is known to be not in $$P$$) from any complexity class, unconditionally.
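To fix intuition, the central definition can be phrased as follows (our paraphrase, not the paper's exact formalization): a separation $$L \notin C$$ is constructive if there is an efficient refuter $$R$$ such that, for every $$C$$-algorithm $$A$$ purporting to decide $$L$$, the refuter $$R$$, given the code of $$A$$ and a length parameter $$n$$, outputs some $$x \in \{0,1\}^n$$ with $$A(x) \neq L(x)$$, for infinitely many $$n$$.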
- There is a wide array of methods for writing code generators. We advocate for a point in the design space, which we call *metaprogramming with combinators*, where programmers use (and write) combinator libraries that directly manipulate object-language terms. The key language feature that makes this style of programming palatable is quasiquotation. Our approach leverages quasiquotation and other host language features to provide what is essentially a rich, well-typed macro language. Unlike other approaches, metaprogramming with combinators allows full control over generated code, thereby also providing full control over performance and resource usage. This control does not require sacrificing the ability to write high-level abstractions. We demonstrate metaprogramming with combinators through several code generators written in Haskell that produce VHDL targeted to Xilinx FPGAs.
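The paper's generators rely on Haskell quasiquotation; as a rough, language-neutral illustration of the combinator style only (all types and names here are invented, and Python has no quasiquotation, so object-language terms are built explicitly), consider:

```python
from dataclasses import dataclass

# Rough illustration of "metaprogramming with combinators": combinators
# build and compose object-language terms directly, and the final term is
# pretty-printed. In Haskell, quasiquotation brackets would construct
# these nodes for you.

@dataclass
class Var:
    name: str

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Assign:
    target: str
    expr: object

def summation(target, terms):
    """Combinator: fold a list of expressions into one assignment."""
    acc = terms[0]
    for t in terms[1:]:
        acc = Add(acc, t)
    return Assign(target, acc)

def emit(node):
    """Pretty-print a term as a VHDL-flavored signal assignment."""
    if isinstance(node, Var):
        return node.name
    if isinstance(node, Add):
        return f"({emit(node.left)} + {emit(node.right)})"
    if isinstance(node, Assign):
        return f"{node.target} <= {emit(node.expr)};"
    raise TypeError(node)

if __name__ == "__main__":
    print(emit(summation("s", [Var(f"x{i}") for i in range(4)])))
    # s <= (((x0 + x1) + x2) + x3);
```

The point is that `summation` manipulates terms, not strings, so the generator keeps full control over the shape of the emitted code while still offering a high-level abstraction.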
- A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudorandom generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion, based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function $$f$$ on the given instance. Follow-up works managed to obtain full equivalences in the BPP setting by exploiting a compression property of classical pseudorandom generator constructions. In particular, Chen, Tell, and Williams (FOCS'23) showed that derandomization of BPP is equivalent to constructive lower bounds against algorithms that go through a compression phase. In this paper, we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function $$f$$ computable by a nondeterministic algorithm that runs in time $$n^a$$. We show that if every Arthur-Merlin protocol that runs in time $$n^c$$ for $$c = O(\log^2 a)$$ can only compute $$f$$ correctly on finitely many inputs, then AM is in NP. We also obtain equivalences between constructive lower bounds against Arthur-Merlin protocols that go through a compression phase and derandomization of AM via targeted generators. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs of proximity for nondeterministic computations. As a by-product of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). By-products in the average-case setting include the first uniform hardness vs. randomness trade-offs for AM, as well as an unconditional mild derandomization result for AM.
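For orientation, the standard intuition behind such generators is as follows (our paraphrase, not the paper's exact definition): a targeted hitting-set generator for AM maps an input $$x$$ to a polynomial-size set $$H_x$$ of candidate coin strings for Arthur. For a protocol with perfect completeness, an NP simulation accepts $$x$$ iff Merlin can supply an accepting reply for every $$r \in H_x$$; this is sound provided $$H_x$$ hits the set of rejecting coin strings of every no-instance, which is exactly what the generator must guarantee.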
- Recent advances in the capacity of large language models to generate human-like text have resulted in their increased adoption in user-facing settings. In parallel, these improvements have prompted a heated discourse around the risks of societal harms they introduce, whether inadvertent or malicious. Several studies have explored these harms and called for their mitigation via development of safer, fairer models. Going beyond enumerating the risks of harm, this work provides a survey of practical methods for addressing potential threats and societal harms from language generation models. We draw on several prior works' taxonomies of language model risks to present a structured overview of strategies for detecting and ameliorating different kinds of risks/harms of language generators. Bridging diverse strands of research, this survey aims to serve as a practical guide for both LM researchers and practitioners, with explanations of different strategies' motivations, their limitations, and open problems for future research.