%A Leite, Abe
%A Izquierdo, Eduardo
%E Cejkova, Jitka
%E Holler, Silvia
%E Soros, Lisa
%E Witkowski, Olaf
%D 2021
%M OSTI ID: 10286536
%T Generating reward structures on a parameterized distribution of dynamics tasks
%X In order to make lifelike, versatile learning adaptive in the artificial domain, one needs a very diverse set of behaviors to learn. We propose a parameterized distribution of classic control-style tasks with minimal information shared between tasks. We discuss what makes a task trivial and offer a basic metric, time in convergence, that measures triviality. We then investigate analytic and empirical approaches to generating reward structures for tasks based on their dynamics in order to minimize triviality. Contrary to our expectations, populations evolved on reward structures that incentivized the most stable locations in state space spend the least time in convergence as we have defined it, because of the outsized importance our metric assigns to behavior fine-tuning in these contexts. This work paves the way towards an understanding of which task distributions enable the development of learning.
%U https://doi.org/10.1162/isal_a_00466