

Search for: All records

Creators/Authors contains: "Candadai, Madhavun"


  1. Behavior involves the ongoing interaction between an organism and its environment. One of the prevailing theories of adaptive behavior is that organisms constantly make predictions about their future environmental stimuli. However, how they acquire that predictive information is still poorly understood. Two complementary mechanisms have been proposed: predictions are either generated from an agent’s internal model of the world or extracted directly from the environmental stimulus. In this work, we demonstrate that predictive information, measured using bivariate mutual information, cannot distinguish between these two kinds of systems. Furthermore, we show that predictive information cannot distinguish between organisms that are adapted to their environments and random dynamical systems exposed to the same environment. To understand the role of predictive information in adaptive behavior, we need to be able to identify where it is generated. To do this, we decompose information transfer across the different components of the organism-environment system and track the flow of information in the system over time. To validate the proposed framework, we applied it to a set of computational models of idealized agent-environment systems. Analysis of these systems revealed three key insights. First, predictive information, when sourced from the environment, can be reflected in any agent irrespective of its ability to perform a task. Second, predictive information, when sourced from the nervous system, requires special dynamics acquired during the process of adapting to the environment. Third, the magnitude of predictive information in a system can differ for the same task if the environmental structure changes.
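    The predictive information discussed above is the bivariate mutual information between a variable’s present and its future. As a rough illustration (not the authors’ code; the histogram binning, lag, and example signals are assumptions of this sketch), it can be estimated from a sampled time series as follows:

```python
import numpy as np

def predictive_information(x, lag=1, bins=16):
    """Estimate I(x_t; x_{t+lag}) in bits from a 1-D time series by
    histogram binning (illustrative estimator, not the paper's code)."""
    present, future = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(present, future, bins=bins)
    p = joint / joint.sum()
    p_present = p.sum(axis=1, keepdims=True)   # marginal of x_t
    p_future = p.sum(axis=0, keepdims=True)    # marginal of x_{t+lag}
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (p_present @ p_future)[nz])))

# A structured signal carries predictive information; white noise carries little.
t = np.linspace(0, 20 * np.pi, 5000)
print(predictive_information(np.sin(t) + 0.1 * np.random.randn(t.size)))
print(predictive_information(np.random.randn(t.size)))
```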
  2. Living organisms learn on multiple time scales: evolutionary as well as individual-lifetime learning. These two learning modes are complementary: the innate phenotypes developed through evolution significantly influence lifetime learning. However, it is still unclear how these two learning methods interact, and whether there is a benefit to optimizing one part of the system on an evolutionary time scale with a population-based approach while the rest is trained within the lifetime by an individual learning algorithm. In this work, we study the benefits of such a hybrid approach using an actor-critic framework in which the critic is optimized over evolutionary time based on its ability to train the actor during the agent’s lifetime. Typically, critics are optimized on the same time scale as the actor, using the Bellman equation to represent long-term expected reward. We show that evolution can find a variety of different solutions that still enable an actor to learn to perform a behavior during its lifetime. We also show that although the solutions found by evolution represent different functions, they all provide similar training signals during the lifetime. This suggests that learning on multiple time scales can effectively simplify the overall optimization process in the actor-critic framework by finding one of many solutions that can still train an actor just as well. Furthermore, analysis of the evolved critics can yield additional possibilities for reinforcement learning beyond the Bellman equation.
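    A minimal sketch of the hybrid scheme described above, assuming a toy 1-D reaching task, a linear Gaussian actor, and a linear critic (the task, the policy form, and the update rule are assumptions of this illustration, not the paper’s setup): the critic’s parameters are evolved in an outer loop, and a candidate critic is scored by how well it trains a fresh actor during that actor’s lifetime.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_lifetime(critic_w, train_episodes=20, steps=30, lr=0.05):
    """Train a fresh linear actor for one lifetime on a toy 1-D reaching task,
    using the evolved critic as the baseline in its update (no Bellman target),
    then return the trained actor's evaluation score.  Illustrative sketch."""
    actor_w = np.zeros(2)

    def episode(learn):
        nonlocal actor_w
        x, total = 0.0, 0.0
        for _ in range(steps):
            obs = np.array([x, 1.0])                          # position + bias
            action = obs @ actor_w + 0.1 * rng.standard_normal()
            reward = -abs(1.0 - (x + action))                 # reach the target at x = 1
            if learn:
                # Advantage signal comes from the evolved critic, clipped for stability.
                advantage = float(np.clip(reward - obs @ critic_w, -2.0, 2.0))
                actor_w = actor_w + lr * advantage * (action - obs @ actor_w) * obs
            x = float(np.clip(x + action, -5.0, 5.0))
            total += reward
        return total

    for _ in range(train_episodes):                           # lifetime learning phase
        episode(learn=True)
    return episode(learn=False)                               # evaluation phase

# Outer loop: evolve the critic's weights on the actor's post-learning performance.
best = rng.standard_normal(2)
for generation in range(30):
    candidates = [best] + [best + 0.2 * rng.standard_normal(2) for _ in range(10)]
    best = max(candidates, key=run_lifetime)
print("evolved critic weights:", best)
```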
  3. Artificial Life has a long tradition of studying the interaction between learning and evolution, and thanks to the increasing use of individual learning techniques in Artificial Intelligence, work combining individual and evolutionary learning has recently seen a revival. Despite the breadth of work in this area, the exact trade-offs between these two forms of learning remain unclear. In this work, we systematically examine the effect of task difficulty, the individual learning approach, and the form of inheritance on the performance of the population across different combinations of learning and evolution. We analyze in depth the conditions under which hybrid strategies that combine lifetime and evolutionary learning outperform either lifetime or evolutionary learning in isolation. We also discuss the importance of these results in both a biological and an algorithmic context.
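    One common reading of “form of inheritance” in this setting is Darwinian versus Lamarckian inheritance, i.e. whether the weights changed by lifetime learning are written back into the genome. The sketch below illustrates that distinction on a toy task; the task, the learning rule, and this Darwinian/Lamarckian reading are assumptions of the illustration rather than the paper’s exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

TARGET = np.array([1.0, -2.0, 0.5])           # optimum of the toy task

def lifetime_learning(genome, steps=5, lr=0.1):
    """Simple gradient-style individual learning on a toy quadratic task.
    Few steps keep the task hard for lifetime learning alone."""
    w = genome.copy()
    for _ in range(steps):
        w -= lr * 2 * (w - TARGET)
    return w, -float(np.sum((w - TARGET) ** 2))   # fitness after learning

def evolve(lamarckian, generations=50, pop=20):
    """Hybrid evolution + lifetime learning.  With lamarckian=True the learned
    weights are written back into the genome; with False only the innate
    genome is inherited (Darwinian)."""
    genomes = [rng.standard_normal(3) for _ in range(pop)]
    for _ in range(generations):
        scored = []
        for g in genomes:
            learned, fitness = lifetime_learning(g)
            scored.append((fitness, learned if lamarckian else g))
        scored.sort(key=lambda s: s[0], reverse=True)
        parents = [w for _, w in scored[: pop // 4]]
        genomes = [parents[rng.integers(len(parents))] + 0.1 * rng.standard_normal(3)
                   for _ in range(pop)]
    return max(lifetime_learning(g)[1] for g in genomes)

print("Darwinian best fitness: ", evolve(lamarckian=False))
print("Lamarckian best fitness:", evolve(lamarckian=True))
```

    Varying `steps` in `lifetime_learning` is one way to mimic the task-difficulty axis the abstract refers to.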
  4. Living organisms perform multiple tasks, often using the same or shared neural networks. Such multifunctional neural networks are composed of neurons that contribute to different degrees in the different behaviors. In this work, we take a computational modeling approach to evaluate the extent to which neural resources are specialized or shared across different behaviors. To this end, we develop multifunctional feed-forward neural networks that are capable of performing three control tasks: inverted pendulum, cartpole balancing, and single-legged walking. We then perform information lesions of individual neurons to determine their contribution to each task. Following that, we investigate the ability of two commonly used methods to estimate a neuron’s contribution from its activity: neural variability and mutual information. Our study reveals the following: first, the same feed-forward neural network is capable of reusing its hidden-layer neurons to perform multiple behaviors; second, information lesions reveal that the same behaviors are performed with different levels of reuse in different neural networks; and finally, mutual information is a better estimator of a neuron’s contribution to a task than neural variability.
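    A minimal sketch of the lesion-and-estimate procedure described above, using a small random network and a toy regression target as stand-ins for the trained multifunctional control networks (the network, the task, and the mean-clamping approximation of an information lesion are assumptions of this illustration): each hidden neuron’s contribution is measured by the performance drop when it is lesioned, and compared against its variance and its mutual information with the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small feed-forward network; random weights and a toy regression target stand in
# for the trained multifunctional control networks (assumptions of this sketch).
W1, W2 = rng.standard_normal((3, 8)), rng.standard_normal((8, 1))
X = rng.uniform(-1, 1, size=(2000, 3))               # sampled observations
target = np.sin(X.sum(axis=1, keepdims=True))        # stand-in task output

def forward(x, lesion=None):
    h = np.tanh(x @ W1)
    if lesion is not None:
        h = h.copy()
        h[:, lesion] = h[:, lesion].mean()           # crude lesion: clamp to mean activity
    return np.tanh(h @ W2), h

def performance(lesion=None):
    y, _ = forward(X, lesion)
    return -float(np.mean((y - target) ** 2))        # higher is better

def mutual_information(a, b, bins=12):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pa @ pb)[nz])))

base = performance()
output, hidden = forward(X)
for i in range(W1.shape[1]):
    drop = base - performance(lesion=i)              # contribution via lesioning
    var = float(hidden[:, i].var())                  # neural-variability estimate
    mi = mutual_information(hidden[:, i], output[:, 0])
    print(f"neuron {i}: lesion drop={drop:.4f}  variance={var:.3f}  MI={mi:.3f} bits")
```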