Title: Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat
Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization to parse objects in the environment and to guide movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects, tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat's inspection of objects in the physical environment sharpens tuning of single neurons, and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, in combination with an echo model that computes the animal's instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task.
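To make the echo-model idea concrete, the sketch below shows one way an instantaneous egocentric stimulus (azimuth, elevation, range, and two-way echo delay) could be computed from a tracked bat pose and object position. This is a minimal illustration under our own assumptions about coordinate conventions and function names; it is not the authors' implementation.

```python
import numpy as np

def egocentric_stimulus(bat_pos, bat_R, obj_pos, speed_of_sound=343.0):
    """Convert an object's allocentric (world) position into the bat's
    egocentric coordinates: azimuth, elevation, range, and echo delay.

    bat_pos : (3,) bat position in world coordinates (m)
    bat_R   : (3, 3) rotation matrix whose rows are the bat's head axes
              (x = forward, y = left, z = up) expressed in world coordinates
    obj_pos : (3,) object position in world coordinates (m)
    """
    # Vector to the object in the bat's head-centered frame
    v = bat_R @ (np.asarray(obj_pos, float) - np.asarray(bat_pos, float))
    rng = float(np.linalg.norm(v))                 # distance to object (m)
    azimuth = np.degrees(np.arctan2(v[1], v[0]))   # positive = left of midline
    elevation = np.degrees(np.arcsin(v[2] / rng))
    echo_delay = 2.0 * rng / speed_of_sound        # round-trip travel time (s)
    return azimuth, elevation, rng, echo_delay

# Example: object 1 m ahead and 0.5 m to the left of a bat facing +x
az, el, r, tau = egocentric_stimulus([0, 0, 0], np.eye(3), [1.0, 0.5, 0.0])
```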
Award ID(s): 1734744
NSF-PAR ID: 10104465
Journal Name: eLife
Volume: 7
ISSN: 2050-084X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    A diversity of defence colourations that shift over time provides protection against natural enemies. Adaptations for camouflage depend on an organism's interactions with the natural environment (predators, habitat), which can change ontogenetically. Wallace's flying frogs (Rhacophorus nigropalmatus) are cryptic emerald green in their adult life stage, but juveniles are bright red and develop white spots on their backs 1 month after metamorphosis. This conspicuous juvenile appearance might function as an antipredator strategy in which the frogs masquerade as bird or bat droppings, so that predators misidentify them as inedible objects. To test this idea, we created paraffin wax frog models of four types (red with white spots, red without white spots, green, and unpainted) and placed them in equal numbers within a >800 m² rainforest house at the Vienna Zoo. This environment closely resembles the Bornean rainforest and includes several free-living avian predators of frogs. We observed an overall hit rate of 15.5%. A visual model showed that avian predators could discriminate the contrast of red, green, and control models against the background colouration, with green models showing less chromatic difference than red morphs. The attack rate was significantly greater for red models but was halved when red models carried white spots. The data therefore support the hypothesis that the juvenile colouration acts as a masquerade strategy, disguising frogs as animal droppings and providing protection similar to that of the cryptic green adult colour. We discuss the ontogenetic colour change as a possible antipredator strategy in relation to the different habitats used at different life stages.

    Significance statement

    Predation pressure and the evolution of antipredator strategies sit at the cornerstone of animal-behaviour research. Effective antipredator strategies can change in response to the different habitats that animals use during different life stages. We study ontogenetic colour change as a dynamic antipredator strategy in juvenile and adult Wallace's flying frogs. We show that the unusual colour pattern of juveniles (bright red with small white spots) likely functions as a masquerade of animal droppings. Specifically, we show that white dotting, which can be associated with animal faeces, is the main visual feature that turns an otherwise highly conspicuous individual into a surprisingly well-camouflaged one. To our knowledge, this is the first experimental exploration of a vertebrate masquerading as animal droppings.

     
  2.
    Neurophysiological recordings in behaving rodents demonstrate neuronal response properties that may code space and time for episodic memory and goal-directed behaviour. Here, we review recordings from hippocampus, entorhinal cortex, and retrosplenial cortex to address the problem of how neurons encode multiple overlapping spatiotemporal trajectories and disambiguate these for accurate memory-guided behaviour. The solution could involve neurons in the entorhinal cortex and hippocampus that show mixed selectivity, coding both time and location. Some grid cells and place cells that code space also respond selectively as time cells, allowing differentiation of time intervals when a rat runs in the same location during a delay period. Cells in these regions also develop new representations that differentially code the context of prior or future behaviour allowing disambiguation of overlapping trajectories. Spiking activity is also modulated by running speed and head direction, supporting the coding of episodic memory not as a series of snapshots but as a trajectory that can also be distinguished on the basis of speed and direction. Recent data also address the mechanisms by which sensory input could distinguish different spatial locations. Changes in firing rate reflect running speed on long but not short time intervals, and few cells code movement direction, arguing against path integration for coding location. Instead, new evidence for neural coding of environmental boundaries in egocentric coordinates fits with a modelling framework in which egocentric coding of barriers combined with head direction generates distinct allocentric coding of location. The egocentric input can be used both for coding the location of spatiotemporal trajectories and for retrieving specific viewpoints of the environment. Overall, these different patterns of neural activity can be used for encoding and disambiguation of prior episodic spatiotemporal trajectories or for planning of future goal-directed spatiotemporal trajectories. 
  3.
    Abstract

    Inkjet 3D printing has broad applications in areas such as health and energy due to its capability to precisely deposit micro-droplets of multi-functional materials. However, droplets in inkjet printing exhibit different jetting behaviors, including drop initiation, thinning, necking, pinching, and flying, and these behaviors are vulnerable to disturbances from vibration, material inhomogeneity, and other sources. Such issues make it challenging to achieve a consistent printing process and a defect-free final product with the desired properties. Timely recognition of droplet behavior is therefore critical for inkjet printing quality assessment, and in-situ video monitoring of the printing process paves a way for such recognition. In this paper, a novel feature identification framework is presented to recognize the spatiotemporal features of in-situ monitoring videos for inkjet printing. Specifically, a spatiotemporal fusion network is used for droplet printing behavior classification. The categories are based on inkjet printability, which is related to both static features (ligament, satellite, and meniscus) and dynamic features (ligament thinning, droplet pinch-off, meniscus oscillation). For the recorded droplet jetting videos, two network streams, one over frames sampled from the video in the spatial domain (associated with static features) and one over optical flow in the temporal domain (associated with dynamic features), are fused in different ways to recognize the evolving droplet behavior (a rough code sketch of such a two-stream fusion appears after this abstract). Experimental results show that the proposed fusion network can recognize droplet jetting behavior in the complex printing process and identify its printability with learned knowledge, which can ultimately enable real-time inkjet printing quality control and provide guidance for designing optimal parameter settings for the inkjet printing process.

     
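As a rough illustration of the two-stream idea in the abstract above (a spatial stream over sampled frames and a temporal stream over optical flow, fused for droplet-behavior classification), the PyTorch-style sketch below uses late fusion by concatenation. The layer sizes, number of classes, and fusion choice are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """Small convolutional encoder used by both streams."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)   # (batch, 32) embedding

class TwoStreamFusion(nn.Module):
    """Late fusion: concatenate spatial (frame) and temporal (optical-flow)
    embeddings, then classify the droplet jetting behavior."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.spatial = StreamCNN(in_channels=3)   # RGB frame
        self.temporal = StreamCNN(in_channels=2)  # (dx, dy) optical flow
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, frame, flow):
        z = torch.cat([self.spatial(frame), self.temporal(flow)], dim=1)
        return self.classifier(z)

# Example forward pass on a batch of 8 frame/flow pairs
model = TwoStreamFusion()
logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 2, 64, 64))
```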
  4. Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames with convolutional neural networks (CNNs) (the frame pipeline) is resource-limited on such constrained aerial edge robots. Even with more compute resources, throughput is ultimately capped at the camera's frame rate, and frame-only systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras paired with spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal details of the scene at high speed but lags in accuracy. In this work, we propose a target localization system that combines event-camera and SNN-based high-speed target estimation with frame-based camera and CNN-driven reliable object detection, fusing the complementary spatio-temporal strengths of the event and frame pipelines (a simplified sketch of this fusion idea appears after this abstract). One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate this neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments, and is then demonstrated in a real-world multi-drone set-up with emulated event data. Subsequently, we use recorded sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show the desired behavior while tolerating realistic noise in the vision and IMU sensors. We analyze the design space to identify optimal parameters for the spiking neurons and CNN models and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities, inspired by discoveries in neuroscience, to break fundamental trade-offs in frame-based computer vision.
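A minimal sketch of the complementary-fusion idea described above: a high-rate event/SNN pipeline keeps a target estimate fresh between frames, while lower-rate frame/CNN detections periodically pull that estimate back toward a reliable value. The update rule, gains, and update rates below are illustrative assumptions, not the paper's fusion network.

```python
import numpy as np

class TargetFusion:
    """Blend fast but noisy event-pipeline estimates with slower,
    more reliable frame-pipeline detections (illustrative only)."""

    def __init__(self, event_gain=0.2, frame_gain=0.6):
        self.event_gain = event_gain    # weight of each high-rate SNN estimate
        self.frame_gain = frame_gain    # weight of each low-rate CNN detection
        self.estimate = None            # current fused (x, y) target position

    def _blend(self, xy, gain):
        xy = np.asarray(xy, dtype=float)
        self.estimate = xy if self.estimate is None else (1 - gain) * self.estimate + gain * xy
        return self.estimate

    def on_event_estimate(self, xy):
        """High-rate SNN estimate: tracks the target between camera frames."""
        return self._blend(xy, self.event_gain)

    def on_frame_detection(self, xy):
        """Low-rate CNN detection: corrects drift in the fused estimate."""
        return self._blend(xy, self.frame_gain)

# Example: frequent event updates interleaved with sparse frame detections
fusion = TargetFusion()
for t in range(300):
    fusion.on_event_estimate([t * 0.01, 5.0])
    if t % 10 == 0:
        fusion.on_frame_detection([t * 0.01, 5.1])
```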
  5. Unlike other predators that use vision as their primary sensory system, bats compute the three-dimensional (3D) position of flying insects from discrete echo snapshots, which raises questions about the strategies they employ to track and intercept erratically moving prey from interrupted sensory information. Here, we devised an ethologically inspired behavioral paradigm to directly test the hypothesis that echolocating bats build internal prediction models from dynamic acoustic stimuli to anticipate the future location of moving auditory targets. We quantified the direction of the bat's head/sonar beam aim and echolocation call rate as it tracked a target that moved across its sonar field, and applied mathematical models to differentiate between nonpredictive and predictive tracking behaviors (a simplified sketch of this model comparison appears after this abstract). We discovered that big brown bats accumulate information across echo sequences to anticipate an auditory target's future position. Further, when a moving target is hidden from view by an occluder during a portion of its trajectory, the bat continues to track its position using an internal model of the target's motion path. Our findings also reveal that the bat increases its sonar call rate when its prediction of the target trajectory is violated by a sudden change in target velocity. This shows that the bat rapidly adapts its sonar behavior to update internal models of auditory target trajectories, which would enable tracking of evasive prey. Collectively, these results demonstrate that the echolocating big brown bat integrates acoustic snapshots over time to build prediction models of a moving auditory target's trajectory and enable prey capture under conditions of uncertainty.

     
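The model-comparison logic described above, i.e., asking whether the bat's head/sonar aim is better explained by the target's current position or by an extrapolated future position, can be sketched roughly as follows. The fixed lead time, linear extrapolation, and error metric are simplifying assumptions, not the study's actual models.

```python
import numpy as np

def anticipated_positions(target_xy, dt, lead_time):
    """Linearly extrapolate each target sample 'lead_time' seconds ahead,
    using velocities estimated from samples spaced 'dt' seconds apart."""
    velocity = np.gradient(target_xy, dt, axis=0)
    return target_xy + velocity * lead_time

def mean_aim_error(head_aim_xy, reference_xy):
    """Mean Euclidean distance between head/sonar aim and a reference trajectory."""
    return float(np.mean(np.linalg.norm(head_aim_xy - reference_xy, axis=1)))

def classify_tracking(head_aim_xy, target_xy, dt=0.01, lead_time=0.1):
    """Label tracking 'predictive' if head aim matches the target's anticipated
    position better than its current position; otherwise 'nonpredictive'."""
    err_current = mean_aim_error(head_aim_xy, target_xy)
    err_future = mean_aim_error(head_aim_xy,
                                anticipated_positions(target_xy, dt, lead_time))
    label = "predictive" if err_future < err_current else "nonpredictive"
    return label, err_current, err_future
```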