
Title: Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat
Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization to parse objects in the environment and to guide movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects, tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat’s inspection of objects in the physical environment sharpens the tuning of single neurons and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, in combination with an echo model that computes the animal’s instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task.
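In this framing, each echo specifies a target in head-centered coordinates: distance follows from the two-way travel time of sound, and azimuth and elevation from the arrival direction relative to the head. The Python sketch below illustrates only that geometry; it is not the echo model used in the study, and every name in it is ours.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

    def echo_delay_to_range(delay_s):
        # Sound travels out and back, so halve the round-trip distance.
        return SPEED_OF_SOUND * delay_s / 2.0

    def egocentric_coordinates(bat_pos, head_x, head_y, head_z, obj_pos):
        # Express an object's location as (azimuth, elevation, distance)
        # in a head-centered frame: head_x points forward, head_y left,
        # head_z up, and the three form an orthonormal basis.
        v = np.asarray(obj_pos, float) - np.asarray(bat_pos, float)
        x, y, z = v @ head_x, v @ head_y, v @ head_z
        distance = np.linalg.norm(v)
        azimuth = np.degrees(np.arctan2(y, x))      # positive = leftward
        elevation = np.degrees(np.arcsin(z / distance))
        return azimuth, elevation, distance

    # An object 1 m ahead of and 0.5 m above a bat at the origin:
    print(egocentric_coordinates([0, 0, 0], np.array([1.0, 0, 0]),
                                 np.array([0, 1.0, 0]), np.array([0, 0, 1.0]),
                                 [1.0, 0.0, 0.5]))
    print(echo_delay_to_range(0.006))  # a 6 ms echo delay -> ~1.03 m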
Award ID(s):
1734744
Publication Date:
2018
NSF-PAR ID:
10104465
Journal Name:
eLife
Volume:
7
ISSN:
2050-084X
Sponsoring Org:
National Science Foundation
More Like This
  1. Neurophysiological recordings in behaving rodents demonstrate neuronal response properties that may code space and time for episodic memory and goal-directed behaviour. Here, we review recordings from hippocampus, entorhinal cortex, and retrosplenial cortex to address the problem of how neurons encode multiple overlapping spatiotemporal trajectories and disambiguate these for accurate memory-guided behaviour. The solution could involve neurons in the entorhinal cortex and hippocampus that show mixed selectivity, coding both time and location. Some grid cells and place cells that code space also respond selectively as time cells, allowing differentiation of time intervals when a rat runs in the same location during a delay period. Cells in these regions also develop new representations that differentially code the context of prior or future behaviour, allowing disambiguation of overlapping trajectories. Spiking activity is also modulated by running speed and head direction, supporting the coding of episodic memory not as a series of snapshots but as a trajectory that can also be distinguished on the basis of speed and direction. Recent data also address the mechanisms by which sensory input could distinguish different spatial locations. Changes in firing rate reflect running speed on long but not short time intervals, and few cells code movement direction, arguing against path integration for coding location. Instead, new evidence for neural coding of environmental boundaries in egocentric coordinates fits with a modelling framework in which egocentric coding of barriers, combined with head direction, generates distinct allocentric coding of location (see the sketch below). The egocentric input can be used both for coding the location of spatiotemporal trajectories and for retrieving specific viewpoints of the environment. Overall, these different patterns of neural activity can be used for encoding and disambiguation of prior episodic spatiotemporal trajectories or for planning of future goal-directed spatiotemporal trajectories.
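     The egocentric-to-allocentric transformation invoked in the modelling framework above is, at its core, a rotation by the head direction followed by a translation by the animal's position. The 2D Python sketch below illustrates that single step; it is a minimal geometric illustration with invented names, not the cited model.

         import numpy as np

         def ego_to_allo(animal_xy, head_dir_rad, bearing_rad, dist):
             # Rotate the egocentric bearing by the head direction, then
             # translate by the animal's allocentric position.
             theta = head_dir_rad + bearing_rad
             return np.asarray(animal_xy, float) + dist * np.array(
                 [np.cos(theta), np.sin(theta)])

         # A boundary sensed 30 degrees to the left at 0.4 m while the
         # animal stands at (1.0, 2.0) facing "north" (90 degrees):
         print(ego_to_allo([1.0, 2.0], np.radians(90), np.radians(30), 0.4))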
  2. Inkjet 3D printing has broad applications in areas such as health and energy due to its capability to precisely deposit micro-droplets of multi-functional materials. However, droplets in inkjet printing pass through distinct jetting behaviors, including drop initiation, thinning, necking, pinching, and flying, and these are vulnerable to disturbances from vibration, material inhomogeneity, etc. Such issues make it challenging to yield a consistent printing process and a defect-free final product with the desired properties. Timely recognition of droplet behavior is therefore critical for inkjet printing quality assessment, and in-situ video monitoring of the printing process paves a way for such recognition. In this paper, a novel feature identification framework is presented to recognize the spatiotemporal features of in-situ monitoring videos for inkjet printing. Specifically, a spatiotemporal fusion network is used for droplet printing behavior classification. The categories are based on inkjet printability, which is related to both static features (ligament, satellite, and meniscus) and dynamic features (ligament thinning, droplet pinch-off, meniscus oscillation). For the recorded droplet jetting video data, two streams of networks, frames sampled from the video in the spatial domain (associated with static features) and optical flow in the temporal domain (associated with dynamic features), are fused in different ways to recognize the evolving droplet behavior. Experimental results show that the proposed fusion network can recognize droplet jetting behavior in the complex printing process and identify its printability with learned knowledge, which can ultimately enable real-time inkjet printing quality control and further provide guidance for designing optimal parameter settings for the inkjet printing process.
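     As a concrete illustration of the two-stream idea above, the sketch below builds a minimal late-fusion classifier in PyTorch: one small CNN ingests a sampled frame, another ingests stacked optical-flow fields, and their class scores are averaged. Layer sizes, the averaging fusion rule, and all names are illustrative assumptions, not the architecture used in the paper.

         import torch
         import torch.nn as nn

         class TwoStreamFusion(nn.Module):
             # Late fusion: the spatial stream sees an RGB frame (static
             # features), the temporal stream sees stacked optical flow
             # (dynamic features); the class logits are averaged.
             def __init__(self, n_classes, flow_channels=10):
                 super().__init__()
                 def stream(in_ch):
                     return nn.Sequential(
                         nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.MaxPool2d(2),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(32, n_classes))
                 self.spatial = stream(3)
                 self.temporal = stream(flow_channels)

             def forward(self, frame, flow):
                 return (self.spatial(frame) + self.temporal(flow)) / 2

         model = TwoStreamFusion(n_classes=5)
         scores = model(torch.randn(1, 3, 64, 64),    # one RGB frame
                        torch.randn(1, 10, 64, 64))   # 5 flow fields (x, y)
         print(scores.shape)  # torch.Size([1, 5])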
  3. This work presents a platform that integrates a customized MRI data acquisition scheme with reconstruction and three-dimensional (3D) visualization modules, along with a module for controlling an MRI-compatible robotic device, to facilitate the performance of robot-assisted, MRI-guided interventional procedures. Using dynamically acquired MRI data, the computational framework of the platform generates and updates a 3D model representing the area of the procedure (AoP). To image structures of interest in the AoP that do not reside inside the same or parallel slices, the MRI acquisition scheme was modified to collect a multi-slice set of slices that are intraoblique to each other, termed composing slices. Moreover, this approach interleaves the collection of the composing slices so that the same k-space segments of all slices are collected during similar time instances. This time matching of the k-space segments results in spatial matching of the imaged objects in the individual composing slices. The composing slices were used to generate and update the 3D model of the AoP. The MRI acquisition scheme was evaluated with computer simulations and experimental studies. Computer simulations demonstrated that k-space segmentation and time-matched interleaved acquisition of these segments provide spatial matching of the structures imaged with composing slices. Experimental studies used the platform to image the maneuvering of an MRI-compatible manipulator that carried tubing filled with MRI contrast agent. In vivo experimental studies imaging the abdomen and contrast-enhanced heart of free-breathing subjects without cardiac triggering demonstrated spatial matching of imaged anatomies in the composing planes. The described interventional MRI framework could assist in performing real-time MRI-guided interventions.
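     The interleaving described above can be made concrete as an acquisition schedule: rather than finishing all k-space segments of one composing slice before moving to the next slice, the loop order is inverted so the same segment index is collected for every slice in adjacent time instances. A minimal sketch with invented names, not the platform's actual pulse-sequence code:

         def interleaved_schedule(n_slices, n_segments):
             # Segment s of every composing slice is acquired back to
             # back before moving on to segment s + 1, so matching
             # k-space segments are time-matched across slices.
             return [(seg, sl) for seg in range(n_segments)
                               for sl in range(n_slices)]

         def sequential_schedule(n_slices, n_segments):
             # Conventional order: finish one slice, then the next.
             return [(seg, sl) for sl in range(n_slices)
                               for seg in range(n_segments)]

         # With 3 composing slices and 4 segments per slice, the
         # interleaved order starts (0,0), (0,1), (0,2), (1,0), ...
         print(interleaved_schedule(3, 4)[:6])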
  4. Unlike other predators that use vision as their primary sensory system, bats compute the three-dimensional (3D) position of flying insects from discrete echo snapshots, which raises questions about the strategies they employ to track and intercept erratically moving prey from interrupted sensory information. Here, we devised an ethologically inspired behavioral paradigm to directly test the hypothesis that echolocating bats build internal prediction models from dynamic acoustic stimuli to anticipate the future location of moving auditory targets. We quantified the direction of the bat’s head/sonar beam aim and echolocation call rate as it tracked a target that moved across its sonar field and applied mathematical models to differentiate between nonpredictive and predictive tracking behaviors. We discovered that big brown bats accumulate information across echo sequences to anticipate an auditory target’s future position. Further, when a moving target is hidden from view by an occluder during a portion of its trajectory, the bat continues to track its position using an internal model of the target’s motion path. Our findings also reveal that the bat increases sonar call rate when its prediction of target trajectory is violated by a sudden change in target velocity. This shows that the bat rapidly adapts its sonar behavior to update internal models of auditory target trajectories, which would enable tracking of evasive prey. Collectively, these results demonstrate that the echolocating big brown bat integrates acoustic snapshots over time to build prediction models of a moving auditory target’s trajectory and enable prey capture under conditions of uncertainty.
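     As a toy version of the predictive tracking tested above, the sketch below fits a constant-velocity model to a few discrete (time, position) echo snapshots and extrapolates the target's position at a future time, including moments when an occluder hides the target and no new snapshot arrives. It is a 1D least-squares stand-in for the internal models evaluated in the study; every name in it is ours.

         import numpy as np

         def predict_target(snapshots, t_future):
             # Fit position = x0 + v * t to the snapshots, then
             # extrapolate; with no new echoes (occlusion), the same
             # fitted model keeps supplying position estimates.
             times = np.array([t for t, _ in snapshots])
             positions = np.array([p for _, p in snapshots])
             v, x0 = np.polyfit(times, positions, 1)
             return x0 + v * t_future

         # Snapshots of a target drifting at about 0.5 m/s; estimate
         # its position at t = 0.8 s, while it is behind an occluder:
         obs = [(0.0, 1.00), (0.1, 1.05), (0.2, 1.11), (0.3, 1.16)]
         print(predict_target(obs, 0.8))  # ~1.43 m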
  5. In the next wave of swarm-based applications, unmanned aerial vehicles (UAVs) need to communicate with peer drones in any direction of a three-dimensional (3D) space. On a given drone and across drones, various antenna positions and orientations are possible. We know that, in free space, high levels of signal loss are expected if the transmitting and receiving antennas are cross polarized. However, increasing the reflective and scattering objects in the channel between a transmitter and receiver can cause the received polarization to become completely independent from the transmitted polarization, making the cross-polarization of antennas insignificant. Usually, these effects are studied in the context of cellular and terrestrial networks and have not been analyzed when those objects are the actual bodies of the communicating drones, which can take different relative directions or move at various elevations. In this work, we show that the body of the drone can affect the received power across various antenna orientations and positions and act as a local scatterer that increases channel depolarization, reducing the cross-polarization discrimination (XPD). To investigate these effects, we perform experiments staged in complexity, from the controlled environment of an anechoic chamber, with and without drone bodies, to in-field environments where drone-mounted antennas are in flight at various orientations and relative positions, with the following outcomes: (i) drone relative direction can significantly impact XPD values, (ii) elevation angle is a critical factor in 3D link performance, (iii) antenna spacing requirements are altered for co-located cross-polarized antennas, and (iv) cross-polarized antenna setups more than double spectral efficiency. Our results can serve as a guide for accurately simulating and modeling UAV networks and drone swarms.
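     Cross-polarization discrimination itself reduces to a power ratio: XPD = 10 log10(P_co / P_cross) dB, where P_co and P_cross are the received co- and cross-polarized powers; lower values mean stronger channel depolarization. A one-function sketch (names ours):

         import math

         def xpd_db(p_co_watts, p_cross_watts):
             # Ratio of co-polarized to cross-polarized received power,
             # in dB; scattering off the drone body lowers this value.
             return 10.0 * math.log10(p_co_watts / p_cross_watts)

         # 1 mW co-polarized vs 10 uW cross-polarized -> 20 dB XPD
         print(xpd_db(1e-3, 1e-5))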