Title: Maximally efficient prediction in the early fly visual system may support evasive flight maneuvers
The visual system must make predictions to compensate for inherent delays in its processing. Yet little is known, mechanistically, about how prediction aids natural behaviors. Here, we show that despite a 20–30 ms intrinsic processing delay, the vertical motion sensitive (VS) network of the blowfly achieves maximally efficient prediction. This prediction enables the fly to fine-tune its complex, yet brief, evasive flight maneuvers according to its initial ego-rotation at the time it detects a visual threat. Combining a rich database of behavioral recordings with detailed compartmental modeling of the VS network, we further show that the axonal gap junctions of the VS network are critical for optimal prediction. During evasive maneuvers, a VS subpopulation that directly innervates the neck motor center can convey predictive information about the fly's future ego-rotation, potentially crucial for ongoing flight control. These results suggest a novel sensory-motor pathway that links sensory prediction to behavior.
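The "maximally efficient prediction" claim can be made concrete as an information-theoretic ratio: of the predictive information available in the stimulus itself, I(past; future), how much does the delayed neural response carry about the future stimulus? Below is a minimal sketch of that ratio using a plug-in mutual-information estimate on discretized signals; the binning, the estimator, and all names are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip zero cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def prediction_efficiency(stimulus, response, delay):
    """Fraction of the stimulus's own predictive information,
    I(past; future), that the response at time t captures about the
    stimulus `delay` samples ahead."""
    past, future = stimulus[:-delay], stimulus[delay:]
    bound = mutual_information(past, future)       # what is predictable at all
    captured = mutual_information(response[:-delay], future)
    return captured / bound
```

In this formulation, an efficiency near 1 at a lag matching the 20–30 ms processing delay would correspond to maximally efficient prediction.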
Award ID(s):
1734030 1652617
NSF-PAR ID:
10249114
Editor(s):
Graham, Lyle J.
Date Published:
Journal Name:
PLOS Computational Biology
Volume:
17
Issue:
5
ISSN:
1553-7358
Page Range / eLocation ID:
e1008965
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Traditional models of motor control typically operate in the domain of continuous signals such as spike rates, forces, and kinematics. However, there is growing evidence that precise spike timing encodes significant information that coordinates and causally influences motor control. Some existing neural network models incorporate spike timing precision, but they neither predict motor spikes coordinated across multiple motor units nor capture sensory-driven modulation of agile locomotor control. In this paper, we propose a visual encoder and a model of a sensorimotor system, based on a recurrent neural network (RNN), that uses spike timing encoding during smooth-pursuit target tracking. We use this model to predict a nearly complete, spike-resolved motor program of a hawkmoth, which requires coordinated millisecond precision across 10 major flight motor units. Each motor unit innervates one muscle and uses both rate and timing encoding. Our model includes a motion detection mechanism inspired by the hawkmoth's compound eye, a convolutional encoder that compresses the sensory input, and a simple RNN that is sufficient to sequentially predict wingstroke-to-wingstroke modulation in millisecond-precise spike timings. The two-layer output architecture of the RNN separately predicts the occurrence and the timing of each spike in the motor program (see the sketch below). The dataset includes spikes recorded from all motor units during tethered flight in which the hawkmoth attends to a moving robotic flower, totaling roughly 7,000 wingstrokes from 16 trials on 5 hawkmoth subjects. Intra-trial and same-subject inter-trial predictions on the test data show that nearly every spike can be predicted within 2 ms of its measured time, comparable to the known spike timing precision of these motor units, while spike occurrence is predicted with about 90% accuracy. Overall, our model can predict the precise spike timing of a nearly complete motor program for hawkmoth flight with a precision comparable to that seen in agile flying insects. Such an encoding framework, which captures visually modulated precise spike timing codes and their coordination, can reveal how organisms process visual cues for agile movements, and it can drive the next generation of neuromorphic controllers for navigation in complex environments.
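As a concrete reading of the two-headed output described in item 1, here is a minimal PyTorch sketch: a GRU consumes encoded visual features wingstroke by wingstroke, one linear head emits a spike-occurrence probability, and another emits a within-wingstroke spike time for each motor unit. The layer sizes, the GRU cell, and the fixed per-unit spike budget are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpikeProgramRNN(nn.Module):
    """Sketch of a two-headed RNN: per wingstroke, predict whether each
    potential spike occurs and, if so, when (ms within the wingstroke)."""
    def __init__(self, n_features=64, hidden=128, n_units=10, max_spikes=5):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        n_out = n_units * max_spikes
        self.occurrence = nn.Linear(hidden, n_out)  # head 1: spike occurrence
        self.timing = nn.Linear(hidden, n_out)      # head 2: spike timing

    def forward(self, visual_features):
        # visual_features: (batch, n_wingstrokes, n_features), e.g. the output
        # of a convolutional encoder compressing the motion-detector signal
        h, _ = self.rnn(visual_features)
        p_spike = torch.sigmoid(self.occurrence(h))  # occurrence probabilities
        t_spike = self.timing(h)                     # predicted times in ms
        return p_spike, t_spike
```

Training such a model would naturally pair a binary cross-entropy loss on the occurrence head with a timing loss (e.g., mean squared error) masked to spikes that actually occurred.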
  2. Unlike other predators that use vision as their primary sensory system, bats compute the three-dimensional (3D) position of flying insects from discrete echo snapshots, which raises questions about the strategies they employ to track and intercept erratically moving prey from interrupted sensory information. Here, we devised an ethologically inspired behavioral paradigm to directly test the hypothesis that echolocating bats build internal prediction models from dynamic acoustic stimuli to anticipate the future location of moving auditory targets. We quantified the direction of the bat’s head/sonar beam aim and echolocation call rate as it tracked a target that moved across its sonar field and applied mathematical models to differentiate between nonpredictive and predictive tracking behaviors. We discovered that big brown bats accumulate information across echo sequences to anticipate an auditory target’s future position. Further, when a moving target is hidden from view by an occluder during a portion of its trajectory, the bat continues to track its position using an internal model of the target’s motion path. Our findings also reveal that the bat increases sonar call rate when its prediction of target trajectory is violated by a sudden change in target velocity. This shows that the bat rapidly adapts its sonar behavior to update internal models of auditory target trajectories, which would enable tracking of evasive prey. Collectively, these results demonstrate that the echolocating big brown bat integrates acoustic snapshots over time to build prediction models of a moving auditory target’s trajectory and enable prey capture under conditions of uncertainty.
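The model comparison in item 2 can be sketched as a choice between two observers: a nonpredictive tracker that aims at the target's delayed current position, and a predictive tracker that extrapolates the target's motion path forward from the last echo. Whichever better fits the measured head aim identifies the strategy, and only the predictive model can keep tracking through an occluder. The lag, the lookahead, and the least-squares criterion below are illustrative assumptions, not the authors' fitted models.

```python
import numpy as np

def nonpredictive_aim(target_az, lag):
    """Head aim follows where the target was `lag` samples ago.
    (np.roll wraps the array ends; fine for illustration only.)"""
    return np.roll(target_az, lag)

def predictive_aim(target_az, lag, lookahead, dt):
    """Head aim extrapolates `lookahead` seconds ahead of the delayed
    position, using the target's local angular velocity."""
    delayed = np.roll(target_az, lag)
    velocity = np.gradient(delayed, dt)
    return delayed + velocity * lookahead

def best_model(head_aim, target_az, lag, lookahead, dt):
    """Compare the two candidate models by RMSE against measured head aim."""
    def rmse(pred):
        return np.sqrt(np.mean((head_aim[lag:] - pred[lag:]) ** 2))
    errors = {"nonpredictive": rmse(nonpredictive_aim(target_az, lag)),
              "predictive": rmse(predictive_aim(target_az, lag, lookahead, dt))}
    return min(errors, key=errors.get), errors
```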
  3. Humans can operate a variety of modern tools, which are often associated with different visuomotor transformations. Studies investigating this ability have shown that separate motor memories can be acquired implicitly, when different sensorimotor transformations are associated with distinct (intended) postures, or explicitly, when abstract contextual cues are leveraged by aiming strategies. It remains unclear how different transformations are remembered implicitly when postures are similar. We investigated whether features of planning to manipulate a visual tool, such as its visual identity or the environmental effect intended by its use (i.e., its action effect), would enable implicit learning of opposing visuomotor rotations. Results show that neither contextual cue led to distinct implicit motor memories; instead, cues affected implicit adaptation only indirectly, through generalization around explicit strategies. In contrast, a control experiment in which participants practiced opposing transformations with different hands did result in contextualized aftereffects that differed between hands across generalization targets. It appears that different (intended) body states are necessary for separate aftereffects to emerge, suggesting that the role of sensory prediction error-based adaptation may be limited to recalibrating a body model, whereas establishing separate tool models may proceed along a different route.
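The implicit-learning question in item 3 is commonly formalized with state-space models of adaptation: one implicit state per contextual cue, each updated from sensory prediction error on the trials where its cue is present. If the cues fail to separate memories, the two states collapse into one and the opposing rotations cancel, which is the null result reported above. The retention and learning-rate values below are assumed, and this is a generic sketch of that model class, not the authors' analysis.

```python
import numpy as np

def simulate_contextual_adaptation(rotations, cues, A=0.99, B=0.2):
    """rotations: imposed visuomotor rotation (deg) on each trial;
    cues: context label (0 or 1) on each trial.
    A: retention factor; B: learning rate on sensory prediction error."""
    x = np.zeros(2)                       # one implicit state per context
    adaptation = np.zeros(len(rotations))
    for t, (rot, c) in enumerate(zip(rotations, cues)):
        adaptation[t] = x[c]              # what the hand expresses this trial
        error = -rot - x[c]               # remaining compensation needed
        x[c] = A * x[c] + B * error       # retain, then learn from error
    return adaptation

# Example: opposing rotations, each consistently paired with one cue.
adaptation = simulate_contextual_adaptation(
    rotations=np.tile([30.0, -30.0], 100), cues=np.tile([0, 1], 100))
```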
  4. The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review, as well as forward citations, to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors, with expertise in sign language processing and a variety of research methods, reviewed the results; disagreements were resolved through extensive discussion. The final review included 7 records, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli, and most focused on semantic prediction. How one's sign language competence (L1 vs. L2 vs. bimodal bilingual) interacts with PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. Evidence for PP in sign languages remains scarce, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing. Systematic Review Registration: [ https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021238911 ], identifier [CRD42021238911].
  5. Head movement relative to the stationary environment gives rise to congruent vestibular and visual optic-flow signals. The resulting perception of a stationary visual environment, referred to herein as stationarity perception, depends on mechanisms that compare visual and vestibular signals to evaluate their congruence. Here we investigate the functioning of these mechanisms and their dependence on fixation behavior, as well as on the active versus passive nature of the head movement. Stationarity perception was measured by modifying the gain on visual motion relative to head movement on individual trials and asking subjects to report whether the gain was too low or too high. Fitting a psychometric function to the data yields two key parameters of performance: the mean is a measure of accuracy, and the standard deviation is a measure of precision (see the sketch following this item). Experiments were conducted using a head-mounted display, with fixation behavior monitored by an embedded eye tracker. During active conditions, subjects rotated their heads in yaw at ∼15 deg/s over ∼1 s. Each subject's movements were recorded and played back via a rotating chair during the passive condition. During head-fixed and scene-fixed fixation, the fixation target moved with the head or with the scene, respectively. Both precision and accuracy were better during active than passive head movement, likely due to increased precision of the head movement estimate arising from motor prediction and neck proprioception. Performance was also better during scene-fixed than head-fixed fixation, perhaps due to decreased velocity of retinal image motion and increased precision of the retinal image motion estimate. These results reveal how the nature of head and eye movements mediates the encoding, processing, and comparison of relevant sensory and motor signals.
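The psychometric analysis in item 5 amounts to fitting a cumulative Gaussian to the proportion of "gain too high" reports as a function of the visual-motion gain: the fitted mean gives accuracy (bias of perceived stationarity away from gain 1.0), and the fitted standard deviation gives precision. A minimal sketch, with illustrative starting values and bounds:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(gain, mu, sigma):
    """P('gain too high') modeled as a cumulative Gaussian over gain."""
    return norm.cdf(gain, loc=mu, scale=sigma)

def fit_stationarity(gains, p_too_high):
    """gains: tested visual-motion gains; p_too_high: proportion of
    'too high' reports at each gain. Returns (accuracy, precision)."""
    (mu, sigma), _ = curve_fit(psychometric, gains, p_too_high,
                               p0=[1.0, 0.2],
                               bounds=([0.0, 1e-3], [2.0, 2.0]))
    return mu, sigma
```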