Unlike other predators that use vision as their primary sensory system, bats compute the three-dimensional (3D) position of flying insects from discrete echo snapshots, which raises questions about the strategies they employ to track and intercept erratically moving prey from interrupted sensory information. Here, we devised an ethologically inspired behavioral paradigm to directly test the hypothesis that echolocating bats build internal prediction models from dynamic acoustic stimuli to anticipate the future location of moving auditory targets. We quantified the direction of the bat’s head/sonar beam aim and echolocation call rate as it tracked a target that moved across its sonar field and applied mathematical models to differentiate between nonpredictive and predictive tracking behaviors. We discovered that big brown bats accumulate information across echo sequences to anticipate an auditory target’s future position. Further, when a moving target is hidden from view by an occluder during a portion of its trajectory, the bat continues to track its position using an internal model of the target’s motion path. Our findings also reveal that the bat increases sonar call rate when its prediction of target trajectory is violated by a sudden change in target velocity. This shows that the bat rapidly adapts its sonar behavior to update internal models of auditory target trajectories, which would enable tracking of evasive prey. Collectively, these results demonstrate that the echolocating big brown bat integrates acoustic snapshots over time to build prediction models of a moving auditory target’s trajectory and enable prey capture under conditions of uncertainty.
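The abstract does not give the model equations, so the following is a minimal, purely illustrative Python sketch of the distinction it draws: a nonpredictive account, in which head aim follows the target's last sensed position, versus a predictive account, in which head aim leads the target using a velocity estimate. All function names, the lag/lead values, and the simulated trajectory are hypothetical and are not taken from the study; fitting the lag (and lead) per trial and comparing residual errors is one simple way to ask which account better explains measured head aim.

```python
import numpy as np

def nonpredictive_aim(target_az, times, lag):
    """Head aim follows the target azimuth sensed `lag` seconds earlier."""
    return np.interp(times - lag, times, target_az)

def predictive_aim(target_az, times, lag, lead):
    """Head aim extrapolates the target forward by `lead` seconds,
    using a velocity estimate from past samples."""
    past_az = np.interp(times - lag, times, target_az)
    past_vel = np.interp(times - lag, times, np.gradient(target_az, times))
    return past_az + past_vel * lead

def rms_error(model_aim, head_az):
    """Root-mean-square difference between modeled and measured head aim."""
    return np.sqrt(np.mean((model_aim - head_az) ** 2))

# Hypothetical data: a sinusoidally moving target and noisy measured head aim.
rng = np.random.default_rng(0)
times = np.linspace(0.0, 2.0, 200)                      # seconds
target_az = 30 * np.sin(2 * np.pi * 0.5 * times)        # target azimuth (degrees)
head_az = predictive_aim(target_az, times, lag=0.1, lead=0.15) + rng.normal(0, 1, times.size)

err_np = rms_error(nonpredictive_aim(target_az, times, lag=0.1), head_az)
err_pr = rms_error(predictive_aim(target_az, times, lag=0.1, lead=0.15), head_az)
print(f"nonpredictive RMS error: {err_np:.2f} deg; predictive RMS error: {err_pr:.2f} deg")
```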
Visual cues enhance obstacle avoidance in echolocating bats
ABSTRACT: Studies have shown that bats are capable of using visual information for a variety of purposes, including navigation and foraging, but the relative contributions of the visual and auditory modalities to obstacle avoidance have yet to be fully investigated, particularly in laryngeal echolocating bats. A first step requires characterizing behavioral responses to different combinations of sensory cues. Here, we quantified the behavioral responses of the insectivorous big brown bat, Eptesicus fuscus, in an obstacle avoidance task offering different combinations of auditory and visual cues. To do so, we utilized a new method that eliminates the confounds typically associated with testing bat vision and precludes auditory cues. We found that the presence of visual and auditory cues together enhances bats' avoidance response to obstacles compared with cues requiring either vision or audition alone. Flight and echolocation behaviors, such as speed and call rate, did not vary significantly across obstacle conditions, and thus are not informative indicators of a bat's response to obstacle stimulus type. These findings advance our understanding of the relative importance of the visual and auditory modalities in guiding obstacle avoidance behavior.
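The abstract does not report the underlying trial counts; purely as an illustration of comparing avoidance responses across cue conditions, here is a small Python sketch using invented counts and a chi-square test of independence. The condition labels and numbers are hypothetical and do not come from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical trial counts per cue condition: [avoided, collided].
counts = {
    "vision + audition": [46, 4],
    "audition only":     [38, 12],
    "vision only":       [35, 15],
}

table = np.array(list(counts.values()))
chi2, p, dof, _ = chi2_contingency(table)

for condition, (avoided, collided) in counts.items():
    print(f"{condition:18s} avoidance rate = {avoided / (avoided + collided):.2f}")
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```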
- Award ID(s): 1734744
- PAR ID: 10280819
- Date Published:
- Journal Name: Journal of Experimental Biology
- Volume: 224
- Issue: 9
- ISSN: 0022-0949
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract: Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between the auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses while human participants performed a challenging primary visual task, imposing low or high cognitive load, and listened to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to the stories while ignoring visual stimuli. Behaviorally, the high-load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence that the high-load dual-task condition was more demanding. Compared with the auditory single-task condition, the dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that the behavioral effects of bimodal divided attention on continuous speech processing arise not from impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels. (A minimal sketch of a lagged encoding model of this kind follows this list.)
-
In virtual environments, many social cues (e.g. gestures, eye contact, and proximity) are currently conveyed visually or auditorily. Indicating social cues in other modalities, such as haptic cues to complement visual or audio signals, will help to increase VR's accessibility and take advantage of the platform's inherent flexibility. However, accessibility implementations in social VR are often siloed by single sensory modalities. To broaden the accessibility of social virtual reality beyond replacing one sensory modality with another, we identified a subset of social cues and built tools to enhance them, allowing users to switch between modalities to choose how these cues are represented. Because consumer VR uses primarily visual and auditory stimuli, we started with social cues that were not accessible for blind and low vision (BLV) and d/Deaf and hard of hearing (DHH) people, and expanded how they could be represented to accommodate a number of needs. We describe how these tools were designed around the principle of social cue switching, and a standard distribution method to amplify reach.
-
N100, the negative peak of the electrical response occurring around 100 ms, is present in diverse functional paradigms, including auditory, visual, somatic, behavioral and cognitive tasks. We hypothesized that the presence of the N100 across different paradigms may be indicative of a more general property of the cerebral cortex, regardless of functional or anatomic specificity. To test this hypothesis, we combined transcranial magnetic stimulation (TMS) and electroencephalography (EEG) to measure cortical excitability across cortical regions without relying on specific sensory, cognitive or behavioral modalities. The five stimulated regions included the left prefrontal, left motor and left primary auditory cortices, the vertex and the posterior cerebellum, with stimulation performed at supra- and subthreshold intensities. TMS at all five locations produced EEG responses containing an N100 that peaked at the vertex. The amplitudes of the N100s elicited from these five diverse cortical origins were not statistically significantly different (all uncorrected p > 0.05). No other EEG response component was found to have this global property of the N100. Our findings suggest that anatomy- and modality-specific interpretations of the N100 should be carefully evaluated, and that the N100 elicited by TMS may be used as a biomarker for evaluating local versus general cortical properties across the brain. (A minimal sketch of an N100 peak-amplitude comparison follows this list.)
-
Abstract: Information processing under conditions of uncertainty requires the involvement of cognitive control. Despite behavioral evidence of the supramodal function (i.e., independent of sensory modality) of cognitive control, the underlying neural mechanism needs to be directly tested. This study used functional magnetic resonance imaging together with visual and auditory perceptual decision-making tasks to examine brain activation as a function of uncertainty in the two stimulus modalities. The results revealed a monotonic increase in activation in the cortical regions of the cognitive control network (CCN) as a function of uncertainty in both the visual and auditory modalities. The intrinsic connectivity between the CCN and sensory regions was similar for the visual and auditory modalities. Furthermore, multivariate patterns of activation in the CCN predicted the level of uncertainty within and across stimulus modalities. These findings suggest that the CCN implements cognitive control by processing uncertainty as abstract information independent of stimulus modality. (A minimal sketch of a cross-modal decoding analysis follows this list.)
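For the divided-attention EEG abstract above, which fits multivariate temporal response function encoding models, here is a minimal, generic sketch of a time-lagged (mTRF-style) encoding model in Python with simulated data. The sampling rate, lag range, regularization strength, and single-feature stimulus are assumptions made for illustration and are not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(stim, lags):
    """Build a time-lagged design matrix: one column per (feature, lag)."""
    n_t, n_f = stim.shape
    X = np.zeros((n_t, n_f * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0    # zero out wrapped-around samples
        elif lag < 0:
            shifted[lag:] = 0
        X[:, j * n_f:(j + 1) * n_f] = shifted
    return X

# Hypothetical data: one stimulus feature (e.g., an acoustic envelope) and one EEG channel.
fs = 64                                        # Hz, downsampled rate
rng = np.random.default_rng(0)
stim = rng.standard_normal((fs * 60, 1))       # 60 s of a single feature
eeg = np.convolve(stim[:, 0], np.hanning(16), mode="same") + rng.standard_normal(fs * 60)

lags = list(range(0, int(0.4 * fs)))           # 0-400 ms encoding lags
X = lagged_design(stim, lags)

# Train/test split in time, then fit a ridge-regularized encoding model.
split = fs * 45
model = Ridge(alpha=1.0).fit(X[:split], eeg[:split])
pred = model.predict(X[split:])
r = np.corrcoef(pred, eeg[split:])[0, 1]
print(f"prediction correlation on held-out data: r = {r:.2f}")
```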
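For the TMS-EEG abstract above, this is a rough sketch of how N100 amplitudes could be extracted and compared across stimulation sites. The search window, site labels, subject count, and simulated waveforms are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical TMS-evoked averages at the vertex electrode: site -> (n_subjects, n_times).
fs, t0 = 1000, 0.2                       # sampling rate (Hz), stimulus onset at 0.2 s
times = np.arange(1000) / fs - t0        # -0.2 to 0.8 s relative to the TMS pulse
sites = ["prefrontal", "motor", "auditory", "vertex", "cerebellum"]
rng = np.random.default_rng(1)
evoked = {s: -5 * np.exp(-((times - 0.1) / 0.03) ** 2) + rng.normal(0, 1, (12, times.size))
          for s in sites}

def n100_amplitude(data, times, window=(0.08, 0.14)):
    """Most negative value per subject within the N100 search window."""
    mask = (times >= window[0]) & (times <= window[1])
    return data[:, mask].min(axis=1)

amps = [n100_amplitude(evoked[s], times) for s in sites]
F, p = f_oneway(*amps)
print(f"one-way ANOVA across stimulation sites: F = {F:.2f}, p = {p:.3f}")
```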
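For the cognitive-control fMRI abstract above, this sketch illustrates cross-modal decoding of uncertainty level from multivoxel patterns: a classifier trained on trials from one modality is tested on the other. The region, trial and voxel counts, and simulated data are invented for illustration and do not reflect the study's analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical multivoxel patterns: rows = trials, columns = voxels;
# labels = uncertainty level (1-3), shared across modalities.
rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 300
levels = np.repeat([1, 2, 3], n_trials // 3)
coding_pattern = rng.standard_normal(n_voxels) * 0.5   # shared uncertainty code

def simulate(modality_offset):
    """Simulate one modality: shared uncertainty signal + modality baseline + noise."""
    signal = np.outer(levels, coding_pattern)
    noise = rng.standard_normal((n_trials, n_voxels))
    return signal + modality_offset + noise

X_visual = simulate(0.0)
X_auditory = simulate(0.2)   # same uncertainty code, different modality baseline

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
clf.fit(X_visual, levels)                       # train within the visual modality
acc = clf.score(X_auditory, levels)             # test across modality (auditory)
print(f"cross-modal decoding accuracy: {acc:.2f} (chance = {1/3:.2f})")
```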