Title: Multisensory integration facilitates perceptual restoration of an interrupted call in a species of frog
Abstract
Communication signals of both human and non-human animals are often interrupted in nature. One advantage of multimodal cues is that they can maintain the salience of interrupted signals. We studied a frog whose calls naturally contain silent gaps. Using video/audio playbacks, we presented females with interrupted mating calls, with or without a simultaneous dynamic (i.e., inflating and deflating) vocal sac, and tested whether multisensory cues (noise and/or a dynamic vocal sac) inserted into the gap can compensate for an interrupted call. We found that neither inserting white noise into the silent gap of an interrupted call nor displaying the dynamic vocal sac in that same gap restored the attractiveness of the call to the level of a complete call. Simultaneously presenting a dynamic vocal sac along with noise in the gap, however, compensated for the interrupted call, making it as attractive as a complete call. Our results demonstrate that the dynamic vocal sac compensates for noise interference. Such novel multisensory integration suggests that multimodal cues can provide insurance against imperfect sender coding in a noisy environment, and that the communication benefits to the receiver from multisensory integration may be an important selective force favoring multimodal signal evolution.
Award ID(s): 1914652
NSF-PAR ID: 10419445
Author(s) / Creator(s):
Editor(s): Jennions, Michael D
Date Published:
Journal Name: Behavioral Ecology
Volume: 33
Issue: 4
ISSN: 1045-2249
Page Range / eLocation ID: 876 to 883
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Candolin, Ulrika (Ed.)
    Abstract: Females of many species choose mates using multiple sensory modalities. In dense aggregations of animals communicating via multiple sensory modalities, however, multimodal noise may arise, and some evidence suggests that multimodal signals do not always improve receiver decision-making: when sensory systems process input from multimodal signal sources, the added noise can complicate decision-making by increasing the demands on cognitive integration. We tested female túngara frog, Physalaemus (=Engystomops) pustulosus, responses to male mating signals in noise from multiple sensory modalities (acoustic and visual). Noise treatments were partitioned into three categories: acoustic, visual, and multimodal. We used natural calls from conspecifics and heterospecifics as acoustic noise. Robotic frogs were employed as either visual signal components (vocal sac inflation synchronous with the call) or visual noise (vocal sac inflation asynchronous with the call). Females expressed a preference for the typically more attractive call in the presence of unimodal noise. During multimodal signal and noise treatments (robotic frogs employed with background noise), however, females failed to express a preference for the typically attractive call in the presence of conspecific chorus noise. We found that social context and temporal synchrony of multimodal signaling components are important for multimodal communication. Our results demonstrate that multimodal signals can increase the complexity of the sensory scene and reduce the efficacy of female decision making.
  2.
    Communication systems often include a variety of components, including those that span modalities, which may facilitate detection and decision-making. For example, female túngara frogs and fringe-lipped bats generally rely on acoustic mating signals to find male túngara frogs in a mating or foraging context, respectively. However, two additional cues (vocal sac inflation and water ripples) can enhance detection and choice behavior. To date, we do not know the natural variation and covariation of these three components. To address this, we made detailed recordings of calling males, including call amplitude, vocal sac volume, and water ripple height, in 54 frogs (2430 calls). We found that all three measures were correlated, with the strongest association between vocal sac volume and call amplitude. We also found that multimodal models predicted the mass of calling males better than unimodal models. These results demonstrate how multimodal components of a communication system relate to each other and provide an important foundation for future studies of how receivers integrate and compare complex displays.
  3. Humans convey their intentions through the use of both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on nonverbal context, such as vocal patterns and facial expressions. As a result, when modeling human language, it is essential to consider not only the literal meaning of the words but also the nonverbal contexts in which these words appear. To better model human language, we first model expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, we seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN), which models the fine-grained structure of nonverbal subword sequences and dynamically shifts word representations based on nonverbal cues. Our proposed model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns regarding multimodal variations of word representations.
  4. Observing how infants and mothers coordinate their behaviors can highlight meaningful patterns in early communication and infant development. While dyads often differ in the modalities they use to communicate, especially in the first year of life, it remains unclear how to capture coordination across multiple types of behaviors using existing computational models of interpersonal synchrony. This paper explores Dynamic Mode Decomposition with control (DMDc) as a method of integrating multiple signals from each communicating partner into a model of multimodal behavioral coordination. We used an existing video dataset to track the head pose, arm pose, and vocal fundamental frequency of infants and mothers during the Face-to-Face Still-Face (FFSF) procedure, a validated 3-stage interaction paradigm. For each recorded interaction, we fit both unimodal and multimodal DMDc models to the extracted pose data. The resulting dynamic characteristics of the models were analyzed to evaluate trends in individual behaviors and dyadic processes across infant age and stages of the interactions. Results demonstrate that observed trends in interaction dynamics across stages of the FFSF protocol were stronger and more significant when models incorporated both head and arm pose data, rather than a single behavior modality. Model output showed significant trends across age, identifying changes in infant movement and in the relationship between infant and mother behaviors. Models that included mothers' audio data demonstrated similar results to those evaluated with pose data, confirming that DMDc can leverage different sets of behavioral signals from each interacting partner. Taken together, our results demonstrate the potential of DMDc toward integrating multiple behavioral signals into the measurement of multimodal interpersonal coordination. (A minimal illustrative sketch of a DMDc fit appears after the last record below.)
  5. Abstract

    Studies of acoustic communication often focus on the categories and units of vocalizations, but subtle variation also occurs in how these signals are uttered. In human speech, it is not only phonemes and words that carry information but also the timbre, intonation, and stress with which speech sounds are delivered (often referred to as "paralinguistic content"). In non-human animals, variation across utterances of vocal signals also carries behaviorally relevant information across taxa. However, the discriminability of these cues has rarely been tested in a psychophysical paradigm. Here, we focus on acoustic communication in the zebra finch (Taeniopygia guttata), a songbird species in which the male produces a single stereotyped motif repeatedly in song bouts. These motif renditions, like the song repetitions of many birds, sound very similar to the casual human listener. In this study, we show that zebra finches can easily discriminate between renditions, even at the level of single song syllables, much as humans can discriminate renditions of speech sounds. These results support the notion that sensitivity to fine acoustic details may be a primary channel of information in zebra finch song, as well as a shared, foundational property of vocal communication systems across species.

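The DMDc approach summarized in record 4 fits a linear model x_{k+1} ≈ A x_k + B u_k, where x holds one partner's behavioral signals (e.g., infant head and arm pose features) and u holds the other partner's signals as a control input. The sketch below is a minimal Python/numpy illustration of such a fit via least squares, not the authors' code; the function name, variable names, and toy data are assumptions for illustration only.

import numpy as np

def fit_dmdc(X, U):
    # Fit x_{k+1} ~ A x_k + B u_k by least squares (a basic DMDc fit).
    # X: (n_states, T) state snapshots, e.g., infant head/arm pose features.
    # U: (n_inputs, T) input snapshots, e.g., the mother's behavior features.
    X1, X2 = X[:, :-1], X[:, 1:]        # consecutive snapshot pairs
    Omega = np.vstack([X1, U[:, :-1]])  # stacked state + input data
    G = X2 @ np.linalg.pinv(Omega)      # [A B] recovered via pseudoinverse
    n = X.shape[0]
    return G[:, :n], G[:, n:]

# Toy usage with random data standing in for extracted pose/audio features.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 200))   # hypothetical infant pose features over time
U = rng.standard_normal((3, 200))   # hypothetical maternal behavior features
A, B = fit_dmdc(X, U)
print(np.abs(np.linalg.eigvals(A)))  # eigenvalues of A summarize interaction dynamics

Comparing models fit with unimodal versus multimodal state vectors, and with or without the partner's input U, is one plausible way to set up the kind of comparison described in record 4.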