Acoustic indices are an efficient method for monitoring dense aggregations of vocal animals, but they require an understanding of the acoustic ecology of the species under examination. Present understanding of avian behavior and vocal development derives primarily from research on songbirds (Passeriformes). However, because behavior and environment can differ greatly among bird orders, passerine birdsong may be an insufficient model for the vocal ontogeny of non-passerine birds. Like many colonially nesting seabirds, the Adélie penguin (Pygoscelis adeliae) is adapted to loud and congested environments that offer limited cues for identifying kin within aggregations of conspecifics. In addition to physical and geographical cues, adult P. adeliae rely on vocal modulation to identify their offspring. Numerous studies have examined mutual vocal modulation in mature P. adeliae, but little research has explored the vocal repertoire of chicks and how their vocalizations change over time. Using the deep learning-based system DeepSqueak, this study characterized the vocal ontogeny of P. adeliae chicks in the West Antarctic Peninsula to aid in autonomously tracking their age. Understanding the phenological communication patterns of vocally dependent seabirds can help measure the impact of climate change on this indicator species through non-invasive methods.
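The abstract names DeepSqueak, a MATLAB-based deep-learning call detector; the sketch below is not DeepSqueak's API but a minimal Python illustration of the same detect-then-measure workflow it automates (spectrogram, in-band energy threshold, grouping of active frames into call spans). The file name, frequency band, and thresholds are hypothetical assumptions.

```python
# Minimal sketch of spectrogram-based call detection, assuming a mono or
# stereo WAV recording of a colony. This is NOT DeepSqueak itself, only an
# illustration of the general pipeline its deep-learning detector replaces.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def detect_calls(wav_path, band=(1000, 6000), energy_factor=4.0, min_dur=0.05):
    """Return (start_s, end_s) spans where in-band energy exceeds baseline."""
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                          # mix stereo down to mono
        audio = audio.mean(axis=1)
    f, t, sxx = spectrogram(audio, fs=rate, nperseg=1024, noverlap=512)
    in_band = (f >= band[0]) & (f <= band[1])   # assumed chick-call band
    energy = sxx[in_band].sum(axis=0)
    active = energy > energy_factor * np.median(energy)
    spans, start = [], None
    for time, on in zip(t, active):             # group active frames into spans
        if on and start is None:
            start = time
        elif not on and start is not None:
            if time - start >= min_dur:
                spans.append((start, time))
            start = None
    if start is not None and t[-1] - start >= min_dur:
        spans.append((start, t[-1]))
    return spans

calls = detect_calls("adelie_chick_colony.wav")  # hypothetical recording
print(f"detected {len(calls)} candidate calls")
```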
Chomsky and Beyond
Our culture may prefer vocal communication, but little is lost in communicating through overt bodily movements, as in sign languages. Indeed, a gestural system may have preceded a vocal one in evolution.
- Award ID(s): 1733984
- PAR ID: 10279299
- Date Published:
- Journal Name: Inference: International Review of Science
- Volume: 6
- Issue: 2
- ISSN: 2576-4403
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: For many animal species, vocal communication is a critical social behavior and often a necessary component of reproductive success. Vocalizations are also often demanding motor acts. To ask whether a specific molecular toolkit might be required for vocalization, we used RNA-sequencing to investigate neural gene expression underlying the performance of an extreme vocal behavior, the courtship hum of the plainfin midshipman fish (Porichthys notatus). Single hums can last up to 2 h and may be repeated throughout an evening of courtship activity. We asked whether vocal behavioral states are associated with specific gene expression signatures in key brain regions that regulate vocalization by comparing transcript expression levels in humming versus non-humming males. We find that the circadian-related genes period3 and Clock are significantly upregulated in the vocal motor nucleus and the preoptic area-anterior hypothalamus, respectively, in humming compared with non-humming males, indicating that internal circadian clocks may differ between these divergent behavioral states. In addition, we identify suites of differentially expressed genes related to synaptic transmission, ion channels and transport, neuropeptide and hormone signaling, and metabolism and antioxidant activity that together may support the neural and energetic demands of humming behavior. Comparisons of transcript expression across regions highlight regional differences in brain gene expression while also showing coordinated gene regulation in the vocal motor circuit in preparation for courtship behavior. These results underscore the role of differential gene expression in shifts between behavioral states, in this case the neuroendocrine, motor, and circadian control of courtship vocalization.
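As a hedged illustration of the humming versus non-humming comparison, the sketch below runs a per-gene test with false-discovery-rate correction on a synthetic count matrix. The study's actual RNA-seq pipeline is not reproduced here; the group sizes, counts, and all gene labels other than period3 and Clock are invented for the example.

```python
# Simplified stand-in for a DESeq2/edgeR-style differential expression
# analysis: per-gene Mann-Whitney test, Benjamini-Hochberg correction.
# All data below are synthetic; only the two named genes come from the text.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
genes = ["period3", "Clock", "gene_c", "gene_d"]          # last two invented
humming = rng.poisson(lam=[120, 90, 50, 50], size=(6, 4))      # 6 humming males
non_humming = rng.poisson(lam=[60, 45, 50, 50], size=(6, 4))   # 6 non-humming

pvals = [mannwhitneyu(humming[:, j], non_humming[:, j]).pvalue
         for j in range(len(genes))]
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for g, q, sig in zip(genes, qvals, reject):
    print(f"{g}: FDR q = {q:.3g}{' *' if sig else ''}")
```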
The expression of human emotion is integral to social interaction, and in virtual reality it is increasingly common to develop virtual avatars that attempt to convey emotions by mimicking these visual and aural cues, i.e., facial and vocal expressions. However, errors in (or the absence of) facial tracking can result in the rendering of incorrect facial expressions on these virtual avatars. For example, a virtual avatar may speak with a happy or unhappy vocal inflection while its facial expression remains otherwise neutral. When the avatar's facial and vocal expressions conflict, users may incorrectly interpret the avatar's emotion, which can have unintended consequences for social influence or for the outcome of the interaction. In this paper, we present a human-subjects study (N = 22) aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically, we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigated three levels of head scale (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects user interpretation of the conveyed emotion. We find significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scale. Evidence from our results suggests that facial expressions have a stronger impact than vocal expressions. Additionally, as the difference between the two expressions increases, the multimodal expression becomes less predictable. For example, for a happy-looking and happy-sounding multimodal expression, we expected and observed high happiness ratings and high trust; if one of the two expressions changes, the mismatch makes the expression less predictable. We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.
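A minimal sketch of the study's 3 × 3 × 3 design (facial valence × vocal valence × head scale, 27 cells), with an illustrative numeric valence coding and a simple face-voice mismatch measure; both codings are assumptions for the example, not the authors' analysis.

```python
# Enumerate the condition grid described in the abstract. The -1/0/+1
# valence coding and the |face - voice| mismatch score are assumed here
# only to make the "larger conflict, less predictable" idea concrete.
from itertools import product

VALENCE = {"unhappy": -1, "neutral": 0, "happy": 1}
HEAD_SCALES = ["down-scaled", "accurate", "up-scaled"]

conditions = [
    {"face": face, "voice": voice, "head": scale,
     "mismatch": abs(VALENCE[face] - VALENCE[voice])}  # 0 = congruent, 2 = max conflict
    for face, voice, scale in product(VALENCE, VALENCE, HEAD_SCALES)
]

print(len(conditions))                                     # 27 cells
print(sum(c["mismatch"] == 2 for c in conditions))         # 6 fully conflicting cells
```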
Vocal production learning ("vocal learning") is a convergently evolved trait in vertebrates. To identify brain genomic elements associated with mammalian vocal learning, we integrated genomic, anatomical, and neurophysiological data from the Egyptian fruit bat (Rousettus aegyptiacus) with analyses of the genomes of 215 placental mammals. First, we identified a set of proteins evolving more slowly in vocal learners. Then, we discovered a vocal motor cortical region in the Egyptian fruit bat, an emergent vocal learner, and leveraged that knowledge to identify active cis-regulatory elements in the motor cortex of vocal learners. Machine learning methods applied to motor cortex open chromatin revealed 50 enhancers robustly associated with vocal learning whose activity tended to be lower in vocal learners. Our research implicates convergent losses of motor cortex regulatory elements in the evolution of mammalian vocal learning.
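The sketch below illustrates, under heavy assumptions, the general idea of ranking candidate regulatory elements by how well their open-chromatin activity separates vocal learners from non-learners, here with a random forest on synthetic data. The paper's actual machine-learning pipeline, features, and species labels are not reproduced; only the counts of 215 species and 50 top elements echo the text.

```python
# Synthetic illustration: classify species by open-chromatin activity at
# candidate elements, then rank elements by feature importance. Not the
# authors' method; matrix, labels, and effect sizes are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_species, n_elements = 215, 500
activity = rng.random((n_species, n_elements))   # open-chromatin signal per element
is_learner = rng.random(n_species) < 0.1         # ~10% vocal learners (synthetic)
activity[is_learner, :50] -= 0.2                 # lower activity in learners,
                                                 # echoing the reported direction

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(activity, is_learner)
top = np.argsort(clf.feature_importances_)[::-1][:50]   # 50 top-ranked elements
print("top-ranked candidate enhancers:", sorted(top)[:10])
```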
Jennions, Michael D (Ed.) Abstract: Communication signals in both human and non-human animals are often interrupted in nature. One advantage of multimodal cues is to maintain the salience of interrupted signals. We studied a frog whose call can naturally contain silent gaps. Using video/audio playbacks, we presented females with interrupted mating calls, with or without a simultaneous dynamic (i.e., inflating and deflating) vocal sac, and tested whether multisensory cues (noise and/or a dynamic vocal sac) inserted into the gap could compensate for an interrupted call. We found that neither inserting white noise into the silent gap of an interrupted call nor displaying the dynamic vocal sac in that same gap restored the attractiveness of the call to that of a complete call. Simultaneously presenting a dynamic vocal sac along with noise in the gap, however, compensated for the interrupted call, making it as attractive as a complete call. Our results demonstrate that the dynamic vocal sac compensates for noise interference. This novel multisensory integration suggests that multimodal cues can provide insurance against imperfect sender coding in a noisy environment, and the communication benefits to the receiver from multisensory integration may be an important selective force favoring the evolution of multimodal signals.
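As a sketch of how such playback stimuli could be assembled, the code below silences a middle segment of a call and optionally fills the gap with white noise; the dynamic vocal sac is a separate visual channel not modeled here. The sample rate, gap placement, and the pure tone standing in for a real call are assumptions.

```python
# Build interrupted-call stimuli: silence a middle segment of a recorded
# call, or fill that gap with white noise matched to the call's amplitude.
import numpy as np

RATE = 44100  # assumed sample rate (Hz)

def make_stimulus(call, gap_start_s, gap_end_s, fill="silence"):
    """Return a copy of `call` with [gap_start, gap_end) silenced or noise-filled."""
    out = call.astype(float).copy()
    i, j = int(gap_start_s * RATE), int(gap_end_s * RATE)
    if fill == "silence":
        out[i:j] = 0.0
    elif fill == "noise":
        out[i:j] = np.random.default_rng(0).normal(0.0, out.std(), j - i)
    return out

t = np.arange(0, 1.0, 1 / RATE)
call = 0.5 * np.sin(2 * np.pi * 800 * t)        # 1 s tone standing in for a real call
interrupted = make_stimulus(call, 0.4, 0.6)                 # silent gap
noise_filled = make_stimulus(call, 0.4, 0.6, fill="noise")  # noise in the gap
```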