Many explosive astrophysical events, like magnetars’ bursts and flares, are magnetically driven. We consider the dynamics of such magnetic explosions—relativistic expansion of highly magnetized and highly magnetically overpressurized clouds. The corresponding dynamics are qualitatively different from fluid explosions due to the topological constraint of the conservation of the magnetic flux. Using analytical, relativistic MHD, as well as force-free calculations, we find that the creation of a relativistically expanding, causally disconnected flow obeys a threshold condition: it requires a sufficiently high initial overpressure and a sufficiently quick decrease of the pressure in the external medium (the preexplosion wind). In the subcritical case the magnetic cloud just “puffs up” and quietly expands with the preflare wind. We also find a compact analytical solution to Prendergast’s problem—expansion of force-free plasma into a vacuum.
Medical reports and news sources raise the possibility that flows created during breathing, speaking, laughing, singing, or exercise could be the means by which asymptomatic individuals contribute to the spread of the SARS-CoV-2 virus. We use experiments and simulations to quantify how exhaled air is transported in speech. Phonetic characteristics introduce complexity to the airflow dynamics, and plosive sounds, such as “P,” produce intense vortical structures that behave like “puffs” and rapidly reach 1 m. However, speech, corresponding to a train of such puffs, creates a conical, turbulent, jet-like flow and easily produces directed transport over 2 m in 30 s of conversation. This work should inform public health guidance for risk reduction and mitigation strategies of airborne pathogen transmission.
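The distances quoted above are consistent with textbook scaling for a momentum-driven turbulent starting jet, whose front advances roughly as L(t) = C * M^(1/4) * t^(1/2), with M the kinematic momentum flux injected at the mouth. The sketch below is a minimal order-of-magnitude estimate, not the paper's model; the mouth diameter, exit velocity, and prefactor are illustrative assumptions.

```python
import math

def jet_penetration_m(t_s, mouth_diameter_m=0.02, exit_velocity_ms=2.5, prefactor=2.0):
    """Order-of-magnitude front position (metres) of a turbulent starting jet after t_s seconds.

    Uses the standard scaling L(t) = C * M**0.25 * sqrt(t), where M is the
    kinematic momentum flux at the source. All default values are illustrative
    assumptions, not parameters taken from the paper.
    """
    area = math.pi * (mouth_diameter_m / 2.0) ** 2   # mouth opening area (m^2)
    momentum_flux = area * exit_velocity_ms ** 2     # kinematic momentum flux M (m^4/s^2)
    return prefactor * momentum_flux ** 0.25 * math.sqrt(t_s)

if __name__ == "__main__":
    for t in (1, 10, 30):
        print(f"t = {t:2d} s -> L ~ {jet_penetration_m(t):.1f} m")
```

With these assumed numbers the front reaches roughly 0.4 m after 1 s and a little over 2 m after 30 s, the same order of magnitude as the 1 m puff range and the 2 m in 30 s of conversation quoted above.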
- Award ID(s): 2029370
- NSF-PAR ID: 10194524
- Publisher / Repository: Proceedings of the National Academy of Sciences
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 117
- Issue: 41
- ISSN: 0027-8424
- Size: p. 25237-25245
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Music and language are two fundamental forms of human communication. Many studies examine the development of music‐ and language‐specific knowledge, but few studies compare how listeners know they are listening to music or language. Although we readily differentiate these domains, how we distinguish music and language—and especially speech and song—is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to illustrate that 4‐ to 17‐year‐olds and adults have verbalizable distinctions for speech and song. At all ages, listeners described speech and song differences based on acoustic features, but compared with older children, 4‐ to 7‐year‐olds more often used volume to describe differences, suggesting that they are still learning to identify the features most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4–8‐year‐olds and adults readily categorize speech and song, but this ability improves with age, especially for identifying song. Despite generally rating song as more speech‐like, 4‐ and 6‐year‐olds rated ambiguous speech–song stimuli as more song‐like than 8‐year‐olds and adults. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4‐ and 6‐year‐olds’ song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song and suggest that children under age 8 are still learning what features are important for categorizing utterances as speech or song.
Research Highlights: Children and adults conceptually and perceptually categorize speech and song from age 4.
Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song; one plausible way to compute these features is sketched after these highlights.
Acoustic cue weighting changes with age, becoming adult‐like at age 8 for perceptual categorization and at age 12 for conceptual differentiation.
Young children are still learning to categorize speech and song, which leaves open the possibility that music‐ and language‐specific skills are not so domain‐specific.
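The four features named above can be operationalized in several ways; the sketch below (referenced in the highlights) is one plausible implementation in Python with librosa and NumPy, not the authors' exact definitions. The pyin-based semitone standard deviation for F0 instability, the HPSS harmonic-energy ratio for harmonicity, and the positive frame-to-frame STFT difference for spectral flux are all assumptions.

```python
import numpy as np
import librosa

def speech_song_features(path):
    """Compute rough analogues of the four predictive features for one utterance.

    These operationalizations are assumptions for illustration, not the study's
    exact feature definitions.
    """
    y, sr = librosa.load(path, sr=None)

    # Utterance duration in seconds.
    duration = librosa.get_duration(y=y, sr=sr)

    # F0 instability: standard deviation (in semitones) of the pyin pitch track over voiced frames.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)
    f0_voiced = f0[~np.isnan(f0)]
    f0_instability = (np.std(12 * np.log2(f0_voiced / np.median(f0_voiced)))
                      if f0_voiced.size else np.nan)

    # Harmonicity proxy: share of signal energy in the harmonic component after HPSS.
    y_harmonic, _ = librosa.effects.hpss(y)
    harmonicity = np.sum(y_harmonic ** 2) / np.sum(y ** 2)

    # Spectral flux: mean positive frame-to-frame change of the magnitude spectrogram.
    S = np.abs(librosa.stft(y))
    spectral_flux = np.mean(np.sqrt(np.sum(np.clip(np.diff(S, axis=1), 0, None) ** 2, axis=0)))

    return {"duration_s": duration,
            "f0_instability_semitones": f0_instability,
            "harmonicity": harmonicity,
            "spectral_flux": spectral_flux}
```

Each returned value could then serve as a predictor of listeners' song ratings, for example in a regression across stimuli; the function and key names here are hypothetical and chosen only to mirror the abstract.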
-
Abstract Background: Airborne viral pathogens like severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can be encapsulated and transmitted through liquid droplets/aerosols formed during human respiratory events. Methods: The number and extent of droplets/aerosols at distances between 1 and 6 ft (0.305–1.829 m) for a participant wearing no face covering, a cotton single-layer cloth face covering, and a 3-layer disposable face covering were measured for defined speech and cough events. The data include planar particle imagery to illuminate emissions by a light-sheet and local aerosol/droplet probes taken with phase Doppler interferometry and an aerodynamic particle sizer. Results: Without face coverings, droplets/aerosols were detected up to a maximum of 1.25 m (4.1 ft ± 0.22–0.28 ft) during speech and up to 1.37 m (4.5 ft ± 0.19–0.33 ft) while coughing. The cloth face covering reduced maximum axial distances to 0.61 m (2.0 ft ± 0.11–0.15 ft) for speech and to 0.67 m (2.2 ft ± 0.02–0.20 ft) while coughing. Using the disposable face covering, safe distance was reduced further to 0.15 m (0.50 ft ± 0.01–0.03 ft) measured for both emission scenarios. In addition, the use of face coverings was highly effective in reducing the count of expelled aerosols. Conclusions: The experimental study indicates that 0.914 m (3 ft) physical distancing with face coverings is equally as effective at reducing aerosol/droplet exposure as 1.829 m (6 ft) with no face covering.
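Because the maxima are reported in both metres and feet, the relative effect of each covering is easy to restate; the snippet below only rearranges the numbers given in the abstract (the 0.3048 m/ft conversion factor and the variable names are the only additions).

```python
# Maximum axial distances reported in the abstract, in metres.
max_distance_m = {
    "no covering":         {"speech": 1.25, "cough": 1.37},
    "cloth covering":      {"speech": 0.61, "cough": 0.67},
    "disposable covering": {"speech": 0.15, "cough": 0.15},
}

FT_PER_M = 1.0 / 0.3048  # unit conversion, not a measured quantity

baseline = max_distance_m["no covering"]
for covering, distances in max_distance_m.items():
    for event, d in distances.items():
        reduction = 100.0 * (1.0 - d / baseline[event])
        print(f"{covering:>20} | {event:6} | {d:.2f} m ({d * FT_PER_M:.1f} ft) | {reduction:3.0f}% shorter")
```

With the abstract's numbers, the cloth covering roughly halves the maximum distance for both speech and cough, and the 3-layer disposable covering cuts it by close to 90%, which is the comparison behind the 3 ft with covering versus 6 ft without covering conclusion.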
-
Abstract Parental responsiveness to infant behaviors is a strong predictor of infants' language and cognitive outcomes. The mechanisms underlying this effect, however, are relatively unknown. We examined the effects of parent speech on infants' visual attention, manual actions, hand‐eye coordination, and dyadic joint attention during parent‐infant free play. We report on two studies that used head‐mounted eye trackers in increasingly naturalistic laboratory environments. In Study 1, 12‐to‐24‐month‐old infants and their parents played on the floor of a seminaturalistic environment with 24 toys. In Study 2, a different sample of dyads played in a home‐like laboratory with 10 toys and no restrictions on their movement. In both studies, we present evidence that responsive parent speech extends the duration of infants' multimodal attention. This social “boost” of parent speech impacts multiple behaviors that have been linked to later outcomes—visual attention, manual actions, hand‐eye coordination, and joint attention. Further, the amount that parents talked during the interaction was negatively related to the effects of parent speech on infant attention. Together, these results provide evidence of a trade‐off between quantity of speech and its effects, suggesting multiple pathways through which parents impact infants' multimodal attention to shape the moment‐by‐moment dynamics of an interaction.
-
Abstract This study compares how English-speaking adults and children from the United States adapt their speech when talking to a real person and a smart speaker (Amazon Alexa) in a psycholinguistic experiment. Overall, participants produced more effortful speech when talking to a device (longer duration and higher pitch). These differences also varied by age: children produced even higher pitch in device-directed speech, suggesting a stronger expectation to be misunderstood by the system. In support of this, we see that after a staged recognition error by the device, children increased pitch even more. Furthermore, both adults and children displayed the same degree of variation in their responses for whether “Alexa seems like a real person or not”, further indicating that children’s conceptualization of the system’s competence shaped their register adjustments, rather than an increased anthropomorphism response. This work speaks to models on the mechanisms underlying speech production, and human–computer interaction frameworks, providing support for routinized theories of spoken interaction with technology.