Title: The Actuation Problem
The actuation problem asks why a linguistic change occurs in a particular language at a particular time and place. Responses to this problem are multifaceted. This review approaches the problem of actuation through the lens of sound change, examining it from both individual and population perspectives. Linguistic changes ultimately actuate in the form of idiolectal differences. An understanding of language change actuation at the idiolectal level requires an understanding of (a) how individual speaker-listeners’ different past linguistic experiences and physical, perceptual, cognitive, and social makeups affect the way they process and analyze the primary learning data and (b) how these factors lead to divergent representations and grammars across speaker-listeners. Population-level incrementation and propagation of linguistic innovation depend not only on the nature of contact between speakers with unique idiolects but also on individuals who have the wherewithal to take advantage of the linguistic innovations they encounter to achieve particular ideological projects at any given moment. Because of the vast number of contingencies that need to be aligned properly, the incrementation and propagation of linguistic innovation are predicted to be rare. Agent-based modeling promises to provide a controlled way to investigate the stochastic nature of language change propagation, but a comprehensive model of linguistic change actuation at the individual level remains elusive.
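The agent-based modeling mentioned above can be illustrated with a minimal sketch. This toy simulation is not the model from the review; all names and parameter values (the `bias` adoption weighting, the agent and step counts) are hypothetical, chosen only to show how the stochastic propagation of a single innovation can be studied under controlled conditions:

```python
import random

def simulate_propagation(n_agents=100, n_steps=5000, bias=0.6, seed=1):
    """Toy agent-based model of innovation propagation.

    Each step, a random listener samples a random speaker and may copy
    that speaker's variant. The innovative variant is adopted with
    probability `bias` and the conservative variant with 1 - bias, a
    crude stand-in for social weighting of the innovation.
    Returns the final share of agents using the innovation.
    """
    rng = random.Random(seed)
    innovative = [False] * n_agents
    innovative[0] = True  # a single innovator seeds the change
    for _ in range(n_steps):
        listener = rng.randrange(n_agents)
        speaker = rng.randrange(n_agents)
        if speaker == listener:
            continue
        # innovative tokens are adopted more readily than conservative ones
        p_adopt = bias if innovative[speaker] else 1.0 - bias
        if rng.random() < p_adopt:
            innovative[listener] = innovative[speaker]
    return sum(innovative) / n_agents
```

Running the simulation across many seeds shows the point the abstract makes: even with a bias favoring the innovation, a single innovator's variant frequently dies out by chance, so successful propagation is the exception rather than the rule.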
Award ID(s):
1827409
PAR ID:
10519195
Author(s) / Creator(s):
Publisher / Repository:
Annual Review
Date Published:
Journal Name:
Annual Review of Linguistics
Volume:
9
Issue:
1
ISSN:
2333-9683
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract A large percentage of the world’s languages – anywhere from 50 to 90% – are currently spoken in what we call shift ecologies, situations of unstable bi- or multilingualism where speakers, and in particular younger speakers, do not use their ancestral language but rather speak the majority language. The present paper addresses several interrelated questions with regard to the linguistic effects of bilingualism in such shift ecologies. These language ecologies are dynamic: language choices and preferences change, as do speakers’ proficiency levels. One result is multiple kinds of variation in these endangered language communities. Understanding change and shift requires a methodology for establishing a baseline; descriptive grammars rarely provide information about usage and multilingual language practices. An additional confounder is a range of linguistic variation: regional (dialectal); generational (language-internal change without contact or shift); contact-based (contact with or without shift); and proficiency-based (variation which develops as a result of differing levels of input and usage). Widespread, ongoing language shift today provides opportunities to examine the linguistic changes exhibited by shifting speakers, that is, to zero in on language change and loss in process, rather than as an end product. 
  2. Abstract Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners’ language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a “release from masking” from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a “one-man bilingual cocktail party” selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin–English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin–English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the “cocktail party” paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The “one-man bilingual cocktail party” establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin–English bilinguals. 
  3. Language learning is a complex issue of interest to linguists, computer scientists, and psychologists alike. While the different fields approach these questions at different levels of granularity, findings in one field profoundly affect how the others proceed. My dissertation examines the perceptual and linguistic generalizations regarding the units that make up words (phonemes, morphemes, and vocal quality) in Polish and English to better understand how both humans and computers formulate these concepts in language. I use computational modeling and machine learning to investigate Polish morphophonology in two ways. First, I examine consonant clusters at the beginning of Polish words to see what parameters determine human-like learnability, compared to a survey of native speakers. I run several studies to compare learning with gradient or categorical data, each at the cluster, bigram, and featural level. Second, I examine Polish yer alternation and study whether machine learning approaches can generalize morphophonological information to target this pattern when given a larger Polish dataset. Using low-level neural networks and a classification-and-regression tree (CART) decision algorithm, I examine how well they use morphological and phonological information to make generalizations that capture a small subset of the Polish vocabulary. Additionally, I conduct a psycholinguistic experiment with English speakers to further establish what level of attention listeners may give when building phonological representations. I test this by extending a previous study finding that real word primes make rejection of nonword primes more difficult, determining that the effect generalizes across speakers. This research addresses a tension in modeling the computational problem of language learning between the formalization of representation and the mechanics of the learning apparatus.
Different levels of abstraction can give more sophisticated insight into the data at hand, but at a cost that may not be representative of human learning. I argue that computational linguistic questions such as these provide an interesting window into the strengths and limitations of machine learning as compared to the human language learning faculty. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.] ERIC # ED663172
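The bigram-level phonotactic learning described in this abstract can be sketched with a smoothed bigram model over attested onset clusters. This is an illustrative reconstruction, not the dissertation's actual model; the toy onset inventory and the `alpha` smoothing parameter are hypothetical:

```python
from collections import Counter
from math import log

def train_bigram_scores(onsets):
    """Fit bigram and unigram counts over attested onset clusters.

    Each onset is a string of consonant symbols; '#' marks the word
    edge so that first-consonant preferences are also learned.
    """
    bigrams = Counter()
    unigrams = Counter()
    for onset in onsets:
        seq = "#" + onset
        for a, b in zip(seq, seq[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return bigrams, unigrams

def score(onset, bigrams, unigrams, alpha=0.1):
    """Log-probability of an onset under the bigram model with
    add-alpha smoothing; higher scores correspond to clusters the
    model treats as more well-formed."""
    vocab = len({b for _, b in bigrams}) + 1
    seq = "#" + onset
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        total += log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab))
    return total
```

Trained on a handful of Polish-like onsets such as "str" and "tr", the model assigns a higher score to the attested cluster "tr" than to the unattested reversal "rt", the kind of gradient well-formedness judgment that can then be compared against native-speaker survey data.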
  4. It is now well established that memory representations of words are acoustically rich. Alongside this development, a related line of work has shown that the robustness of memory encoding varies widely depending on who is speaking. In this dissertation, I explore the cognitive basis of memory asymmetries at a larger linguistic level (spoken sentences), using the mechanism of socially guided attention allocation to explain how listeners dynamically shift cognitive resources based on the social characteristics of speech. This dissertation consists of three empirical studies designed to investigate the factors that pattern asymmetric memory for spoken language. In the first study, I explored specificity effects at the level of the sentence. While previous research on specificity has centralized the lexical item as the unit of study, I showed that talker-specific memory patterns are also robust at a larger linguistic level, making it likely that acoustic detail is fundamental to human speech perception more broadly. In the second study, I introduced a set of diverse talkers and showed that memory patterns vary widely within this group, and that the memorability of individual talkers is somewhat consistent across listeners. In the third study, I showed that memory behaviors do not depend merely on the speech characteristics of the talker or on the content of the sentence, but on the unique relationship between these two. Memory dramatically improved when semantic content of sentences was congruent with widely held social associations with talkers based on their speech, and this effect was particularly pronounced when listeners had a high cognitive load during encoding. These data collectively provide evidence that listeners allocate attentional resources on an ad hoc, socially guided basis. 
Listeners subconsciously draw on fine-grained phonetic information and social associations to dynamically adapt low-level cognitive processes while understanding spoken language and encoding it to memory. This approach positions variation in speech not as an obstacle to perception, but as an information source that humans readily recruit to aid in the seamless understanding of spoken language. 
  5. Human listeners are better at telling apart speakers of their native language than speakers of other languages, a phenomenon known as the language familiarity effect. The recent observation of such an effect in infants as young as 4.5 months of age (Fecher & Johnson, in press) has led to new difficulties for theories of the effect. On the one hand, retaining classical accounts, which rely on sophisticated knowledge of the native language (Goggin, Thompson, Strube, & Simental, 1991), requires an explanation of how infants could acquire this knowledge so early. On the other hand, letting go of these accounts requires an explanation of how the effect could arise in the absence of such knowledge. In this paper, we build on algorithms from unsupervised machine learning and zero-resource speech technology to propose, for the first time, a feasible acquisition mechanism for the language familiarity effect in infants. Our results show how, without relying on sophisticated linguistic knowledge, infants could develop a language familiarity effect through statistical modeling at multiple time-scales of the acoustics of the speech signal to which they are exposed.
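The statistical-modeling idea in this abstract can be sketched in miniature: fit a distribution over acoustic frames of the ambient language and score new speech against it, so that familiar-language speech receives a higher likelihood without any symbolic linguistic knowledge. This is a deliberately simplified stand-in for the paper's unsupervised algorithms; the diagonal-Gaussian model and the frame representation are assumptions for illustration only:

```python
from math import log, pi

def fit_gaussian(frames):
    """Per-dimension Gaussian (diagonal covariance) over acoustic
    frames, a minimal stand-in for the statistical model a learner
    might build of the ambient language's sound distribution."""
    n = len(frames)
    d = len(frames[0])
    mean = [sum(f[i] for f in frames) / n for i in range(d)]
    var = [max(sum((f[i] - mean[i]) ** 2 for f in frames) / n, 1e-6)
           for i in range(d)]
    return mean, var

def loglik(frames, model):
    """Average per-frame log-likelihood under the fitted model;
    speech drawn from the modeled ('familiar') distribution scores
    higher than speech from an unfamiliar one."""
    mean, var = model
    total = 0.0
    for f in frames:
        for x, m, v in zip(f, mean, var):
            total += -0.5 * (log(2 * pi * v) + (x - m) ** 2 / v)
    return total / len(frames)
```

In this sketch, the familiarity asymmetry falls out of pure distribution learning: frames resembling the training distribution score higher than frames from a differently distributed "language", with no phonological categories required.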