Search for: All records

Award ID contains: 2020969

  1. Abstract: What makes a word easy to learn? Early-learned words are frequent and tend to name concrete referents. But words typically do not occur in isolation. Some words are predictable from their contexts; others are less so. Here, we investigate whether predictability relates to when children start producing different words (age of acquisition; AoA). We operationalized predictability in terms of a word's surprisal in child-directed speech, computed using n-gram and long short-term memory (LSTM) language models. Predictability derived from LSTMs was generally a better predictor than predictability derived from n-gram models. Across five languages, average surprisal was positively correlated with the AoA of predicates and function words but not nouns. Controlling for concreteness and word frequency, more predictable predicates and function words were learned earlier. Differences in predictability between languages were associated with cross-linguistic differences in AoA: the same word (when it was a predicate) was produced earlier in languages where the word was more predictable. (A minimal illustrative sketch of the surprisal computation appears after this listing.)
  2. Abstract: There are towns in which language-of-thought (LoT) is the best game. But do we live in one? I go through three properties that characterize the LoT hypothesis (discrete constituents, role-filler independence, and logical operators) and argue that in each case predictions from the LoT hypothesis are a poor fit to actual human cognition. As a hypothesis of what human cognition ought to be like, LoT departs from empirical reality.
  3. Abstract: Across languages, words carve up the world of experience in different ways. For example, English lacks an equivalent to the Chinese superordinate noun tiáowèipǐn, which is loosely translated as “ingredients used to season food while cooking.” Do such differences matter? A conventional label may offer a uniquely effective way of communicating. On the other hand, lexical gaps may be easily bridged by the compositional power of language. After all, most of the ideas we want to express do not map onto simple lexical forms. We conducted a referential Director/Matcher communication task with adult speakers of Chinese and English. Directors provided a clue that Matchers used to select words from a word grid. The three target words corresponded to a superordinate term (e.g., beverages) in either Chinese or English but not both. We found that Matchers were more accurate at choosing the target words when their language lexicalized the target category. This advantage was driven entirely by the Directors’ use/non-use of the intended superordinate term. The presence of a conventional superordinate had no measurable effect on speakers’ within- or between-category similarity ratings. These results show that the ability to rely on a conventional term is surprisingly important despite the flexibility languages offer to communicate about non-lexicalized categories.
  4. Nölle, J; Raviv, L; Graham, E; Hartmann, S; Jadoul, Y; Josserand, M; Matzinger, T; Mudd, K; Pleyer, M; Slonimska, A (Ed.)
    Successful communication is thought to require members of a speech community to learn common mappings between words and their referents. But if one person’s concept of CAR is very different from another person’s, successful communication might fail despite the common mappings because different people would mean different things by the same word. Here we investigate the possibility that one source of representational alignment is language itself. We report a series of neural network simulations investigating how representational alignment changes as a function of agents having more or less similar visual experiences (overlap in “visual diet”) and how it changes with exposure to category names. We find that agents with more similar visual experiences have greater representational overlap. However, the presence of category labels not only increases representational overlap, but also greatly reduces the importance of having similar visual experiences. The results suggest that ensuring representational alignment may be one of language’s evolved functions. 
    Free, publicly-accessible full text available October 9, 2025
  5. It is often assumed that how we talk about the world matters a great deal. This is one reason why conceptual engineers seek to improve our linguistic practices by advocating novel uses of our words, or by inventing new ones altogether. A core idea shared by conceptual engineers is that by changing our language in this way, we can reap all sorts of cognitive and practical benefits, such as improving our theorizing, combating hermeneutical injustice, or promoting social emancipation. But how do changes at the linguistic level translate into any of these worthwhile benefits? In this paper, we propose the nameability account as a novel answer to this question. More specifically, we argue that what linguistic resources are readily available to us directly affects our cognitive performance on various categorization‐related tasks. Consequently, our performance on such tasks can be improved by making controlled changes to our linguistic resources. We argue that this account supports and extends recent motivations for conceptual engineering, as categorization plays an important role in both theoretical and practical contexts. 
    Free, publicly-accessible full text available September 15, 2025
  6. Tulving characterized semantic memory as a vast repository of meaning that underlies language and many other cognitive processes. This perspective on lexical and conceptual knowledge galvanized a new era of research undertaken by numerous fields, each with their own idiosyncratic methods and terminology. For example, “concept” has different meanings in philosophy, linguistics, and psychology. As such, many fundamental constructs used to delineate semantic theories remain underspecified and/or opaque. Weak construct specificity is among the leading causes of the replication crisis now facing psychology and related fields. Term ambiguity hinders cross-disciplinary communication, falsifiability, and incremental theory-building. Numerous cognitive subdisciplines (e.g., vision, affective neuroscience) have recently addressed these limitations via the development of consensus-based guidelines and definitions. The project to follow represents our effort to produce a multidisciplinary semantic glossary consisting of succinct definitions, background, principled dissenting views, ratings of agreement, and subjective confidence for 17 target constructs (e.g., abstractness, abstraction, concreteness, concept, embodied cognition, event semantics, lexical-semantic, modality, representation, semantic control, semantic feature, simulation, semantic distance, semantic dimension). We discuss potential benefits and pitfalls (e.g., implicit bias, prescriptiveness) of these efforts to specify a common nomenclature that other researchers might index in specifying their own theoretical perspectives (e.g., They said X, but I mean Y).
    Free, publicly-accessible full text available September 4, 2025
  7. Free, publicly-accessible full text available September 3, 2025
  8. Nölle, J; Raviv, L; Graham, E; Hartmann, S; Jadoul, Y; Josserand, M; Matzinger, T; Mudd, K; Pleyer, M; Slonimska, A (Ed.)
    Generic statements like “tigers are striped” and “cars have radios” communicate information that is, in general, true. However, while the first statement is true *in principle*, the second is true only statistically. People are exquisitely sensitive to this principled-vs-statistical distinction. It has been argued that this ability to distinguish between something being true by virtue of it being a category member versus being true because of mere statistical regularity is a general property of people’s conceptual machinery and cannot itself be learned. We investigate whether the distinction between principled and statistical properties can be learned from language itself. If so, it raises the possibility that language experience can bootstrap core conceptual distinctions and that it is possible to learn sophisticated causal models directly from language. We find that language models are all sensitive to statistical prevalence but struggle to represent the principled-vs-statistical distinction when controlling for prevalence; only GPT-4 succeeds. (An illustrative sketch of scoring generic statements with a language model appears after this listing.)
    Free, publicly-accessible full text available July 8, 2025
  9. It is commonly assumed that inner speech—the experience of thought as occurring in a natural language—is a human universal. Recent evidence, however, suggests that the experience of inner speech in adults varies from near constant to nonexistent. We propose a name for a lack of the experience of inner speech—anendophasia—and report four studies examining some of its behavioral consequences. We found that adults who reported low levels of inner speech (N = 46) had lower performance on a verbal working memory task and more difficulty performing rhyme judgments compared with adults who reported high levels of inner speech (N = 47). Task-switching performance—previously linked to endogenous verbal cueing—and categorical effects on perceptual judgments were unrelated to differences in inner speech.
    Free, publicly-accessible full text available July 1, 2025
  10. Nölle, J; Raviv, L; Graham, E; Hartmann, S; Jadoul, Y; Josserand, M; Matzinger, T; Mudd, K; Pleyer, M; Slonimska, A (Ed.)
    Why are some words more frequent than others? Surprisingly, the obvious answers to this seemingly simple question, e.g., that frequent words reflect greater communicative needs, are either wrong or incomplete. We show that a word’s frequency is strongly associated with its position in a semantic association network. More centrally located words are more frequent. But is a word’s centrality in a network merely a reflection of the inherent centrality of the word’s meaning? Through cross-linguistic comparisons, we found that differences in the frequency of translation-equivalents are predicted by differences in the word’s network structure in the different languages. Specifically, frequency was linked to how many connections a word had and to its capacity to bridge words that are typically not linked. This hints that a word’s frequency (and with it, its meaning) may change as a function of the word’s associations with other words. (An illustrative network-centrality sketch appears after this listing.)
    Free, publicly-accessible full text available July 1, 2025
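
The surprisal measure in entry 1 is the negative log probability of a word given its context, surprisal(w) = -log2 P(w | context), averaged over a word's occurrences. Below is a minimal sketch of that computation; the toy corpus, the add-one smoothing, and the bigram simplification are illustrative assumptions, not the paper's actual n-gram or LSTM setup trained on child-directed speech.

```python
import math
from collections import Counter, defaultdict

# A toy stand-in for a child-directed speech corpus (hypothetical data).
corpus = [
    "you want more milk".split(),
    "do you want the ball".split(),
    "look at the big dog".split(),
    "the dog wants the ball".split(),
]

# Count bigrams and their left contexts.
bigrams, contexts, vocab = Counter(), Counter(), set()
for sent in corpus:
    tokens = ["<s>"] + sent
    vocab.update(tokens)
    for prev, word in zip(tokens, tokens[1:]):
        bigrams[(prev, word)] += 1
        contexts[prev] += 1

V = len(vocab)

def surprisal(prev, word):
    # Add-one smoothed bigram probability; surprisal in bits.
    p = (bigrams[(prev, word)] + 1) / (contexts[prev] + V)
    return -math.log2(p)

# Average surprisal per word type across all of its occurrences.
totals, counts = defaultdict(float), Counter()
for sent in corpus:
    tokens = ["<s>"] + sent
    for prev, word in zip(tokens, tokens[1:]):
        totals[word] += surprisal(prev, word)
        counts[word] += 1

avg_surprisal = {w: totals[w] / counts[w] for w in counts}
print(sorted(avg_surprisal.items(), key=lambda kv: kv[1]))
```

In the study itself, the per-token surprisals would instead come from n-gram and LSTM language models trained on child-directed speech in each of the five languages, but the averaging over a word's occurrences works the same way.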
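Entry 8 asks whether language models represent the principled-vs-statistical distinction behind generic statements. One simple way to probe this is to score candidate statements with an off-the-shelf causal language model, as in the sketch below; the use of GPT-2 via the Hugging Face transformers library and the example prompts are illustrative assumptions, not the paper's materials or method.

```python
# A hypothetical probe: score generic statements with a small causal LM.
# GPT-2 and these prompts are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(text: str) -> float:
    """Total log probability (natural log) the model assigns to the text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean cross-entropy over predicted tokens;
    # multiply by the number of predicted tokens to recover a total log prob.
    return -out.loss.item() * (ids.shape[1] - 1)

statements = [
    "In principle, tigers are striped.",  # principled property
    "In principle, cars have radios.",    # merely statistical property
    "Most tigers are striped.",
    "Most cars have radios.",
]
for s in statements:
    print(f"{sentence_logprob(s):8.2f}  {s}")
```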
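Entry 10 links word frequency to a word's position in a semantic association network: how many connections it has and how well it bridges otherwise unconnected words. The sketch below shows two plausible operationalizations of those notions, degree and betweenness centrality, computed with networkx over a made-up toy association graph; the graph and the choice of measures are assumptions for illustration, not the study's data or exact metrics.

```python
import networkx as nx

# A toy word-association graph (hypothetical edges, not the study's data).
edges = [
    ("dog", "cat"), ("dog", "bone"), ("dog", "walk"),
    ("cat", "milk"), ("milk", "breakfast"), ("walk", "park"),
    ("park", "tree"), ("tree", "leaf"), ("breakfast", "morning"),
    ("dog", "park"),
]
G = nx.Graph(edges)

# "How many connections a word has" ~ degree centrality;
# "capacity to bridge words that are typically not linked" ~ betweenness centrality.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for word in sorted(G.nodes()):
    print(f"{word:10s} degree={degree[word]:.2f} betweenness={betweenness[word]:.2f}")
```

Under the paper's hypothesis, words scoring high on measures like these should also tend to be the more frequent words in a language.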