Search for: All records

Award ID contains: 1749407


  1. Abstract

    The “binding problem” has been a central question in vision science for some 30 years: when encoding multiple objects or maintaining them in working memory, how do we correctly represent which feature belongs to which object? In this letter we argue that the boundaries of this research program in fact extend far beyond vision, and we call for coordinated pursuit across the broader cognitive science community of this central question for cognition, which we dub “Binding Problem 2.0”.

     
  2. Abstract

    Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified and automatic word recognition process than is often assumed. We present evidence from isolated words that neural effects of phoneme probability, quantified by phoneme surprisal, are significantly stronger than (statistically null) effects of phoneme-by-phoneme lexical uncertainty, quantified by cohort entropy. In contrast, we find robust effects of both cohort entropy and phoneme surprisal during perception of connected speech, with a significant interaction between the contexts. This dissociation rules out models of word recognition in which phoneme surprisal and cohort entropy are common indicators of a uniform process, even though these closely related information-theoretic measures both arise from the probability distribution of wordforms consistent with the input. We propose that phoneme surprisal effects reflect automatic access of a lower level of representation of the auditory input (e.g., wordforms) while the occurrence of cohort entropy effects is task sensitive, driven by a competition process or a higher-level representation that is engaged late (or not at all) during the processing of single words.
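    The two information-theoretic measures can be made concrete. Given the cohort of wordforms consistent with the phonemes heard so far, phoneme surprisal is the negative log probability of the incoming phoneme under that cohort, and cohort entropy is the entropy of the probability distribution over the cohort's candidate words. A minimal sketch of these standard definitions, using a toy four-word lexicon with invented frequencies (not data from the study; orthographic prefixes stand in for phoneme sequences):

    ```python
    import math

    # Toy lexicon: wordform -> frequency count (invented for illustration).
    lexicon = {"cat": 60, "cap": 25, "can": 15, "dog": 80}

    def cohort(prefix):
        """Wordforms still consistent with the input heard so far."""
        return {w: f for w, f in lexicon.items() if w.startswith(prefix)}

    def phoneme_surprisal(prefix, phoneme):
        """-log2 P(next phoneme | current cohort)."""
        before = cohort(prefix)
        after = cohort(prefix + phoneme)
        return -math.log2(sum(after.values()) / sum(before.values()))

    def cohort_entropy(prefix):
        """Entropy of the word probability distribution over the current cohort."""
        c = cohort(prefix)
        total = sum(c.values())
        return -sum(f / total * math.log2(f / total) for f in c.values())
    ```

    Although both quantities derive from the same cohort distribution, they can dissociate: in this toy lexicon, after hearing "c" the phoneme "a" is fully predictable (surprisal is zero), yet cohort entropy stays high because three candidate words (cat, cap, can) remain in play.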

     
  3. Abstract

    Sustained anterior negativities have been the focus of much neurolinguistics research concerned with the language-memory interface, but what neural computations do they actually reflect? During the comprehension of sentences with long-distance dependencies between elements (such as object wh-questions), prior event-related potential work has demonstrated sustained anterior negativities (SANs) across the dependency region. SANs have been traditionally interpreted as an index of working memory resources responsible for storing the first element (e.g., wh-phrase) until the second element (e.g., verb) is encountered and the two can be integrated. However, it is also known that humans pursue top-down approaches in processing long-distance dependencies—predicting units and structures before actually encountering them. This study tests the hypothesis that SANs are a more general neural index of syntactic prediction. Across three experiments, we evaluated SANs in traditional wh-dependency contrasts, but also in sentences in which subordinating adverbials (e.g., although) trigger a prediction for a second clause, compared to temporal adverbials (e.g., today) that do not. We find no SAN associated with subordinating adverbials, contra the syntactic prediction hypothesis. More surprisingly, we observe SANs across matrix questions but not embedded questions. Since both involved identical long-distance dependencies, these results are also inconsistent with the traditional syntactic working memory account of the SAN. We suggest that a more general hypothesis that sustained neural activity supports working memory can be maintained, however, if the sustained anterior negativity reflects working memory encoding at the non-linguistic discourse representation level, rather than at the sentence level.

     
  4. Event concepts of common verbs (e.g. eat, sleep) can be broadly shared across languages, but a given language’s rules for subcategorization are largely arbitrary and vary substantially across languages. When subcategorization information does not match between first language (L1) and second language (L2), how does this mismatch impact L2 speakers in real time? We hypothesized that subcategorization knowledge in L1 is particularly difficult for L2 speakers to override online. Event-related potential (ERP) responses were recorded from English sentences that include verbs that were ambitransitive in Mandarin but intransitive in English (*My sister listened the music). While L1 English speakers showed a prominent P600 effect to subcategorization violations, L2 English speakers whose L1 was Mandarin showed some sensitivity in offline responses but not in ERPs. This suggests that computing verb–argument relations, although seemingly one of the basic components of sentence comprehension, in fact requires accessing lexical syntax, which may be vulnerable to L1 interference in L2. However, our exploratory analysis showed that more native-like behavioral accuracy was associated with a more native-like P600 effect, suggesting that, with enough experience, L2 speakers can ultimately overcome this interference.

     
    Free, publicly-accessible full text available October 12, 2024
  5. In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of the lexicalist assumptions of these models, many kinds of sentences that speakers produce and comprehend—in a variety of languages, including English—are challenging for them to account for. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like, as a model that does not rely on a lemma representation, but instead represents that knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
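    The architectural contrast can be sketched schematically. In a lexicalist model, one lemma entry bundles meaning, syntax, and form; in the alternative, two independent mappings are consulted, so form can depend on the syntactic context rather than being fixed by a single stored entry. All entries below are invented for illustration and make no claim about the actual model's data structures:

    ```python
    # Lexicalist sketch: one lemma stores meaning, syntax, and form together,
    # so the retrieved form is fixed regardless of syntactic context.
    lemma_lexicon = {
        "GO": {"meaning": "MOTION", "syntax": "V", "form": "go"},
    }

    # Non-lexicalist sketch: two separate mappings,
    # (a) meaning -> syntax and (b) syntax (+ context) -> form.
    meaning_to_syntax = {"MOTION": "V"}
    syntax_to_form = {
        ("V", "MOTION", "present"): "go",
        ("V", "MOTION", "past"): "went",  # suppletive form chosen by context
    }

    def realize(concept, tense):
        """Retrieve syntax from meaning, then form from the syntactic context."""
        head = meaning_to_syntax[concept]
        return syntax_to_form[(head, concept, tense)]
    ```

    The suppletive past tense (went) illustrates the point at issue: here the form is determined jointly by syntax and context at a single retrieval step, rather than read off a lemma's stored form field.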

     
  6. Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition. 
  8. How quickly can verb–argument relations be computed to impact predictions of a subsequent argument? We take advantage of the substantial differences in verb–argument structure provided by Mandarin, whose compound verbs encode complex event relations, such as resultatives (Kid bit-broke lip: the kid bit his lip such that it broke) and coordinates (Store owner hit-scolded employee: the store owner hit and scolded an employee). We tested sentences in which the object noun could be predicted on the basis of the preceding compound verb, and used N400 responses to the noun to index successful prediction. By varying the delay between verb and noun, we show that prediction is delayed in the resultative context (broken-BY-biting) relative to the coordinate one (hitting-AND-scolding). These results present a first step towards temporally dissociating the fine-grained subcomputations required to parse and interpret verb–argument relations.