
This content will become publicly available on December 21, 2024

Title: Lexico-syntactic constraints influence verbal working memory in sentence-like lists
We test predictions from the language emergent perspective on verbal working memory that lexico-syntactic constraints should support both item and order memory. In natural language, long-term knowledge of lexico-syntactic patterns involving part of speech, verb biases, and noun animacy supports language comprehension and production. In three experiments, participants were presented with randomly generated dative-like sentences or lists in which part of speech, verb biases, and animacy of a single word were manipulated. Participants were more likely to recall words in the correct position when presented with a verb over a noun in the verb position, a good dative verb over an intransitive verb in the verb position, and an animate noun over an inanimate noun in the subject noun position. These results demonstrate that interactions between words and their context in the form of lexico-syntactic constraints influence verbal working memory.
Journal Name: Memory & Cognition
Subject(s) / Keyword(s): Verbal working memory · Long-term memory · Serial recall · Language
Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Most theories of verbal working memory recognize that language comprehension and production processes play a role in word memory for familiar sequences, but not for novel lists of nouns. Some language emergent theories propose that language processes can support verbal working memory even for novel sequences. Through corpus analyses, we identify sequences of two nouns that resemble patterns in natural language, even though the sequences are novel. We present two experiments demonstrating better recall in college students for these novel sequences over the same words in reverse order. In a third experiment, we demonstrate better recognition of the order of these sequences over a longer time scale. These results suggest that verbal working memory and recognition of order over a delay are influenced by language knowledge processes, even for novel memoranda that approximate the noun lists typically employed in memory experiments.

  2. Abstract

    Noun incorporation is commonly thought to avoid the weak compositionality of compounds because it involves conjunction of an argument noun with the incorporating verb. However, it is weakly compositional in two ways. First, the noun's entity argument needs to be bound or saturated, but previous accounts fail to adequately ensure that it is. Second, non-arguments are often incorporated in many languages, and their thematic role is available for contextual selection. We show that these two weaknesses are actually linked. We focus on the Kiowa language, which generally bars objects from incorporation but allows non-arguments. We show that a mediating relation is required to semantically link the noun to the verb. Absent a relation, the noun's entity argument is not saturated, and the entire expression is uninterpretable. The mediating relation for non-objects also assigns the noun a thematic role instead of a postposition. Speakers can choose this role freely, subject to independent constraints from the pragmatics, syntax, and semantics. Objects in Kiowa are in fact allowed to incorporate in certain environments, but we show that these all independently involve a mediating relation. The mediating relation for objects quantifies over the noun and links the noun+verb construction to the rest of the clause. The head that introduces this relation re-categorizes the verb in the syntactic derivation. Essentially, we demonstrate two distinct mechanisms for noun incorporation. Having derived the distribution of incorporation in Kiowa, we apply the same relations to derive constraints on English complex verbs and synthetic compounds, which exhibit most of the same constraints as Kiowa noun incorporation. We also look at languages with routine object incorporation, and show how the transitivity of the verb depends on whether the v° head introducing the external argument assigns case to the re-categorized verb.
  3. Native adult speakers of a language can produce grammatical sentences fluently, effortlessly, and with relatively few errors. These characteristics make the highly practiced task of speaking a viable candidate for an automatic process, i.e., one independent of cognitive control. However, recent studies have suggested that some aspects of production, such as lexical retrieval and tailoring speech to an addressee, may depend on the speaker's inhibitory control abilities. Less clear is the dependence of syntactic operations on inhibitory control processes. Using both a direct manipulation of inhibitory control demands and an analysis of individual differences, we show that one of the most common syntactic operations, producing correct subject-verb agreement, requires inhibitory control when a singular subject noun competes with a plural local noun, as in "The snake next to the purple elephants is green." This finding calls for the integration of inhibitory control mechanisms into models of agreement production, and more generally into theories of syntactic production.
  4. Animacy is a necessary property for a referent to be an agent, and thus animacy detection is useful for a variety of natural language processing tasks, including word sense disambiguation, co-reference resolution, semantic role labeling, and others. Prior work treated animacy as a word-level property, and has developed statistical classifiers to classify words as either animate or inanimate. We discuss why this approach to the problem is ill-posed, and present a new approach based on classifying the animacy of co-reference chains. We show that simple voting approaches to inferring the animacy of a chain from its constituent words perform relatively poorly, and then present a hybrid system merging supervised machine learning (ML) and a small number of hand-built rules to compute the animacy of referring expressions and co-reference chains. This method achieves state-of-the-art performance. The supervised ML component leverages features such as word embeddings over referring expressions, parts of speech, and grammatical and semantic roles. The rules take into consideration parts of speech and the hypernymy structure encoded in WordNet. The system achieves an F1 of 0.88 for classifying the animacy of referring expressions, which is comparable to state-of-the-art results for classifying the animacy of words, and achieves an F1 of 0.75 for classifying the animacy of co-reference chains themselves. We release our training and test dataset, which includes 142 texts (all narratives) comprising 156,154 words, 34,698 referring expressions, and 10,941 co-reference chains. We test the method on a subset of the OntoNotes dataset, showing, using manual sampling, that animacy classification is 90% ± 2% accurate for co-reference chains, and 92% ± 1% for referring expressions. The data also contains 46 folktales, which present an interesting challenge because they often involve characters who are members of traditionally inanimate classes (e.g., stoves that walk, trees that talk). We show that our system is able to detect the animacy of these unusual referents with an F1 of 0.95.
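    The simple voting baseline that the abstract reports as a weak comparison point can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation; the word-level animacy lexicon and the example chain are invented for demonstration.

    ```python
    from collections import Counter

    def chain_animacy_by_vote(chain, word_labels):
        """Infer a co-reference chain's animacy by majority vote over the
        word-level animacy labels of the words in its referring expressions.
        Words absent from the lexicon simply do not vote."""
        votes = [word_labels[w]
                 for expr in chain
                 for w in expr.lower().split()
                 if w in word_labels]
        if not votes:
            return "unknown"
        return Counter(votes).most_common(1)[0][0]

    # Hypothetical word-level lexicon and a folktale-style chain whose
    # referent (a walking stove) is actually animate in context.
    word_labels = {"stove": "inanimate", "old": "inanimate",
                   "she": "animate", "it": "inanimate"}
    chain = ["the old stove", "she", "it"]
    print(chain_animacy_by_vote(chain, word_labels))  # -> inanimate
    ```

    On a chain like this, word-level voting labels the walking stove "inanimate" (three votes to one), which illustrates why the hybrid chain-level system described above outperforms voting on the folktale portion of the data.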
  5. Abstract

    Children use syntax to learn verbs, in a process known as syntactic bootstrapping. The structure-mapping account proposes that syntactic bootstrapping begins with a universal bias to map each noun phrase in a sentence onto a participant role in a structured conceptual representation of an event. Equipped with this bias, children interpret the number of noun phrases accompanying a new verb as evidence about the semantic predicate-argument structure of the sentence, and therefore about the meaning of the verb. In this paper, we first review evidence for the structure-mapping account, and then discuss challenges to the account arising from the existence of languages that allow verbs' arguments to be omitted, such as Korean. These challenges prompt us to (a) refine our notion of the distributional learning mechanisms that create representations of sentence structure, and (b) propose that an expectation of discourse continuity allows children to gather linguistic evidence for each verb's arguments across sentences in a coherent discourse. Taken together, the proposed learning mechanisms and biases sketch a route whereby simple aspects of sentence structure guide verb learning from the start of multi-word sentence comprehension, and do so even if some of the new verb's arguments are omitted due to discourse redundancy.
