

Title: Mapping language onto mental representations of object locations in transfer-of-possession events: A visual-world study using webcam-based eye-tracking
Source-goal events involve an object moving from the Source to the Goal. In this work, we focus on the representation of the object, which has received relatively less attention in the study of Source-goal events. Specifically, this study aims to investigate the mapping between language and mental representations of object locations in transfer-of-possession events (e.g., throwing, giving). We investigate two different grammatical factors that may influence the representation of object location in transfer-of-possession events: (a) grammatical aspect (e.g., threw vs. was throwing) and (b) verb semantics (guaranteed transfer, e.g., give, vs. no guaranteed transfer, e.g., throw). We conducted a visual-world eye-tracking study using a novel webcam-based eye-tracking paradigm (WebGazer; Papoutsaki et al., 2016) to investigate how grammatical aspect and verb semantics in the linguistic input guide the real-time and final representations of object locations. We show that grammatical cues guide the real-time and final representations of object locations.
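Visual-world analyses of the kind described here typically summarize gaze data as the proportion of looks to each area of interest (AOI, e.g., the Source vs. the Goal) within successive time bins after a linguistic cue. The sketch below is a minimal illustration of that computation, not the authors' actual pipeline; the `(time_ms, aoi)` sample format and bin size are assumptions for the example.

```python
from collections import defaultdict

def fixation_proportions(samples, bin_ms=100):
    """Proportion of gaze samples on each area of interest (AOI)
    per time bin, as in visual-world fixation-proportion plots.

    `samples` is a list of (time_ms, aoi) tuples, where `aoi` is a
    label such as "source" or "goal", or None for off-object gaze.
    (Hypothetical data format, for illustration only.)
    """
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for t, aoi in samples:
        b = int(t // bin_ms) * bin_ms   # left edge of the time bin
        totals[b] += 1                  # all samples count toward the denominator
        if aoi is not None:
            counts[b][aoi] += 1
    return {b: {aoi: n / totals[b] for aoi, n in aois.items()}
            for b, aois in counts.items()}

# Six samples spanning two 100 ms bins
samples = [(10, "source"), (40, "source"), (90, "goal"),
           (110, "goal"), (150, "goal"), (190, None)]
props = fixation_proportions(samples)
```

In the first bin, two of three samples fall on the Source AOI, so `props[0]["source"]` is 2/3; off-object samples lower the proportions without adding an AOI key.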
Award ID(s):
2041261
PAR ID:
10409452
Author(s) / Creator(s):
;
Editor(s):
Culbertson, J.; Perfors, A.; Rabagliati, H.; Ramenzoni, V.
Date Published:
Journal Name:
Proceedings of the Annual Meeting of the Cognitive Science Society
Volume:
44
Issue:
44
Page Range / eLocation ID:
1270-1276
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquires sign language in (early) childhood, but some learn sign language later—a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in sentences that included (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure. 
  2. This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error analysis of specific grammatical phenomena reveals that both Lau et al.'s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.
  3.
    Languages typically provide more than one grammatical construction to express certain types of messages. A speaker's choice of construction is known to depend on multiple factors, including the choice of main verb - a phenomenon known as verb bias. Here we introduce DAIS, a large benchmark dataset containing 50K human judgments for 5K distinct sentence pairs in the English dative alternation. This dataset includes 200 unique verbs and systematically varies the definiteness and length of arguments. We use this dataset, as well as an existing corpus of naturally occurring data, to evaluate how well recent neural language models capture human preferences. Results show that larger models perform better than smaller models, and transformer architectures (e.g., GPT-2) tend to outperform recurrent architectures (e.g., LSTMs) even under comparable parameter and training settings. Additional analyses of internal feature representations suggest that transformers may better integrate specific lexical information with grammatical constructions.
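Evaluating a language model against paired human judgments like those in DAIS typically reduces to a ranking comparison: for each sentence pair, does the model assign a higher score (e.g., log probability) to the variant humans preferred? The sketch below shows that comparison in its simplest form; the tuple format and the idea of a single "human choice" per pair are simplifying assumptions for illustration, not the paper's exact protocol.

```python
def preference_accuracy(pairs):
    """Fraction of sentence pairs where the model prefers (assigns a
    higher score to) the same variant that humans preferred.

    Each pair is (model_score_a, model_score_b, human_choice), where
    the scores are e.g. log probabilities of the two alternation
    variants and human_choice is "a" or "b".
    (Hypothetical input format, for illustration only.)
    """
    correct = 0
    for score_a, score_b, human in pairs:
        model = "a" if score_a > score_b else "b"
        correct += model == human
    return correct / len(pairs)

# Three hypothetical dative pairs with log-probability scores
pairs = [(-1.0, -2.0, "a"),   # model and humans agree on variant a
         (-3.0, -1.5, "b"),   # model and humans agree on variant b
         (-2.0, -1.0, "a")]   # model prefers b, humans preferred a
acc = preference_accuracy(pairs)
```

Comparing this accuracy across models of different sizes and architectures is one straightforward way to operationalize "capturing human preferences" for an alternation.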
  4. Human brains grasp the gists of visual scenes from a single glance, but to what extent is this possible for language? While we typically think of language in terms of sequential speech, our everyday experience involves numerous rapidly flashing written notifications, which we understand instantly. What do our brains detect in the first few hundred milliseconds after seeing such a stimulus? We flashed short sentences during magnetoencephalography measurement, revealing sentence-sensitive neural activity in left temporal cortex within 130 milliseconds. These signals emerged for subject-verb-object sentences regardless of grammatical or semantic well-formedness, suggesting that at-a-glance language comprehension begins by detecting basic phrase structure, independent of meaning or other grammatical details. Our findings unveil one aspect of how our brains process information rapidly in today’s visually saturated world. 
  5. Abstract Dialect differences between African American English (AAE) and Mainstream American English (MAE) impact how children comprehend sentences. However, research on real-time sentence processing has the potential to reveal the underlying causes of these differences. This study used eye tracking, which measures how children interpret linguistic features as a sentence unfolds, and examined how AAE- and MAE-speaking children processed "was" and "were," a morphology feature produced differently in MAE and AAE. Fifty-nine participants, ages 7;8 to 11;0 years, completed standardized measures of dialect density and receptive vocabulary. In the eye tracking task, participants heard sentences in MAE with either unambiguous (e.g., "Jeremiah") or ambiguous (e.g., "Carolyn May") subjects, and eye movements were measured to singular (image of one person) or plural (image of two people) referents. After the onset of the auxiliary verb, AAE-speaking children were sensitive to "was" and "were" when processing sentences but were less likely than MAE-speaking children to use "was" as a basis for updating initial predictions of plural referents. Among African American children, dialect density was predictive of sensitivity to "was" when processing sentences. Results suggest that linguistic mismatch impacts how contrastive verb morphology is used to update initial interpretations of MAE sentences.