Title: Mapping language onto mental representations of object locations in transfer-of-possession events: A visual-world study using webcam-based eye-tracking
Source-goal events involve an object moving from the Source to the Goal. In this work, we focus on the representation of the object, which has received relatively little attention in the study of Source-goal events. Specifically, this study investigates the mapping between language and mental representations of object locations in transfer-of-possession events (e.g. throwing, giving). We examine two grammatical factors that may influence the representation of object location in such events: (a) grammatical aspect (e.g. threw vs. was throwing) and (b) verb semantics (guaranteed transfer, e.g. give, vs. no guaranteed transfer, e.g. throw). We conducted a visual-world eye-tracking study using a novel webcam-based eye-tracking paradigm (WebGazer; Papoutsaki et al., 2016) to investigate how grammatical aspect and verb semantics in the linguistic input guide the real-time and final representations of object locations. We show that both grammatical aspect and verb semantics guide the real-time and final representations of object locations.
Award ID(s): 2041261
PAR ID: 10409452
Author(s) / Creator(s):
Editor(s): Culbertson, J.; Perfors, A.; Rabagliati, H.; Ramenzoni, V.
Date Published:
Journal Name: Proceedings of the Annual Meeting of the Cognitive Science Society
Volume: 44
Issue: 44
Page Range / eLocation ID: 1270-1276
Format(s): Medium: X
Sponsoring Org: National Science Foundation
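To make the method concrete, the sketch below shows how fixation proportions to the Source and Goal regions are typically computed in a visual-world analysis. It is an illustration only, not the authors' code: the sample data, column names, and bin width are hypothetical, and real WebGazer output would first have to be mapped onto screen regions of interest.

```python
# Minimal sketch of a visual-world time-course analysis (illustrative,
# not the authors' code): bin gaze samples relative to verb onset and
# compute the proportion of looks to each region of interest (ROI).
import pandas as pd

# Hypothetical gaze samples, one row per eye-tracking sample; "roi" is
# the screen region the gaze fell in on that sample.
samples = pd.DataFrame({
    "trial":   [1, 1, 1, 1, 2, 2, 2, 2],
    "time_ms": [0, 50, 100, 150, 0, 50, 100, 150],  # relative to verb onset
    "roi":     ["source", "source", "goal", "goal",
                "source", "goal", "goal", "goal"],
})

BIN_MS = 100  # bin width; webcam trackers sample more coarsely than lab trackers
samples["bin"] = (samples["time_ms"] // BIN_MS) * BIN_MS

# Count samples per (bin, ROI), then normalize each bin's counts to proportions.
counts = samples.groupby(["bin", "roi"]).size().unstack(fill_value=0)
proportions = counts.div(counts.sum(axis=1), axis=0)
print(proportions)  # one row per time bin, one column per ROI
```

Plotting these proportions bin by bin yields the time-course curves that visual-world studies compare across aspect and verb conditions.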
More Like this
  1. Acquisition of natural language has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquires sign language in (early) childhood, but some learn sign language later—a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in sentences that included (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, with the direction of the effect depending on the specific linguistic structure.
  2. This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences from the published linguistics literature, each labeled as grammatical or ungrammatical. As baselines, we train several recurrent neural network models on acceptability classification and find that our models outperform the unsupervised models of Lau et al. (2016) on CoLA. Error analysis of specific grammatical phenomena reveals that both Lau et al.'s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.
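As a concrete, deliberately simplified illustration of the task, the sketch below trains a bag-of-n-grams baseline for binary acceptability classification. It is not one of the paper's recurrent models, and the four sentences and labels are invented stand-ins for CoLA's released data.

```python
# Minimal sketch of binary acceptability classification (illustrative
# baseline, not the paper's recurrent models). The tiny dataset below is
# made up; CoLA itself ships as files with sentence/label columns.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The cat sat on the mat.",           # acceptable
    "Cat the mat on sat the.",           # unacceptable
    "She gave the book to her friend.",  # acceptable
    "Gave she book the friend to her.",  # unacceptable
]
labels = [1, 0, 1, 0]  # 1 = grammatical, 0 = ungrammatical

# Unigram + bigram counts feed a logistic-regression classifier.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["The dog chased the ball."]))
```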
  3. Languages typically provide more than one grammatical construction to express certain types of messages. A speaker's choice of construction is known to depend on multiple factors, including the choice of main verb, a phenomenon known as verb bias. Here we introduce DAIS, a large benchmark dataset containing 50K human judgments for 5K distinct sentence pairs in the English dative alternation. This dataset includes 200 unique verbs and systematically varies the definiteness and length of arguments. We use this dataset, as well as an existing corpus of naturally occurring data, to evaluate how well recent neural language models capture human preferences. Results show that larger models perform better than smaller models, and transformer architectures (e.g. GPT-2) tend to outperform recurrent architectures (e.g. LSTMs) even under comparable parameter and training settings. Additional analyses of internal feature representations suggest that transformers may better integrate specific lexical information with grammatical constructions.
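The preference measurement this line of work relies on can be sketched as follows: score both alternants of a dative pair with a language model and compare total log-probabilities. This is a minimal sketch of the general technique, not the DAIS evaluation code, and the sentence pair is hypothetical.

```python
# Minimal sketch of measuring a model's dative-alternation preference
# (illustrative, not the DAIS evaluation code): score both alternants of
# a hypothetical sentence pair with GPT-2 and compare log-probabilities.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean per-token NLL.
        loss = model(ids, labels=ids).loss
    # Multiply by the number of predicted tokens to get the total log-prob.
    return -loss.item() * (ids.size(1) - 1)

double_object = "The teacher gave the student a book."
prep_dative = "The teacher gave a book to the student."
diff = sentence_logprob(double_object) - sentence_logprob(prep_dative)
print(f"preference for double-object construction: {diff:+.2f} nats")
```

A positive difference indicates the model prefers the double-object alternant; aggregating such differences over many pairs is how model preferences are compared with human judgments.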
  4. Human brains grasp the gists of visual scenes from a single glance, but to what extent is this possible for language? While we typically think of language in terms of sequential speech, our everyday experience involves numerous rapidly flashing written notifications, which we understand instantly. What do our brains detect in the first few hundred milliseconds after seeing such a stimulus? We flashed short sentences during magnetoencephalography measurement, revealing sentence-sensitive neural activity in left temporal cortex within 130 milliseconds. These signals emerged for subject-verb-object sentences regardless of grammatical or semantic well-formedness, suggesting that at-a-glance language comprehension begins by detecting basic phrase structure, independent of meaning or other grammatical details. Our findings unveil one aspect of how our brains process information rapidly in today’s visually saturated world. 
  5. We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments. Additionally, we provide novel rates for the single-task setting. 
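Schematically, and with notation assumed here rather than taken from the paper, the adversarial risk being bounded and the shared-representation assumption can be written as:

```latex
% Schematic only; the notation is assumed, not taken from the paper.
% Adversarial risk of a predictor h under an l_p perturbation budget
% epsilon, and the MTRL assumption of task-specific linear heads w_t
% over a shared representation phi:
\[
  R_{\mathrm{adv}}(h) \;=\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[\, \max_{\|\delta\|_p \le \epsilon} \ell\bigl(h(x+\delta),\, y\bigr) \Bigr],
  \qquad
  h_t(x) \;=\; \langle w_t,\, \phi(x) \rangle .
\]
```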