Title: Towards Understanding Sustained Neural Activity Across Syntactic Dependencies
Abstract

Sustained anterior negativities have been the focus of much neurolinguistic research concerned with the language-memory interface, but what neural computations do they actually reflect? During the comprehension of sentences with long-distance dependencies between elements (such as object wh-questions), prior event-related potential work has demonstrated sustained anterior negativities (SANs) across the dependency region. SANs have traditionally been interpreted as an index of the working memory resources responsible for storing the first element (e.g., the wh-phrase) until the second element (e.g., the verb) is encountered and the two can be integrated. However, it is also known that humans pursue top-down approaches in processing long-distance dependencies, predicting units and structures before actually encountering them. This study tests the hypothesis that SANs are a more general neural index of syntactic prediction. Across three experiments, we evaluated SANs in traditional wh-dependency contrasts, but also in sentences in which subordinating adverbials (e.g., although) trigger a prediction for a second clause, compared to temporal adverbials (e.g., today) that do not. We find no SAN associated with subordinating adverbials, contra the syntactic prediction hypothesis. More surprisingly, we observe SANs across matrix questions but not embedded questions. Since both involved identical long-distance dependencies, these results are also inconsistent with the traditional syntactic working memory account of the SAN. We suggest that the more general hypothesis that sustained neural activity supports working memory can nonetheless be maintained if the sustained anterior negativity reflects working memory encoding at the level of non-linguistic discourse representation, rather than at the sentence level.
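To make the dependent measure concrete, the sketch below shows how a sustained anterior negativity effect is typically quantified: mean ERP amplitude over anterior electrodes in an extended time window, compared across conditions. The channel montage, time window, and simulated data are illustrative assumptions, not this study's actual recording setup or analysis pipeline.

```python
# Minimal sketch of quantifying a sustained anterior negativity (SAN):
# mean amplitude over anterior electrodes in an extended window, compared
# across conditions. All values below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                        # sampling rate, Hz
times = np.arange(-0.2, 3.0, 1 / sfreq)            # seconds around onset
channels = ["F3", "Fz", "F4", "P3", "Pz", "P4"]
anterior = [0, 1, 2]                               # F3, Fz, F4
window = (times >= 0.3) & (times <= 2.5)           # sustained dependency region

def simulate(n_trials, san_uv):
    """Fake single-trial EEG: (n_trials, n_channels, n_times), microvolts."""
    eeg = rng.normal(0.0, 5.0, (n_trials, len(channels), times.size))
    for ch in anterior:                            # add a sustained shift
        eeg[:, ch, window] += san_uv               # at anterior sites only
    return eeg

def mean_anterior(eeg):
    """Mean amplitude over anterior channels within the sustained window."""
    return eeg[:, anterior, :][:, :, window].mean()

wh_dependency = simulate(40, san_uv=-2.0)          # condition with a SAN
control       = simulate(40, san_uv=0.0)           # baseline condition
print(f"wh-dependency: {mean_anterior(wh_dependency):+.2f} uV")
print(f"control:       {mean_anterior(control):+.2f} uV")
# A reliably more negative anterior mean in the dependency condition,
# sustained across the dependency region, is the SAN signature.
```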

 
Award ID(s): 1749407
NSF-PAR ID: 10362976
Publisher / Repository: DOI prefix 10.1162
Journal Name: Neurobiology of Language
Volume: 3
Issue: 1
ISSN: 2641-4368
Page Range: pp. 87-108
Sponsoring Org: National Science Foundation
More Like this
  1. This paper studies the task of comparative preference classification (CPC). Given two entities in a sentence, our goal is to classify whether the first (or the second) entity is preferred over the other, or whether no comparison is expressed between the two entities. Existing works either fail to learn entity-aware representations well, and thus cannot handle sentences involving multiple entity pairs, or use sequential modeling approaches that are unable to capture long-range dependencies between the entities. Some also use traditional machine learning approaches that do not generalize well. This paper proposes a novel Entity-aware Dependency-based Deep Graph Attention Network (ED-GAT) that employs multi-hop graph attention over a dependency-graph representation of the sentence, leveraging both the semantic information from word embeddings and the syntactic information from the dependency graph. Empirical evaluation shows that the proposed model achieves state-of-the-art performance on comparative preference classification.
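A minimal sketch of the core mechanism just described, masked graph attention over a dependency tree, appears below. This is not the authors' implementation; the dimensions, toy tree, and two-hop stacking are illustrative assumptions.

```python
# Minimal sketch of masked graph attention over a dependency tree, in the
# spirit of ED-GAT (illustrative only, not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyGATLayer(nn.Module):
    """One attention hop: each word attends only to its dependency-tree
    neighbours (self-loops included via the adjacency matrix)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (n_words, dim); adj: (n_words, n_words) 0/1 adjacency matrix
        n = h.size(0)
        z = self.proj(h)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = F.leaky_relu(self.attn(pairs).squeeze(-1))   # (n, n)
        logits = logits.masked_fill(adj == 0, float("-inf"))  # mask non-edges
        weights = torch.softmax(logits, dim=-1)
        return torch.relu(weights @ z)

# Toy sentence of 5 words with 16-dim embeddings; edges of a small tree.
torch.manual_seed(0)
h = torch.randn(5, 16)
adj = torch.eye(5)
for head, dep in [(1, 0), (1, 2), (1, 4), (4, 3)]:
    adj[head, dep] = adj[dep, head] = 1.0
gat1, gat2 = DependencyGATLayer(16), DependencyGATLayer(16)
out = gat2(gat1(h, adj), adj)   # two stacked hops = multi-hop attention:
print(out.shape)                # each word now sees context 2 edges away
```

Stacking layers yields the multi-hop behavior: after k hops, an entity's representation can absorb information from words up to k dependency edges away, which is how long-range entity-pair dependencies are captured without sequential modeling.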
  2. This study investigates the processing of wh-dependencies in English by native speakers and advanced Mandarin Chinese-speaking learners. We examined processing at a filled gap site that was either in a licit position (non-island) or located inside an island, a grammatically unlicensed position. Native speakers showed an N400 in the non-island condition, which we take as evidence of gap prediction; no N400 emerged within the island. Learners showed a P600 in the non-island condition, suggesting that they did not predict a gap but rather experienced syntactic integration difficulty. Like native speakers, learners showed no effects inside the island. Island sensitivity was also observed for both groups in an offline acceptability judgment task. We also explored whether event-related potential (ERP) responses were related to attentional control (AC), a cognitive ability that has been linked to predictive processing in native speakers, in order to examine whether variability in processing is similarly explained in learners and native speakers. Results showed that increased AC was associated with larger N400s for native speakers and larger P600s for learners in the non-island condition, suggesting that greater AC may be related to prediction for native speakers and to integration effort for learners. Overall, learners demonstrated island sensitivity both offline and online, suggesting that second language (L2) processing is indeed grammatically guided. However, the ERP results suggest that predictive processing in the resolution of wh-dependencies may be limited, at least for learners whose first language (L1) does not instantiate overt wh-movement.

     
  3. This paper studies the task of Relation Extraction (RE), which aims to identify the semantic relation between two entity mentions in text. In deep learning models for RE, it has proven beneficial to incorporate syntactic structure from the dependency trees of the input sentences. In such models, dependency trees are often used either to directly structure the network architecture or to obtain dependency relations between word pairs so that syntactic information can be injected via multi-task learning. The major problems with these approaches are a lack of generalization beyond the syntactic structures seen in the training data and a failure to capture the syntactic importance of words for RE. To overcome these issues, we propose a novel deep learning model for RE that uses dependency trees to extract syntax-based importance scores for the words, serving as a tree representation that introduces syntactic information into the model with greater generalization. In particular, we leverage Ordered-Neuron Long Short-Term Memory networks (ON-LSTM) to infer model-based importance scores for every word in the sentence, which are then regularized to be consistent with the syntax-based scores, enabling syntactic information injection. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on three RE benchmark datasets.
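The consistency idea above can be sketched as an auxiliary loss that pulls a model-predicted importance distribution toward a syntax-derived one. In the snippet below, a vanilla LSTM scorer stands in for the ON-LSTM master-gate scores, and the syntax-based scores use a simple tree-distance heuristic; both substitutions are assumptions, not the paper's method.

```python
# Schematic sketch of syntax-model consistency for relation extraction:
# an auxiliary loss aligns model-based word-importance scores with
# syntax-based ones derived from the dependency tree (illustrative only).
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

def tree_distances(heads, start):
    """BFS distances from `start` in the undirected dependency tree
    (heads[i] = head index of word i, -1 for the root)."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = [n] * n
    dist[start] = 0
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == n:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def syntax_importance(heads, e1, e2):
    """Words close (in tree distance) to either entity get higher scores."""
    d1, d2 = tree_distances(heads, e1), tree_distances(heads, e2)
    raw = torch.tensor([1.0 / (1 + min(a, b)) for a, b in zip(d1, d2)])
    return raw / raw.sum()

class ImportanceScorer(nn.Module):
    """Model-based importance over words (proxy for ON-LSTM master gates)."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                       # x: (1, n_words, dim)
        h, _ = self.rnn(x)
        return torch.softmax(self.score(h).squeeze(-1), dim=-1)

heads = [1, -1, 1, 4, 1, 4]                     # toy 6-word dependency tree
target = syntax_importance(heads, e1=0, e2=5)   # syntax-based scores, (6,)
model = ImportanceScorer(dim=8)
pred = model(torch.randn(1, 6, 8)).squeeze(0)   # model-based scores, (6,)
# Auxiliary consistency term, added to the usual RE classification loss.
consistency_loss = F.mse_loss(pred, target)
print(float(consistency_loss))
```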
  4. This paper describes the development of the first Universal Dependencies (UD) treebank for St. Lawrence Island Yupik, an endangered language spoken in the Bering Strait region. While the UD guidelines provided a general framework for our annotations, the rich morphology of this polysynthetic language made language-specific decisions necessary. Most notably, we annotated the corpus at the morpheme level as well as the word level; the morpheme-level annotation was produced using an existing morphological analyzer together with manual disambiguation. Comparing the two resulting annotation schemes, we argue that morpheme-level annotation is essential for polysynthetic languages like St. Lawrence Island Yupik: word-level annotation produces degenerate trees for some Yupik sentences and often fails to capture syntactic relations that are manifested at the morpheme level. Dependency parsing experiments provide further support for morpheme-level annotation. Implications for UD annotation of other polysynthetic languages are discussed.
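To see why word-level trees degenerate, consider a schematic comparison for a single polysynthetic word. The segmentation, glosses, part-of-speech tags, and dependency relations below are illustrative assumptions for exposition, not the treebank's actual analysis.

```python
# Schematic word-level vs. morpheme-level CoNLL-U-style rows for the classic
# Yupik example angyaghllangyugtuq, roughly 'he wants to acquire a big boat'.
# Segmentation, glosses, tags, and deprels are illustrative assumptions.

word_level = [
    # ID  FORM                 LEMMA  UPOS    HEAD  DEPREL
    (1, "angyaghllangyugtuq",  "_",   "VERB", 0,    "root"),
]   # a single token yields a degenerate one-node tree: the 'boat' and
    # 'big' meanings are invisible to the dependency structure

morpheme_level = [
    (1, "angya",  "boat",    "NOUN", 3, "obj"),    # incorporated object
    (2, "ghllag", "big",     "ADJ",  1, "amod"),   # augmentative modifier
    (3, "ng",     "acquire", "VERB", 0, "root"),   # verbalizing base
    (4, "yug",    "want",    "AUX",  3, "aux"),    # desiderative suffix
    (5, "tuq",    "IND.3SG", "AUX",  3, "aux"),    # intransitive 3sg ending
]

def to_conllu(rows):
    cols = "ID FORM LEMMA UPOS HEAD DEPREL".split()
    lines = ["\t".join(cols)]
    lines += ["\t".join(str(c) for c in row) for row in rows]
    return "\n".join(lines)

print(to_conllu(word_level))
print()
print(to_conllu(morpheme_level))
```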
  5. We introduce a graph polynomial that distinguishes the tree structures used to represent dependency grammar, together with a measure based on this polynomial representation for quantifying syntactic similarity. The polynomial encodes accurate and comprehensive information about the dependency structure and the dependency relations among words in a sentence, enabling in-depth analysis of dependency trees with standard data analysis tools. We apply the polynomial-based methods to sentences in the Parallel Universal Dependencies treebanks. Specifically, we compare the syntax of sentences and their translations across languages, and we perform a syntactic typology study of the languages available in the Parallel Universal Dependencies treebanks. We also demonstrate and discuss the potential of these methods for measuring the syntactic diversity of corpora.
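One concrete way to realize such a tree-distinguishing polynomial is the classic bivariate rooted-tree polynomial, defined by P(leaf) = x and P(v) = y + the product of P over the children of v; whether this matches the paper's exact construction is an assumption. The sketch below computes it for two dependency trees with different shapes.

```python
# Sketch of a tree-distinguishing polynomial on dependency trees
# (P(leaf) = x, P(v) = y + prod over children; possibly not the
# paper's exact construction).
import sympy as sp

x, y = sp.symbols("x y")

def tree_polynomial(heads):
    """heads[i] = index of word i's head (-1 for the root)."""
    n = len(heads)
    children = [[] for _ in range(n)]
    root = 0
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)
        else:
            root = i

    def poly(v):
        if not children[v]:
            return x
        prod = sp.Integer(1)
        for c in children[v]:
            prod *= poly(c)
        return y + prod

    return sp.expand(poly(root))

# Two toy dependency trees over four words with different structure:
chain = [-1, 0, 1, 2]   # root -> 1 -> 2 -> 3 (a chain)
star  = [-1, 0, 0, 0]   # root with three direct dependents (a star)
print(tree_polynomial(chain))                            # x + 3*y
print(tree_polynomial(star))                             # x**3 + y
print(tree_polynomial(chain) == tree_polynomial(star))   # False
```

Given such a representation, one plausible similarity measure (again, an assumption rather than the paper's definition) is a distance between the coefficient vectors of the two polynomials.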