Title: The lexical and grammatical sources of neg-raising inferences
We investigate neg(ation)-raising inferences, wherein negation on a predicate can be interpreted as though in that predicate’s subordinate clause. To do this, we collect a large-scale dataset of neg-raising judgments for effectively all English clause-embedding verbs and develop a model to jointly induce the semantic types of verbs and their subordinate clauses and the relationship of these types to neg-raising inferences. We find that some neg-raising inferences are attributable to properties of particular predicates, while others are attributable to subordinate clause structure.
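The joint induction of verb types and subordinate-clause types described in the abstract can be illustrated with a simple co-clustering sketch. The following is a minimal, self-contained illustration, not the paper’s actual model: it alternately assigns verbs and clause frames to discrete types over a synthetic verb-by-frame matrix of neg-raising ratings, so that each (verb type, frame type) pair carries its own neg-raising rate. All sizes and data here are invented for illustration.

    # A minimal co-clustering sketch (synthetic data; NOT the paper's model):
    # jointly assign verbs and subordinate-clause frames to discrete types so
    # that each (verb type, frame type) cell has its own neg-raising rate.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for mean neg-raising ratings in [0, 1]:
    # rows = clause-embedding verbs, columns = subordinate-clause frames.
    n_verbs, n_frames, K_v, K_f = 40, 12, 3, 2
    true_v = rng.integers(K_v, size=n_verbs)
    true_f = rng.integers(K_f, size=n_frames)
    cell = rng.uniform(size=(K_v, K_f))
    noise = rng.normal(0.0, 0.05, size=(n_verbs, n_frames))
    R = np.clip(cell[true_v][:, true_f] + noise, 0.0, 1.0)

    # Alternating reassignment: fix frame types, refit verb types, and vice versa.
    v_type = rng.integers(K_v, size=n_verbs)
    f_type = rng.integers(K_f, size=n_frames)
    for _ in range(50):
        # Mean neg-raising rate per (verb type, frame type) block.
        M = np.zeros((K_v, K_f))
        for a in range(K_v):
            for b in range(K_f):
                block = R[v_type == a][:, f_type == b]
                M[a, b] = block.mean() if block.size else 0.5
        # Reassign each verb to the type whose profile best fits its row.
        v_type = np.array([
            np.argmin([((R[i] - M[a][f_type]) ** 2).sum() for a in range(K_v)])
            for i in range(n_verbs)
        ])
        # Reassign each frame to the type whose profile best fits its column.
        f_type = np.array([
            np.argmin([((R[:, j] - M[v_type, b]) ** 2).sum() for b in range(K_f)])
            for j in range(n_frames)
        ])

    print("induced verb types:", v_type)
    print("neg-raising rate per (verb type, frame type):")
    print(M.round(2))

On this toy setup, some neg-raising behavior falls out of the type a verb belongs to and some out of the type its frame belongs to, mirroring the lexical-versus-structural split that the abstract reports.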
Award ID(s):
1748969
NSF-PAR ID:
10176035
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Society for Computation in Linguistics
Volume:
3
Issue:
1
Page Range / eLocation ID:
220-233
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We investigate which patterns of lexically triggered doxastic, bouletic, neg(ation)-raising, and veridicality inferences are (un)attested across clause-embedding verbs in English. To carry out this investigation, we use a multiview mixed effects mixture model to discover the inference patterns captured in three lexicon-scale inference judgment datasets: two existing datasets, MegaVeridicality and MegaNegRaising, which capture veridicality and neg-raising inferences across a wide swath of the English clause-embedding lexicon, and a new dataset, MegaIntensionality, which similarly captures doxastic and bouletic inferences. We focus in particular on inference patterns that are correlated with morphosyntactic distribution, as determined by how well those patterns predict the acceptability judgments in the MegaAcceptability dataset. We find that there are 15 such patterns attested. Similarities among these patterns suggest the possibility of underlying lexical semantic components that give rise to them. We use principal component analysis to discover these components and suggest generalizations that can be derived from them. 
  2. This article reports on the existence of actual clause morphology and interpretation in selected Bantu languages. Essentially, we treat the actual clause as an embedded assertion whereby the utterer is committed not only to the truth of the proposition described by the actual clause but also to its realization: it must be the case that the event in the proposition cannot be unrealized (or describe a future state) at the time of the utterance. The Bantu languages in our sample mark the actual clause with a verbal prefix in a typical tense position on the lower verb. This prefix occurs as a single vowel or as a consonant/vowel combination. When the actual clause is a syntactic complement, it co-occurs with verbs that may be incompatible with indicative clauses. The clause is also semantically distinct from other clause types such as the infinitive and the subjunctive. Our analysis of actual clauses as assertions explains why they are not complements of factive verbs. We argue that the source of the speaker’s commitment to truth arises in part from the way actual clauses are licensed by the clauses they are dependent on. That is, we propose that actual clauses are licensed by a “contingent antecedent clause” which is taken to be a precondition for the actual clause assertion. Our approach generalizes to explain other non-complement uses of actual/narrative clause types, typically described as “narrative” tense in Bantu, which bear the exact same morphology.
  3. We investigate neural models’ ability to capture lexicosyntactic inferences: inferences triggered by the interaction of lexical and syntactic information. We take the task of event factuality prediction as a case study and build a factuality judgment dataset for all English clause-embedding verbs in various syntactic contexts. We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.
  4. Natural language inference (NLI) datasets (e.g., MultiNLI) were collected by soliciting hypotheses for a given premise from annotators. Such data collection led to annotation artifacts: systems can identify the premise-hypothesis relationship without observing the premise (e.g., negation in a hypothesis being indicative of contradiction). We address this problem by recasting the CommitmentBank for NLI, which contains items involving reasoning over the extent to which a speaker is committed to complements of clause-embedding verbs under entailment-canceling environments (conditional, negation, modal, and question). Instead of being constructed to stand in certain relationships with the premise, hypotheses in the recast CommitmentBank are the complements of the clause-embedding verb in each premise, leading to no annotation artifacts in the hypothesis. A state-of-the-art BERT-based model performs well on the recast CommitmentBank, with 85% F1. However, analysis of model behavior shows that the BERT models still do not capture the full complexity of pragmatic reasoning, nor do they encode some of the linguistic generalizations, highlighting room for improvement.
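Item 1 above mentions running principal component analysis over the attested inference patterns to look for underlying lexical semantic components. A minimal sketch of that kind of analysis on synthetic data, not the actual MegaIntensionality pipeline, might look like the following; the matrix shape and component count are assumptions for illustration.

    # A minimal PCA sketch on synthetic data (not the actual pipeline above):
    # decompose a verb-by-inference-pattern matrix into a few components.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)

    # Rows = clause-embedding verbs; columns = strengths of the 15 attested
    # inference patterns (doxastic, bouletic, neg-raising, veridicality).
    n_verbs, n_patterns = 100, 15
    X = rng.uniform(size=(n_verbs, n_patterns))

    pca = PCA(n_components=4)
    scores = pca.fit_transform(X)  # each verb's loading on each component
    print("variance explained:", pca.explained_variance_ratio_.round(3))
    print("pattern weights on component 1:", pca.components_[0].round(2))

Components whose pattern weights admit a coherent interpretation would then be candidates for the underlying lexical semantic components that the abstract describes.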