

Title: The Role of Pragmatic and Discourse Context in Determining Argument Impact
Research in the social sciences and psychology has shown that the persuasiveness of an argument depends not only on the language employed, but also on attributes of the source/communicator, the audience, and the appropriateness and strength of the argument’s claims given the pragmatic and discourse context of the argument. Among these characteristics of persuasive arguments, prior work in NLP does not explicitly investigate the effect of the pragmatic and discourse context when determining argument quality. This paper presents a new dataset to initiate the study of this aspect of argumentation: it consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims. We further propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely only on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
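The contrast the abstract draws, between claim-only features and features that also encode a claim's pragmatic and discourse context, can be illustrated with a toy classifier. The sketch below is an assumption-laden stand-in, not the authors' model: the labels, the example claims, and the `[SEP]` joining convention are all invented, and the paper's notion of context along a line of argument is reduced here to concatenating ancestor claims.

```python
# Minimal sketch (not the paper's model): compare a claim-only classifier
# with one that also sees the claim's ancestors on the argument path.
# The labels, example claims, and "[SEP]" convention are all invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    {"claim": "A carbon tax measurably lowers emissions.",
     "path": ["Governments should act on climate change."],
     "impact": "impactful"},
    {"claim": "Some people simply dislike new taxes.",
     "path": ["Governments should act on climate change."],
     "impact": "not_impactful"},
]

claim_only   = [ex["claim"] for ex in examples]
with_context = [" ".join(ex["path"]) + " [SEP] " + ex["claim"] for ex in examples]
labels       = [ex["impact"] for ex in examples]

# Train the context-aware variant; swap in `claim_only` for the baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(with_context, labels)
print(model.predict(with_context))
```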
Award ID(s):
1741441
NSF-PAR ID:
10203988
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Page Range / eLocation ID:
5668 to 5678
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Systems for automatic argument generation and debate require the ability to (1) determine the stance of any claims employed in the argument and (2) assess the specificity of each claim relative to the argument context. Existing work on understanding claim specificity and stance, however, has been limited to the study of argumentative structures that are relatively shallow, most often consisting of a single claim that directly supports or opposes the argument thesis. In this paper, we tackle these tasks in the context of complex arguments on a diverse set of topics. In particular, our dataset consists of manually curated argument trees for 741 controversial topics covering 95,312 unique claims; lines of argument are generally of depth 2 to 6. We find that as the distance between a pair of claims increases along the argument path, determining the relative specificity of a pair of claims becomes easier and determining their relative stance becomes harder. 
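As a rough illustration of the pairwise setup in (1), relative stance prediction for a pair of claims on the same argument path, the sketch below trains a classifier on the pair text plus their tree distance as a numeric feature, echoing the finding that distance affects task difficulty. Everything here, the claims, labels, and feature choices, is hypothetical.

```python
# Illustrative sketch of the pairwise setup (not the authors' code):
# predict the relative stance of two claims on the same argument path,
# adding their tree distance as a numeric feature. All data is made up.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pairs = [
    ("Handguns should be banned.", "Bans reduce accidental shootings.", 1, "support"),
    ("Handguns should be banned.", "Bans infringe on lawful self-defense.", 1, "oppose"),
]

vec = TfidfVectorizer()
X_text = vec.fit_transform([a + " [SEP] " + b for a, b, _, _ in pairs])
X_dist = csr_matrix(np.array([[d] for _, _, d, _ in pairs], dtype=float))
X = hstack([X_text, X_dist])          # text features + path distance
y = [label for *_, label in pairs]

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```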
  2. Most existing methods for automatic fact-checking start with a precompiled list of claims to verify. We investigate the understudied problem of determining which statements in news articles are worth fact-checking. We annotate the argument structure of 95 news articles in the climate change domain that are fact-checked by climate scientists at climatefeedback.org. We release the first multi-layer annotated corpus for both argumentative discourse structure (argument types and relations) and for fact-checked statements in news articles. We discuss the connection between argument structure and check-worthy statements and develop several baseline models for detecting check-worthy statements in the climate change domain. Our preliminary results show that using information about argumentative discourse structure yields a slight but statistically significant improvement over a baseline that uses only local discourse structure.
  3. We investigate the problem of sentence-level supporting argument detection from relevant documents for user-specified claims. A dataset containing claims and associated citation articles is collected from the online debate website idebate.org. We then manually label sentence-level supporting arguments from the documents along with their types: study, factual, opinion, or reasoning. We further characterize arguments of different types and explore whether leveraging type information can facilitate the supporting argument detection task. Experimental results show that a LambdaMART (Burges, 2010) ranker that uses features informed by argument types outperforms the same ranker trained without type information.
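The ranking setup in (3) can be approximated with LightGBM's lambdarank objective, a gradient-boosted ranker in the LambdaMART family. The sketch below is illustrative only: the feature layout (a claim-sentence similarity score plus a one-hot argument type) and all values are invented, not taken from the paper.

```python
# Hedged sketch: LightGBM's "lambdarank" objective as a stand-in for the
# LambdaMART ranker described above. Features and values are invented.
import numpy as np
import lightgbm as lgb

# Per candidate sentence: [similarity, is_study, is_factual, is_opinion, is_reasoning]
X = np.array([
    [0.8, 1, 0, 0, 0],
    [0.3, 0, 0, 1, 0],
    [0.6, 0, 1, 0, 0],
    [0.2, 0, 0, 0, 1],
])
y = np.array([2, 0, 1, 0])  # graded relevance of each candidate sentence
group = [4]                 # all four candidates belong to the same claim

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=10, min_child_samples=1)
ranker.fit(X, y, group=group)
print(ranker.predict(X))    # higher score = ranked higher as support
```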
  4. Abstract

    Children use syntax to learn verbs, in a process known as syntactic bootstrapping. The structure-mapping account proposes that syntactic bootstrapping begins with a universal bias to map each noun phrase in a sentence onto a participant role in a structured conceptual representation of an event. Equipped with this bias, children interpret the number of noun phrases accompanying a new verb as evidence about the semantic predicate-argument structure of the sentence, and therefore about the meaning of the verb. In this paper, we first review evidence for the structure-mapping account, and then discuss challenges to the account arising from the existence of languages that allow verbs' arguments to be omitted, such as Korean. These challenges prompt us to (a) refine our notion of the distributional learning mechanisms that create representations of sentence structure, and (b) propose that an expectation of discourse continuity allows children to gather linguistic evidence for each verb's arguments across sentences in a coherent discourse. Taken together, the proposed learning mechanisms and biases sketch a route whereby simple aspects of sentence structure guide verb learning from the start of multi-word sentence comprehension, and do so even if some of the new verb's arguments are omitted due to discourse redundancy.

     
  5. Abstract

    Language is not only used to transmit neutral information; we often seek to persuade by arguing in favor of a particular view. Persuasion raises a number of challenges for classical accounts of belief updating, as information cannot be taken at face value. How should listeners account for a speaker’s “hidden agenda” when incorporating new information? Here, we extend recent probabilistic models of recursive social reasoning to allow for persuasive goals and show that our model provides a pragmatic account for why weakly favorable arguments may backfire, a phenomenon known as the weak evidence effect. Critically, this model predicts a systematic relationship between belief updates and expectations about the information source: weak evidence should only backfire when speakers are expected to act under persuasive goals and prefer the strongest evidence. We introduce a simple experimental paradigm called the Stick Contest to measure the extent to which the weak evidence effect depends on speaker expectations, and show that a pragmatic listener model accounts for the empirical data better than alternative models. Our findings suggest further avenues for rational models of social reasoning to illuminate classical decision-making phenomena.

     
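The backfire prediction in (5) can be reproduced qualitatively with a few lines of arithmetic. In the toy model below (all probabilities invented, and far simpler than the paper's recursive-reasoning model), a literal listener takes a weak supporting argument at face value and nudges belief upward, while a pragmatic listener, who assumes a persuasive speaker would have presented stronger evidence had any existed, lowers belief: the weak evidence effect.

```python
# Toy numerical sketch of the backfire prediction (all probabilities are
# invented; this is far simpler than the paper's recursive model).
# A persuasive speaker utters the strongest supporting evidence they have,
# so hearing only weak evidence is itself informative.
PRIOR_H = 0.5
# P(evidence of each strength exists | H); strength 3 is the strongest.
EXISTS = {
    True:  {1: 0.6, 2: 0.5, 3: 0.5},   # world where the claim H is true
    False: {1: 0.5, 2: 0.2, 3: 0.1},   # world where H is false
}

def literal_listener(s):
    # Takes "supporting evidence of strength s" at face value.
    num = EXISTS[True][s] * PRIOR_H
    return num / (num + EXISTS[False][s] * (1 - PRIOR_H))

def pragmatic_listener(s):
    # Also reasons: the speaker said s, so nothing stronger existed.
    def p_utter(h):
        p = EXISTS[h][s]
        for stronger in range(s + 1, 4):
            p *= 1 - EXISTS[h][stronger]
        return p
    num = p_utter(True) * PRIOR_H
    return num / (num + p_utter(False) * (1 - PRIOR_H))

print(f"literal   P(H | weakest evidence) = {literal_listener(1):.3f}")   # ~0.545, slight boost
print(f"pragmatic P(H | weakest evidence) = {pragmatic_listener(1):.3f}")  # ~0.294, backfire
```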