

Title: Rudolf Christoph Eucken at SemEval-2023 Task 4: An Ensemble Approach for Identifying Human Values from Arguments
The subtle human values we acquire through life experiences govern our thoughts and are reflected in our speech. They are integral to capturing the essence of our individuality, making it imperative to identify such values in computational systems that mimic human actions. Computational argumentation is a field that deals with the argumentation capabilities of humans and can benefit from identifying such values. Motivated by this, we present an ensemble approach for detecting human values from argument text. Our ensemble comprises three models: (i) an entailment-based model that determines human values based on their descriptions, (ii) a RoBERTa-based classifier that predicts the set of human values from an argument, and (iii) a RoBERTa-based classifier that predicts a reduced set of human values from an argument. We experiment with different ways of combining the models and report our results. Our best combination achieves an overall F1 score of 0.48 on the main test set.
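The abstract does not specify how the three models' predictions are combined. One common way to combine multi-label predictions from several classifiers is a majority vote over the predicted label sets; the sketch below illustrates that idea only as an assumption (the function name, label strings, and vote threshold are hypothetical, not taken from the paper).

```python
from collections import Counter

def ensemble_vote(model_outputs, min_votes=2):
    """Combine label sets from several models by majority vote.

    model_outputs: a list of sets, one per model, each containing the
    human-value labels that model predicted for one argument.
    A label is kept if at least `min_votes` models predicted it.
    (Hypothetical combination rule; the paper's actual scheme may differ.)
    """
    counts = Counter(label for labels in model_outputs for label in labels)
    return {label for label, n in counts.items() if n >= min_votes}

# Illustrative predictions from the three ensemble members:
preds = [
    {"Self-direction: thought", "Achievement"},   # entailment-based model
    {"Achievement", "Security: personal"},        # full-label classifier
    {"Achievement", "Self-direction: thought"},   # reduced-label classifier
]
print(ensemble_vote(preds))  # keeps labels predicted by at least two models
```

With a threshold of two votes, "Achievement" (three votes) and "Self-direction: thought" (two votes) survive, while "Security: personal" (one vote) is dropped.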
Award ID(s):
2214070
NSF-PAR ID:
10441740
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Proceedings of SemEval-2023 Task 4
Page Range / eLocation ID:
660 to 663
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Effective argumentation is essential to a purposeful conversation with a satisfactory outcome. For example, persuading someone to reconsider smoking might involve empathetic, well-founded arguments based on facts and expert opinions about its ill effects and the consequences for one's family. However, the automatic generation of high-quality factual arguments can be challenging. Addressing existing controllability issues can make the recent advances in computational models for argument generation a potential solution. In this paper, we introduce ArgU: a neural argument generator capable of producing factual arguments from input facts and real-world concepts that can be explicitly controlled for stance and argument structure using Walton's argument scheme-based control codes. Unfortunately, computational argument generation is a relatively new field and lacks datasets conducive to training. Hence, we have compiled and released an annotated corpus of 69,428 arguments spanning six topics and six argument schemes, making it the largest publicly available corpus for identifying argument schemes; the paper details our annotation and dataset creation framework. We further experiment with an argument generation strategy that establishes an inference strategy by generating an "argument template" before actual argument generation. Our results demonstrate that it is possible to automatically generate diverse arguments exhibiting different inference patterns for the same set of facts by using control codes based on argument schemes and stance.
  2. Abstract

    In order to deepen students' understanding of natural phenomena and how scientific knowledge is constructed, it is critical that science teachers learn how to engage students in productive scientific argumentation. Simulations for teachers are one possible solution to providing practice‐based spaces where novices can approximate the work of facilitating argumentation‐focused science discussions. This study's purpose is to examine how preservice elementary teachers (PSETs) engage in this ambitious teaching practice within an online simulated classroom composed of five upper elementary student avatars. In this study, which is part of a larger research project, we developed and used four performance tasks to provide opportunities for PSETs to practice facilitating argumentation‐focused science discussions within a simulated classroom. The student avatars were controlled on the backend by a human‐in‐the‐loop who was trained to respond to the teachers' prompts in real time using predesigned student thinking profiles and specific technology, such as voice modulation software. We used analysis of transcripts from the PSETs' video‐recorded discussions to examine how the PSETs engaged the student avatars in scientific argumentation, with particular attention to the teaching moves that supported argument construction and argument critique. We also used survey and interview data to examine how the PSETs viewed the usefulness of these simulation‐based tools to support their learning. Findings show that there was variability in the extent to which the PSETs engaged the student avatars in argument construction and argument critique, and in the teaching moves that the PSETs used to do so. Results also indicated that PSETs strongly perceive the value of using such tools within teacher education.
Implications for the potential of simulations to provide insights into novices' ability to engage students in scientific argumentation and to support them in learning in and from their practice, including how to productively integrate these tools in teacher education, are discussed.

  3. Hateful comments are prevalent on social media platforms. Although tools for automatically detecting, flagging, and blocking such false, offensive, and harmful content online have lately matured, such reactive and brute-force methods alone provide short-term and superficial remedies while the perpetrators persist. With the public availability of large language models, which can generate articulate synthetic and engaging content at scale, there are concerns about the rapid growth of the dissemination of such malicious content on the web. There is now a need to focus on deeper, long-term solutions that involve engaging with the human perpetrator behind the source of the content to change their viewpoint, or at least bring down the rhetoric, using persuasive means. To do that, we propose defining and experimenting with controllable strategies for generating counterarguments to hateful comments in online conversations. We experiment with controlling response generation using features based on (i) argument structure and reasoning-based Walton argument schemes, (ii) counter-argument speech acts, and (iii) human characteristics-based qualities such as Big-5 personality traits and human values. Using automatic and human evaluations, we determine the best combination of features that generate fluent, argumentative, and logically sound arguments for countering hate. We further share the developed computational models for automatically annotating text with such features, and a silver-standard annotated version of an existing hate speech dialog corpus.
  4. High-quality arguments are essential elements of human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task of automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both the input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than popular sequence-to-sequence generation models, according to automatic evaluation and human assessments.
  5. Shaffer, Justin (Ed.)
    ABSTRACT Argumentation is vital in the development of scientific knowledge, and students who can argue from evidence and support their claims develop a deeper understanding of science. In this study, the Argument-Driven Inquiry instruction model was implemented in a two-semester sequence of introductory biology laboratories. Students' scientific argumentation sessions were video recorded and analyzed using the Assessment of Scientific Argumentation in the Classroom observation protocol. This protocol separates argumentation into three subcategories: cognitive (how the group develops understanding), epistemic (how consistent the group's process is with the culture of science), and social (how the group members interact with each other). We asked whether students are equally skilled in all subcategories of argumentation and how students' argumentation skills differ based on lab exercise and course. Students scored significantly higher on the social than the cognitive and epistemic subcategories of argumentation. Total argumentation scores were significantly different between the two focal investigations in Biology Laboratory I but not between the two focal investigations in Biology Laboratory II. Therefore, student argumentation skills were not consistent across content; the design of the lab exercises and their implementation impacted the level of argumentation that occurred. These results will ultimately aid in the development and expansion of Argument-Driven Inquiry instructional models, with the goal of further enhancing students' scientific argumentation skills and understanding of science.