Peer review plays a critical role in the scientific writing and publication ecosystem. To assess the efficiency and efficacy of the reviewing process, one essential element is to understand and evaluate the reviews themselves. In this work, we study the content and structure of peer reviews under the argument mining framework, by automatically detecting (1) the argumentative propositions put forward by reviewers, and (2) their types (e.g., evaluating the work or making suggestions for improvement). We first collect 14.2K reviews from major machine learning and natural language processing venues. Of these, 400 reviews are annotated with 10,386 propositions and their corresponding types: Evaluation, Request, Fact, Reference, or Quote. We then train state-of-the-art proposition segmentation and classification models on the data to evaluate their utility and identify challenges in this new domain, motivating future directions for argument mining. Further experiments show that proposition usage varies across venues in amount, type, and topic.
                            MetaWriter: Exploring the Potential and Perils of AI Writing Support in Scientific Peer Review
Recent advances in Large Language Models (LLMs) show the potential to significantly augment or even replace complex human writing activities. However, for complex tasks where people must both make decisions and write a justification, the trade-offs between making work efficient and hindering decisions remain unclear. In this paper, we explore this question in the context of designing intelligent scaffolding for writing meta-reviews in an academic peer review process. We prototyped a system called "MetaWriter", trained on five years of open peer review data, to support meta-reviewing. The system highlights common topics in the original peer reviews, extracts key points made by each reviewer, and, on request, provides a preliminary draft of a meta-review that can be further edited. To understand how novice and experienced meta-reviewers use MetaWriter, we conducted a within-subject study with 32 participants. Each participant wrote meta-reviews for two papers: one with and one without MetaWriter. We found that MetaWriter significantly expedited the authoring process and improved the coverage of meta-reviews, as rated by experts, compared to the baseline. While participants recognized the efficiency benefits, they raised concerns around trust, over-reliance, and agency. We also interviewed six paper authors to understand their opinions on using machine intelligence to support the peer review process, and we report their critical reflections. We discuss implications for future interactive AI writing tools that support complex synthesis work.
- Award ID(s): 2009003
- PAR ID: 10523828
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 8
- Issue: CSCW1
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 32
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
East, Martin; Slomp, David (Eds.) Studies examining peer review demonstrate that students can learn from giving feedback to and receiving feedback from their peers, especially when they use information gained from the review process to revise. However, much of the research on peer review is situated within the literature on how students learn to write. With the increasing use of writing-to-learn in STEM classrooms, it is important to study how students engage in peer review for these types of writing assignments. This study sought to better understand how peer review and revision can support student learning in writing-to-learn specifically, using the lenses of cognitive perspectives of writing and engagement with written corrective feedback. Using a case study approach, we provide a detailed analysis of six students’ written artifacts in response to a writing-to-learn assignment, incorporating peer review and revision, implemented in an organic chemistry course. Students demonstrated a range in the types of revisions they made and the extent to which the peer review process informed their revisions. Additionally, students exhibited surface, midlevel, and active engagement with the peer review and revision process. Considering these different engagement levels can inform how we frame peer review to students when using it as an instructional practice.
- 
User adoption of security and privacy (S&P) best practices remains low, despite sustained efforts by researchers and practitioners. Social influence is a proven method for guiding user S&P behavior, though most work has focused on peer influence, which is only possible with a known social graph. In a study of 104 Facebook users, we instead demonstrate that crowdsourced S&P suggestions are significantly influential. We also tested how reflective writing affected participants’ S&P decisions, with and without suggestions. With reflective writing, participants were less likely to accept suggestions, both social suggestions and Facebook default suggestions. Of particular note, when reflective-writing participants were shown the Facebook default suggestion, they not only rejected it but also (unknowingly) configured their settings in accordance with expert recommendations. Our work suggests that both non-personal social influence and reflective writing can positively influence users’ S&P decisions, but that the two can interact negatively.
- 
Bailey, Henry Hugh (Ed.) Many peer-review processes involve reviewers submitting their independent reviews, followed by a discussion among the reviewers of each paper. A common question among policymakers is whether the reviewers of a paper should be anonymous to each other during the discussion. We shed light on this question by conducting a randomized controlled trial at the Conference on Uncertainty in Artificial Intelligence (UAI) 2022, where reviewer discussions were conducted over a typed forum. We randomly split the reviewers and papers into two conditions: one with anonymous discussions and the other with non-anonymous discussions. We also conducted an anonymous survey of all reviewers to understand their experiences and opinions. We compare the two conditions in terms of the amount of discussion, the influence of seniority on final decisions, politeness, and reviewers’ self-reported experiences and preferences. Overall, the experiment finds small but significant differences favoring the anonymous discussion setup, based on the evaluation criteria considered in this work.
- 
Cameron, Carrie (Ed.) Grant writing is an essential skill for academic and other career success, but providing individual feedback to large numbers of trainees is challenging. In 2014, we launched the Stanford Biosciences Grant Writing Academy to support graduate students and postdocs in writing research proposals. Its core program is a multi-week Proposal Bootcamp designed to increase the feedback writers receive as they develop and refine their proposals. The Proposal Bootcamp consisted of two-hour weekly meetings that included mini-lectures and peer review. Bootcamp participants also attended faculty review workshops to obtain faculty feedback. Postdoctoral trainees were trained and hired as course teaching assistants and facilitated the weekly meetings and review workshops. Over the last six years, the annual Bootcamp has provided 525 doctoral students and postdocs with multi-level feedback (peer and faculty). Proposals from Bootcamp participants were almost twice as likely to be funded as proposals from non-Bootcamp trainees. Overall, this structured program provided opportunities for feedback from multiple peer and faculty reviewers and increased participants’ confidence in developing and submitting research proposals, while accommodating a large number of participants.
 An official website of the United States government