Title: The Value of Activity Traces in Peer Evaluations: An Experimental Study
Peer evaluations are a well-established tool for evaluating individual and team performance in collaborative contexts, but are susceptible to social and cognitive biases. Current peer evaluation tools have also yet to address the unique opportunities that online collaborative technologies provide for addressing these biases. In this work, we explore the potential of one such opportunity for peer evaluations: data traces automatically generated by collaborative tools, which we refer to as "activity traces". We conduct a between-subjects experiment with 101 students and MTurk workers, investigating the effects of reviewing activity traces on peer evaluations of team members in an online collaborative task. Our findings show that the use of activity traces led participants to make more, and larger, revisions to their evaluations compared to a control condition. These revisions also increased the consistency and the perceived accuracy of the evaluations that participants received. Our findings demonstrate the value of activity traces as an approach for performing more reliable and objective peer evaluations of teamwork. Based on our findings as well as qualitative analysis of free-form responses in our study, we also identify and discuss key considerations and design recommendations for incorporating activity traces into real-world peer evaluation systems.
Award ID(s): 2016908
PAR ID: 10460881
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: Proceedings of the ACM on Human-Computer Interaction
Volume: 7
Issue: CSCW1
ISSN: 2573-0142
Page Range / eLocation ID: 1 to 39
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Peer evaluations are critical for assessing teams, but are susceptible to bias and other factors that undermine their reliability. At the same time, collaborative tools that teams commonly use to perform their work are increasingly capable of logging activity that can signal useful information about individual contributions and teamwork. To investigate current and potential uses for activity traces in peer evaluation tools, we interviewed (N=11) and surveyed (N=242) students and interviewed (N=10) instructors at a single university. We found that nearly all of the students surveyed considered specific contributions to team outcomes when evaluating their teammates, but also reported relying on memory and subjective experiences to make the assessment. Instructors desired objective sources of data to address challenges with administering and interpreting peer evaluations, and had already begun incorporating activity traces from collaborative tools into their evaluations of teams. However, both students and instructors expressed concern about using activity traces due to the diverse ecosystem of tools and platforms used by teams and the limited view into the context of the contributions. Based on our findings, we contribute recommendations and a speculative design for a data-centric peer evaluation tool.
  2. Managing large-scale projects in biomolecular visualization education presents unique challenges, especially when involving many contributors who generate resources over time. BioMolViz is a diverse group of faculty from multiple institutions promoting biomolecular visualization literacy, and our goal was to create a collaboratively designed repository of assessments to allow evaluation of students’ visual literacy skills. As we expanded our network and engaged large numbers of educators through online and in-person workshops and working groups, assessment ideas and revisions became challenging to organize. Our growing repository required a method to 1) track revisions, expert-panel reviews, and field-testing results, and 2) ultimately publish hundreds of visual literacy assessments. As we navigated this new space, we sought to streamline our approach, while continuing to engage valuable colleagues with varying levels of comfort with technology. Through collaboration tools, project management software, and a series of fits and starts, the internal team established a structured workflow that efficiently guided assessment items from development to public access. Project management software enabled effective collaboration across team members and ensured transparency and efficiency in tracking each item’s progress. We detail the trial-and-error process that enabled collaborative assessment design, our breakthrough in the identification of software that suited the project needs, and the process of guiding developers to create the repository we envisioned. Our workflow analysis offers a model for leveraging project management tools in similar educational contexts and optimizing database design. 
  3. Background: Teamwork has become a central element of engineering education. However, the race- and gender-based marginalization prevalent in society is also prevalent in engineering student teams. These problematic dynamics limit learning opportunities, isolate historically marginalized students, and ultimately push students away from engineering, further reinforcing the demographic imbalances in the profession. Purpose: While there are strategies to improve the experiences of marginalized students within teams, there are few tools for detecting marginalizing behaviors as they occur. The purpose of this work is to examine how peer evaluations collected as a normal part of an engineering course can be used as a window into team dynamics to reveal marginalization as it occurs. Method: We used a semester of peer evaluation data from a large engineering course in which a team project is the central assignment and peer evaluation occurs four times during the course. We designed an algorithm to identify teams where marginalization may be occurring. We then performed a qualitative sociolinguistic analysis. Results: Results show that the algorithm helps identify teams where marginalization occurs. Qualitative analyses of four illustrative cases demonstrated the stealthy appearance and evolution of marginalization, providing strong evidence that indicators of marginalization are hidden within the language of peer evaluations. Based on the wider dataset, we present a taxonomy (eight categories) of linguistic marginalization appearing in peer comments. Conclusion: Both peer evaluation scores and the language used in peer evaluations can reveal team inequities and may serve as a near-real-time mechanism to interrupt marginalization within engineering teams.
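The abstract above mentions an algorithm that flags teams where marginalization may be occurring, but does not describe how it works. As a purely illustrative sketch (not the authors' method), the Python snippet below shows one simple peer-score heuristic of the kind such a tool might use: flag members whose average received peer-evaluation score trails their teammates' averages by more than a chosen threshold. The function name, data layout, and threshold are all hypothetical.

```python
# Hypothetical sketch only: flag team members whose mean received peer score
# falls well below the mean of their teammates' scores, as one plausible
# signal that marginalization may be occurring. Not the algorithm from the
# paper, which the abstract does not specify.

from statistics import mean

def flag_possible_marginalization(team_scores, gap_threshold=1.0):
    """team_scores maps each member to the list of peer scores they received.

    Returns members whose mean received score trails the mean of the other
    members' averages by more than gap_threshold (same scale as the scores).
    """
    averages = {member: mean(scores) for member, scores in team_scores.items()}
    flagged = []
    for member, avg in averages.items():
        others = [a for m, a in averages.items() if m != member]
        if others and (mean(others) - avg) > gap_threshold:
            flagged.append(member)
    return flagged

# Example on a 5-point scale: member "C" trails teammates' averages by ~1.6 points.
team = {"A": [4.5, 4.0, 4.5], "B": [4.0, 4.5, 4.0], "C": [2.5, 3.0, 2.5]}
print(flag_possible_marginalization(team))  # ['C']
```

Such a score-gap flag would only surface candidate teams; as the abstract notes, interpreting what is actually happening requires examining the language of the peer comments themselves.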
  4. This is the first of a series of studies that explore the relationship between disciplinary background and the weighting of various elements of a manuscript in peer reviewers’ determination of publication recommendations. Research questions include: (1) To what extent are tacit criteria for determining quality or value of EER manuscripts influenced by reviewers’ varied disciplinary backgrounds and levels of expertise? and (2) To what extent does mentored peer review professional development influence reviewers’ EER manuscript evaluations? Data were collected from 27 mentors and mentees in a peer review professional development program. Participants reviewed the same two manuscripts, using a form to identify strengths, weaknesses, and recommendations. Responses were coded by two researchers (70% IRR). Our findings suggest that disciplinary background influences reviewers’ evaluation of EER manuscripts. We also found evidence that professional development can improve reviewers’ understanding of EER disciplinary conventions. Deeper understanding of the epistemological basis for manuscript reviews may reveal ways to strengthen professional preparation in engineering education as well as other disciplines. 