


Award ID contains: 2016908


  1. Jeff Nichols (Ed.)
    Instructors using algorithmic team formation tools must decide which criteria (e.g., skills, demographics) to use to group students into teams based on their teamwork goals, and have many possible sources from which to draw these configurations (e.g., the literature, other faculty, their students). However, tools offer considerable flexibility, and selecting ineffective configurations can lead to teams that do not collaborate successfully. Because such tools are relatively new, there is currently little knowledge of how instructors choose which of these sources to use, how they relate different criteria to their goals for the planned teamwork, or how they determine whether their configuration or the generated teams are successful. To close this gap, we conducted a survey (N=77) and interview (N=21) study of instructors using CATME Team-Maker and other criteria-based processes to investigate instructors’ goals and decisions when using team formation tools. The results showed that instructors prioritized students learning to work with diverse teammates and performed “sanity checks” on their formation approach’s output to ensure that the generated teams would support this goal, focusing especially on criteria like gender and race. However, they sometimes struggled to relate their educational goals to specific settings in the tool. In general, they also did not solicit any input from students when configuring the tool, despite acknowledging that this information might be useful. More learner-centered approaches to forming teams that open the “black box” of the algorithm to students could therefore be a promising way to provide more support to instructors configuring algorithmic tools while also supporting student agency and learning about teamwork.
  2. High-quality source code comments are valuable for software development and maintenance; however, code often contains low-quality comments or lacks them altogether. We refer to such comments as suboptimal comments. Suboptimal comments create challenges in code comprehension and maintenance. Despite substantial research on low-quality source code comments, empirical knowledge about the commenting practices that produce suboptimal comments and the reasons that lead to them is lacking. We help bridge this knowledge gap by investigating (1) independent comment changes (ICCs), i.e., comment changes committed independently of code changes, which likely address suboptimal comments, (2) commenting guidelines, and (3) comment-checking tools and comment-generating tools, which are often employed to support commenting practice, especially to prevent suboptimal comments. We collect 24M+ comment changes from 4,392 open-source GitHub Java repositories and find that ICCs widely exist. The ICC ratio, the proportion of ICCs among all comment changes, is ~15.5%, with 98.7% of the repositories having ICCs. Our thematic analysis of 3,533 randomly sampled ICCs provides a three-dimensional taxonomy for what is changed (four comment categories and 13 subcategories), how it is changed (six commenting activity categories), and what factors are associated with the change (three factors). We investigate 600 repositories to understand the prevalence, content, impact, and violations of commenting guidelines. We find that only 15.5% of the 600 sampled repositories have any commenting guidelines. We provide the first taxonomy of elements in commenting guidelines: where and what to comment are particularly important. Repositories without such guidelines have a statistically significantly higher ICC ratio, indicating the negative impact of lacking commenting guidelines. However, commenting guidelines are not strictly followed: 85.5% of the checked repositories have violations. We also systematically study how developers use two kinds of tools, comment-checking tools and comment-generating tools, in the 4,392 repositories. We find that use of the Javadoc tool is negatively correlated with the ICC ratio, while use of Checkstyle has no statistically significant correlation; the use of comment-generating tools leads to a higher ICC ratio. To conclude, we reveal issues and challenges in current commenting practice, which help explain how suboptimal comments are introduced. We propose potential research directions on comment location prediction, comment generation, and comment quality assessment; suggest how developers can formulate commenting guidelines and enforce rules with tools; and recommend how to enhance current comment-checking and comment-generating tools. (An illustrative sketch of the ICC-ratio computation appears after this list.)
  3. Peer evaluations are a well-established tool for evaluating individual and team performance in collaborative contexts, but are susceptible to social and cognitive biases. Current peer evaluation tools have also yet to take advantage of the unique opportunities that online collaborative technologies provide for addressing these biases. In this work, we explore the potential of one such opportunity for peer evaluations: data traces automatically generated by collaborative tools, which we refer to as "activity traces". We conduct a between-subjects experiment with 101 students and MTurk workers, investigating the effects of reviewing activity traces on peer evaluations of team members in an online collaborative task. Our findings show that the use of activity traces led participants to make more and greater revisions to their evaluations compared to a control condition. These revisions also increased the consistency and participants' perceived accuracy of the evaluations that they received. Our findings demonstrate the value of activity traces as an approach for performing more reliable and objective peer evaluations of teamwork. Based on our findings as well as qualitative analysis of free-form responses in our study, we also identify and discuss key considerations and design recommendations for incorporating activity traces into real-world peer evaluation systems.
  4. The configuration that an instructor enters into an algorithmic team formation tool determines how students are grouped into teams, impacting their learning experiences. One way to decide on the configuration is to solicit input from the students. Prior work has investigated the criteria students prefer for team formation, but has not studied how students prioritize the criteria or to what degree students agree with each other. This paper describes a workflow for gathering student preferences for how to weight the criteria entered into a team formation tool, and presents the results of a study in which the workflow was implemented in four semesters of the same project-based design course. In the most recent semester, the workflow was supplemented with an online peer discussion to learn about students' rationale for their selections. Our results show that students most want to be grouped with other students who share the same level of course commitment and have compatible schedules. Students prioritize demographic attributes next, and then task skills such as the programming needed for the project work. We found these outcomes to be consistent in each instance of the course. Instructors can use our results to guide team formation in their own project-based design courses and can replicate our workflow to gather student preferences for team formation in any course. (A sketch of how such preference weights might be aggregated into a tool configuration appears after this list.)
  5. Peer evaluations are critical for assessing teams, but are susceptible to bias and other factors that undermine their reliability. At the same time, the collaborative tools that teams commonly use to perform their work are increasingly capable of logging activity that can signal useful information about individual contributions and teamwork. To investigate current and potential uses for activity traces in peer evaluation tools, we interviewed (N=11) and surveyed (N=242) students and interviewed (N=10) instructors at a single university. We found that nearly all of the students surveyed considered specific contributions to the team outcomes when evaluating their teammates, but also reported relying on memory and subjective experiences to make the assessment. Instructors desired objective sources of data to address challenges with administering and interpreting peer evaluations, and had already begun incorporating activity traces from collaborative tools into their evaluations of teams. However, both students and instructors expressed concern about using activity traces due to the diverse ecosystem of tools and platforms used by teams and the limited view into the context of the contributions. Based on our findings, we contribute recommendations and a speculative design for a data-centric peer evaluation tool.
  6. Team formation tools assume instructors should configure the criteria for creating teams, precluding students from participating in a process affecting their learning experience. We propose LIFT, a novel learner-centered workflow where students propose, vote for, and weigh the criteria used as inputs to the team formation algorithm. We conducted an experiment (N=289) comparing LIFT to the usual instructor-led process, and interviewed participants to evaluate their perceptions of LIFT and its outcomes. Learners proposed novel criteria not included in existing algorithmic tools, such as organizational style. They avoided criteria like gender and GPA that instructors frequently select, and preferred those promoting efficient collaboration. LIFT led to team outcomes comparable to those achieved by the instructor-led approach, and teams valued having control of the team formation process. We provide instructors and designers with a workflow and evidence supporting giving learners control of the algorithmic process used for grouping them into teams. 
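To make the ICC ratio from item 2 concrete, the following is a minimal Python sketch. The `CommentChange` structure, its field names, and the sample data are hypothetical illustrations rather than the study's actual data format; the ratio itself is simply the share of comment changes whose commit touched no code.

```python
from dataclasses import dataclass


@dataclass
class CommentChange:
    """One comment change extracted from a commit (hypothetical structure)."""
    repo: str
    commit: str
    code_also_changed: bool  # True if the same commit also changed code


def icc_ratio(changes: list[CommentChange]) -> float:
    """Proportion of independent comment changes (ICCs) among all comment changes.

    An ICC is a comment change committed independently of any code change.
    """
    if not changes:
        return 0.0
    iccs = sum(1 for c in changes if not c.code_also_changed)
    return iccs / len(changes)


# Example: three comment changes, one committed without a code change -> ratio ~33%
sample = [
    CommentChange("repoA", "c1", code_also_changed=True),
    CommentChange("repoA", "c2", code_also_changed=False),
    CommentChange("repoB", "c3", code_also_changed=True),
]
print(f"ICC ratio: {icc_ratio(sample):.1%}")
```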
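Similarly, for the student-preference workflow in item 4 (and the learner-centered LIFT workflow in item 6), one simple way to turn individual preferences into a single tool configuration is to average and normalize the weights each student assigns to the criteria. This is a hedged sketch under assumed inputs: the criterion names, response format, and averaging scheme are illustrative and are not the procedure used in the papers or by tools such as CATME Team-Maker.

```python
from collections import defaultdict


def aggregate_criterion_weights(responses: list[dict[str, float]]) -> dict[str, float]:
    """Average each criterion's weight across student responses and normalize to sum to 1.

    Each response maps a criterion name to the weight one student assigned it
    (hypothetical format; real tools use their own configuration schemas).
    """
    totals: dict[str, float] = defaultdict(float)
    for response in responses:
        for criterion, weight in response.items():
            totals[criterion] += weight
    averages = {c: t / len(responses) for c, t in totals.items()}
    norm = sum(averages.values()) or 1.0
    return {c: w / norm for c, w in averages.items()}


# Example: three students weighting schedule compatibility, course commitment, and programming skill
responses = [
    {"schedule": 5, "commitment": 4, "programming": 2},
    {"schedule": 4, "commitment": 5, "programming": 3},
    {"schedule": 5, "commitment": 5, "programming": 1},
]
print(aggregate_criterion_weights(responses))
```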