

Search for: All records

Award ID contains: 1915196


  1. The retrospective is a crucial component of the agile software development process. In previous studies of retrospectives in undergraduate team software development projects, students exhibited limited and shallow reflection. We speculate that this is due to students' limited experience with reflection and the absence of clear guidance for engaging in deep reflection during agile retrospectives. To explore the potential for a pedagogical intervention to foster deeper reflection in retrospectives, we present an empirical comparison of a standard retrospective model against an enhanced retrospective model that scaffolds deeper levels of reflection by prompting students to justify and critique their practices and weigh alternative approaches. Through a systematic classification of the reflection level of statements made during individual brainstorming and team discussion phases of retrospectives, our study found that the enhanced model led to individuals and teams engaging in significantly higher levels of reflection. Our findings contribute to improving software engineering education by demonstrating the efficacy of an enhanced pedagogical model for team retrospectives. 
    Free, publicly-accessible full text available February 28, 2026
  2. The retrospective, or retro, is a fundamental component of the Agile process, widely used in both software engineering courses and industry. In a retro, teams come together at the end of a sprint to reflect on their team's performance. We conducted an empirical study to explore three research questions concerning retros in undergraduate team projects: (1) What do students reflect on? (2) What is the quality of their reflections? (3) How do teams' retros vary in terms of content and quality? Our study analyzed a corpus of 963 statements documented in the retros of 32 undergraduate software teams (n = 182 students) enrolled in four software engineering courses at two North American universities. A content analysis revealed that teams reflected most often on their work, communication, and collaboration practices. Nearly a third of teams' reflections focused on their general work practices, while nearly half focused on specific areas of the software development lifecycle---most prominently, pull requests, issues, and coding/testing/debugging. An analysis of the quality of teams' retro reflections showed that only 13% provided justification for a strategy to be stopped, continued, or started. An analysis of team-by-team results indicated significant differences in teams' retro content and quality. We compare these results to previous studies of retros in academia and industry, and consider their implications for software engineering education. 
  3. Assessing team software development projects is notoriously difficult and typically based on subjective metrics. To help make assessments more rigorous, we conducted an empirical study to explore relationships between subjective metrics based on peer and instructor assessments, and objective metrics based on GitHub and chat data. We studied 23 undergraduate software teams (n = 117 students) from two undergraduate computing courses at two North American research universities. We collected data on teams’ (a) commits and issues from their GitHub code repositories, (b) chat messages from their Slack and Microsoft Teams channels, (c) peer evaluation ratings from the CATME peer evaluation system, and (d) individual assignment grades from the courses. We derived metrics from (a) and (b) to measure both individual team members’ contributions to the team, and the equality of team members’ contributions. We then performed Pearson analyses to identify correlations among the metrics, peer evaluation ratings, and individual grades. We found significant positive correlations between team members’ GitHub contributions, chat contributions, and peer evaluation ratings. In addition, the equality of teams’ GitHub contributions was positively correlated with teams’ average peer evaluation ratings and negatively correlated with the variance in those ratings. However, no such positive correlations were detected between the equality of teams’ chat contributions and their peer evaluation ratings. Our study extends previous research results by providing evidence that (a) team members’ chat contributions, like their GitHub contributions, are positively correlated with their peer evaluation ratings; (b) team members’ chat contributions are positively correlated with their GitHub contributions; and (c) the equality of teams’ GitHub contributions is positively correlated with their peer evaluation ratings. These results lend further support to the idea that combining objective and subjective metrics can make the assessment of team software projects more comprehensive and rigorous. (A minimal, hypothetical sketch of this kind of correlation analysis appears after this list.)
  4. Metacognition is widely acknowledged as a key soft skill in collaborative software development. The ability to plan, monitor, and reflect on cognitive and team processes is crucial to the efficient and effective functioning of a software team. To explore students' use of reflection--one aspect of metacognition--in undergraduate team software projects, we analyzed the online chat channels of teams participating in agile software development projects in two undergraduate courses that took place exclusively online (n = 23 teams, 117 students, and 4,915 chat messages). Teams' online chats were dominated by discussions of work completed and to be done; just two percent of all chat messages showed evidence of reflection. A follow-up analysis of chat vignettes centered on reflection messages (n = 63) indicates that three-fourths of those messages were prompted by a course requirement; just 14% arose organically within the context of teams' ongoing project work. Based on our findings, we identify opportunities for computing educators to increase, through pedagogical and technological interventions, teams' use of reflection in team software projects. 
  5. Students’ experience with software testing in undergraduate computing courses is often relatively shallow compared to the importance of the topic. This experience report describes introducing industrial-strength testing into CMPSC 156, an upper-division course in software engineering at UC Santa Barbara. We describe our efforts to modify our software engineering course to introduce rigorous test-coverage requirements into full-stack web development projects, requirements similar to those the authors had experienced in a professional software development setting. We present student feedback on the course and coverage metrics for the projects. We reflect on which of these changes worked (and which didn’t), and provide suggestions for other instructors who would like to give their students a deeper experience with software testing in their software engineering courses. (A minimal coverage-gate sketch appears after this list.)
  6. Providing students with authentic software development experiences is essential to preparing them for careers in industry. To that end, many undergraduate courses include a team-based software development experience in which each team works on a different software project. This raises significant challenges for assessing student work and measuring the impact of pedagogical interventions: What do we measure and how, when each team is working on a different project? To address this question, we present a collection of metrics developed using the Goal-Question-Metric framework from the empirical software engineering literature, and an empirical study in which we applied those metrics to assess 23 team software projects involving 94 students at three institutions. Study results suggest that these metrics, which gauge commit, issue, and overall product quality, are sensitive to differences in the quality of teams' processes and products. This work contributes a new metric-based approach to evaluating key aspects of software development processes and products in a wide variety of computing courses. (A rough sketch of such commit and issue measures appears after this list.)
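As referenced in item 3 above, the following is a minimal, hypothetical sketch of the kind of Pearson analysis described there: correlating per-student GitHub contribution counts with peer evaluation ratings, plus a simple equality proxy. All values and the coefficient-of-variation measure are illustrative assumptions, not the study's actual data or metrics.

```python
# Hypothetical illustration of correlating per-student GitHub contribution
# counts with peer evaluation ratings; the values below are made up.
import statistics
from scipy.stats import pearsonr

contributions = [42, 17, 55, 8, 33, 29]        # commits + issues per student (hypothetical)
peer_ratings = [4.5, 3.2, 4.8, 2.9, 4.1, 3.8]  # CATME-style ratings (hypothetical)

r, p = pearsonr(contributions, peer_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# One simple equality proxy: a lower coefficient of variation means
# contributions are spread more evenly across team members.
cv = statistics.pstdev(contributions) / statistics.mean(contributions)
print(f"Coefficient of variation of contributions: {cv:.2f}")
```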
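As referenced in item 5 above, here is a minimal sketch of gating a project on a coverage floor. It assumes a Python test suite run with pytest and measured with coverage.py; the course projects were full-stack web applications, so the actual tooling and threshold almost certainly differed.

```python
# Minimal coverage gate: run the tests under coverage.py and fail the
# build if total coverage falls below a (hypothetical) course threshold.
import coverage
import pytest

REQUIRED_COVERAGE = 80.0  # hypothetical threshold, in percent

cov = coverage.Coverage()
cov.start()
test_result = pytest.main(["tests/"])  # run the project's test suite
cov.stop()
cov.save()

total = cov.report()  # prints a per-file report and returns the total percentage
if test_result != 0 or total < REQUIRED_COVERAGE:
    raise SystemExit(f"Failing: coverage {total:.1f}% is below {REQUIRED_COVERAGE}%")
```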
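Finally, as referenced in item 6 above, a rough sketch of commit- and issue-level measures pulled from the GitHub REST API. The repository name and the two heuristics are placeholders for illustration only; the paper's GQM-derived metrics are more carefully defined than these.

```python
# Rough illustration: fetch recent commits and issues for a (placeholder)
# public repository and compute two simple quality heuristics.
import requests

REPO = "octocat/Hello-World"  # placeholder repository
API = f"https://api.github.com/repos/{REPO}"

commits = requests.get(f"{API}/commits", params={"per_page": 100}).json()
issues = requests.get(f"{API}/issues", params={"state": "all", "per_page": 100}).json()

# Heuristic 1: share of commits whose message is at least 20 characters long.
descriptive = sum(1 for c in commits if len(c["commit"]["message"]) >= 20)
print(f"Descriptive-commit ratio: {descriptive / max(len(commits), 1):.2f}")

# Heuristic 2: share of issues (this endpoint also returns pull requests)
# that have been closed.
closed = sum(1 for i in issues if i.get("state") == "closed")
print(f"Closed-issue ratio: {closed / max(len(issues), 1):.2f}")
```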