- Award ID(s):
- 2016908
- PAR ID:
- 10460884
- Date Published:
- Journal Name:
- Proceedings of the CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID:
- 1 to 13
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The configuration that an instructor enters into an algorithmic team formation tool determines how students are grouped into teams, impacting their learning experiences. One way to decide the configuration is to solicit input from the students. Prior work has investigated the criteria students prefer for team formation, but has not studied how students prioritize the criteria or to what degree students agree with each other. This paper describes a workflow for gathering student preferences for how to weight the criteria entered into a team formation tool, and presents the results of a study in which the workflow was implemented in four semesters of the same project-based design course. In the most recent semester, the workflow was supplemented with an online peer discussion to learn about students' rationale for their selections. Our results show that students most want to be grouped with teammates who share their level of course commitment and have compatible schedules. Students prioritize demographic attributes next, and then task skills, such as the programming needed for the project work. We found these outcomes to be consistent in each instance of the course. Instructors can use our results to guide team formation in their own project-based design courses and replicate our workflow to gather student preferences for team formation in any course.
-
Jeff Nichols (Ed.)
Instructors using algorithmic team formation tools must decide which criteria (e.g., skills, demographics, etc.) to use to group students into teams based on their teamwork goals, and have many possible sources from which to draw these configurations (e.g., the literature, other faculty, their students, etc.). However, tools offer considerable flexibility and selecting ineffective configurations can lead to teams that do not collaborate successfully. Due to such tools’ relative novelty, there is currently little knowledge of how instructors choose which of these sources to utilize, how they relate different criteria to their goals for the planned teamwork, or how they determine if their configuration or the generated teams are successful. To close this gap, we conducted a survey (N=77) and interview (N=21) study of instructors using CATME Team-Maker and other criteria-based processes to investigate instructors’ goals and decisions when using team formation tools. The results showed that instructors prioritized students learning to work with diverse teammates and performed “sanity checks” on their formation approach’s output to ensure that the generated teams would support this goal, especially focusing on criteria like gender and race. However, they sometimes struggled to relate their educational goals to specific settings in the tool. In general, they also did not solicit any input from students when configuring the tool, despite acknowledging that this information might be useful. By opening the “black box” of the algorithm to students, more learner-centered approaches to forming teams could therefore be a promising way to provide more support to instructors configuring algorithmic tools while at the same time supporting student agency and learning about teamwork.
-
Peer evaluations are critical for assessing teams, but are susceptible to bias and other factors that undermine their reliability. At the same time, collaborative tools that teams commonly use to perform their work are increasingly capable of logging activity that can signal useful information about individual contributions and teamwork. To investigate current and potential uses for activity traces in peer evaluation tools, we interviewed (N=11) and surveyed (N=242) students and interviewed (N=10) instructors at a single university. We found that nearly all of the students surveyed considered specific contributions to the team outcomes when evaluating their teammates, but also reported relying on memory and subjective experiences to make the assessment. Instructors desired objective sources of data to address challenges with administering and interpreting peer evaluations, and have already begun incorporating activity traces from collaborative tools into their evaluations of teams. However, both students and instructors expressed concern about using activity traces due to the diverse ecosystem of tools and platforms used by teams and the limited view into the context of the contributions. Based on our findings, we contribute recommendations and a speculative design for a data-centric peer evaluation tool.
-
Teaching engineering students how to work in teams is necessary, important, and hard to do well. Minoritized students routinely experience forms of marginalization from their teammates, which affects their access to safe learning environments. Team evaluation tools like CATME can help instructors see where teaming problems are, but are often normed in ways that obscure the subtle if pervasive harassment of minoritized teammates. Instructors, particularly of large courses, need better ways to identify teams that are marginalizing minoritized team members. This paper introduces theory on microaggressions, selective incivility theory, and coded language to interpret data collected from a complex study site during the COVID-19 pandemic. The team collected data from classroom observations (moved virtual during COVID), interviews with instructors, interviews with students, interpretations of historical data collected through an online team evaluation tool called CATME, and a diary study where students documented their reflections on their marginalization by teammates. While data collection and analysis did not, of course, go as the research team had planned, it yielded insights into how frequently minoritized teammates experience marginalization, instructors’ sense of their responsibility and skill for addressing such marginalization, and students’ sense of defeat in hoping for more equitable and supportive learning environments. The paper describes our data collection processes, analysis, and some choice insights drawn from this multi-year study at a large, research-extensive white university.
-
Abstract. Background: The first day of class helps students learn about what to expect from their instructors and courses. Messaging used by instructors, which varies in content and approach on the first day, shapes classroom social dynamics and can affect subsequent learning in a course. Prior work established the non-content Instructor Talk Framework to describe the language that instructors use to create learning environments, but little is known about the extent to which students detect those messages. In this study, we paired first day classroom observation data with results from student surveys to measure how readily students in introductory STEM courses detect non-content Instructor Talk.
Results: To learn more about the instructor and student first day experiences, we studied 11 introductory STEM courses at two different institutions. The classroom observation data were used to characterize course structure and use of non-content Instructor Talk. The data revealed that all instructors spent time discussing their instructional practices, building instructor/student relationships, and sharing strategies for success with their students. After class, we surveyed students about the messages their instructors shared during the first day of class and determined that the majority of students from within each course detected messaging that occurred at a higher frequency. For lower frequency messaging, we identified nuances in what students detected that may help instructors as they plan their first day of class.
Conclusions: For instructors who dedicate the first day of class to establishing positive learning environments, these findings provide support that students are detecting the messages. Additionally, this study highlights the importance of instructors prioritizing the messages they deem most important and giving them adequate attention to more effectively reach students. Setting a positive classroom environment on the first day may lead to long-term impacts on student motivation and course retention. These outcomes are relevant for all students, but in particular for students in introductory STEM courses, which are often critical prerequisites for entering a major.