Abstract: Science as an enterprise has been and continues to be exclusionary, perpetuating inequities in whose voices are heard and in what, and whose, knowledge is recognized as valid. Women, people of color, and persons with disabilities are still vastly outnumbered in science and engineering by their White, male counterparts. These imbalances create a gatekeeping culture of inequity and inaccessibility, particularly for traditionally underrepresented students. Science classrooms, especially at the undergraduate level, strive to mimic the broader practices of the scientific community and therefore have tremendous potential to perpetuate the exclusion of certain groups of people. They also have, however, the potential to be a catalyst for equitable participation in science. Utilizing pedagogies of empowerment such as culturally responsive science teaching (CRST) in undergraduate classrooms can mitigate the gatekeeping phenomenon seen in science. Teaching assistants (TAs) engage in more one-on-one time with students than most faculty in undergraduate biology education, yet minimal pedagogical training is offered to them. Training for improved pedagogical knowledge is therefore important for TAs, but training for CRST is critical, as TAs have a broad and potentially lasting impact on students. This study explores the ways in which undergraduate biology TAs enact CRST. Using constructivist grounded theory methods, this study examined TAs' reflections, observation field notes, semistructured interviews, and focus groups to develop themes surrounding their enactment of CRST. Findings from this study showed that undergraduate biology TAs enact CRST in ways described by four themes: Funds of Knowledge Connections, Differentiating Instruction, Intentional Scaffolding, and Reducing Student Anxiety. These findings provide new insights into the ways undergraduate science education might be reimagined to create equitable science learning opportunities for all students.
How do Laboratory Teaching Assistants Learn to Support Science Practices? Exploring the Intersection Between Instructor Reasoning and Actions
We provide an analysis of how TAs implement a curriculum designed to engage introductory biology students in scientific modeling. TAs' in-the-moment interactions with students varied, reflecting different instructional purposes and instructor roles. We present mechanisms of TA learning and ideas for professional development.
- Award ID(s):
- 2400787
- PAR ID:
- 10553127
- Editor(s):
- Lo, Stanley
- Publisher / Repository:
- CBE: Life Sciences Education
- Date Published:
- Journal Name:
- CBE—Life Sciences Education
- Volume:
- 23
- Issue:
- 4
- ISSN:
- 1931-7913
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
ABSTRACT: Evidence-based teaching practices (EBTP), like inquiry-based learning, inclusive teaching, and active learning, have been shown to benefit all students, especially women, first-generation, and traditionally minoritized students in science fields. However, little research has focused on how best to train teaching assistants (TAs) to use EBTP or on which components of professional development are most important. We designed and experimentally manipulated a series of pre-semester workshops on active learning (AL), dividing subjects into two groups. The Activity group worked in teams to learn an AL technique with a workshop facilitator. These teams then modeled the activity with their peers acting as students. In the Evidence group, facilitators modeled the activities with all TAs acting as students. We used a mixed-methods research design (specifically, concurrent triangulation) to interpret pre- and post-workshop and post-semester survey responses. We found that Evidence group participants reported greater knowledge of AL after the workshop than Activity group participants. Activity group participants, on the other hand, found all of the AL techniques more useful than Evidence group participants. These results suggest that actually modeling AL techniques made them more useful to TAs than simply experiencing the same techniques as students, even with the accompanying evidence. This outcome has broad implications for how we provide professional development sessions to TAs and potentially to faculty.
To address the increasing demand for AI literacy, we introduced a novel active learning approach that leverages both teaching assistants (TAs) and generative AI to provide feedback during in-class exercises. This method was evaluated through two studies in separate Computer Science courses, focusing on the roles and impacts of TAs in this learning environment, as well as their collaboration with ChatGPT in enhancing student feedback. The studies revealed that TAs were effective in accurately determining students' progress and struggles, particularly in areas such as "backtracking", where students faced significant challenges. This intervention's success was evident from high student engagement and satisfaction levels, as reported in an end-of-semester survey. Further findings highlighted that while TAs provided detailed technical assessments and identified conceptual gaps effectively, ChatGPT excelled in presenting clarifying examples and offering motivational support. Despite some TAs' resistance to fully embracing the feedback guidelines (specifically, a reluctance to provide encouragement), the collaborative feedback process between TAs and ChatGPT improved the quality of feedback in several aspects, including technical accuracy and clarity in explaining conceptual issues. These results suggest that integrating human and artificial intelligence in educational settings can significantly enhance traditional teaching methods, creating a more dynamic and responsive learning environment. Future research will aim to improve both the quality and efficiency of feedback, capitalizing on the unique strengths of both humans and AI to further advance educational practices in the field of computing.
Cristea, Alexandra; Walker, Erin; Lu, Yu; Santos, Olga (Ed.) This project examines the prospect of using AI-generated feedback as suggestions to expedite and enhance human instructors' feedback provision. In particular, we focus on understanding the teaching assistants' perspectives on the quality of AI-generated feedback and how they may or may not utilize AI feedback in their own workflows. We situate our work in a foundational college Economics class, which has frequent short essay assignments. We developed an LLM-powered feedback engine that generates feedback on students' essays based on grading rubrics used by the teaching assistants (TAs). To ensure that TAs could meaningfully critique and engage with the AI feedback, we had them complete their regular grading jobs. For a randomly selected set of essays that they had graded, we used our feedback engine to generate feedback and displayed the feedback as in-text comments in a Word document. We then performed think-aloud studies with 5 TAs over 20 one-hour sessions to have them evaluate the AI feedback, contrast the AI feedback with their handwritten feedback, and share how they envision using the AI feedback if it were offered as suggestions. The study highlights the importance of providing detailed rubrics for AI to generate high-quality feedback for knowledge-intensive essays. TAs considered that using AI feedback as suggestions during their grading could expedite grading, enhance consistency, and improve overall feedback quality. We discuss the importance of decomposing the feedback generation task into steps and presenting intermediate results in order for TAs to use the AI feedback.