Title: Barriers instructors experience in adopting active learning: Instrument development
Abstract

Background

Despite well‐documented benefits, instructor adoption of active learning has been limited in engineering education. Studies have identified barriers to instructors’ adoption of active learning, but there is no well‐tested instrument to measure instructors’ perceptions of these barriers.

Purpose

We developed and tested an instrument to measure instructors’ perceptions of barriers to adopting active learning and to identify the constructs that coherently categorize those barriers.

Method

We used a five‐phase process to develop an instrument to measure instructors’ perceived barriers to adopting active learning. In Phase 1, we built upon the Faculty Instructional Barriers and Identity Survey (FIBIS) to create a draft instrument. In Phases 2 and 3, we conducted exploratory factor analysis (EFA) on an initial 45‐item instrument and a refined 21‐item instrument, respectively. We conducted confirmatory factor analysis (CFA) in Phases 4 and 5 to test the factor structure identified in Phases 2 and 3.
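
To make the workflow concrete, here is a minimal sketch (not the authors' actual analysis code) of an EFA-then-CFA pipeline in Python using the factor_analyzer and semopy packages; the data file, item names, and item-to-factor assignments are hypothetical placeholders.

```python
# Hypothetical sketch of the EFA -> CFA workflow described above.
# Assumes Likert-scale responses in a DataFrame, one column per item.
import pandas as pd
import semopy
from factor_analyzer import FactorAnalyzer, calculate_kmo

responses = pd.read_csv("barrier_items.csv")  # hypothetical data file

# Sampling adequacy check before factoring.
_, kmo_total = calculate_kmo(responses)
print(f"KMO = {kmo_total:.2f}")

# Exploratory factor analysis with an oblique rotation, since
# factors measuring related barriers are expected to correlate.
efa = FactorAnalyzer(n_factors=4, rotation="oblimin")
efa.fit(responses)
print(pd.DataFrame(efa.loadings_, index=responses.columns))

# Confirmatory factor analysis testing a four-factor structure for
# 17 retained items (assignments below are illustrative only).
cfa_spec = """
StudentPrep =~ item1 + item2 + item3 + item4 + item5
Support     =~ item6 + item7 + item8 + item9
Comfort     =~ item10 + item11 + item12 + item13
Environment =~ item14 + item15 + item16 + item17
"""
cfa = semopy.Model(cfa_spec)
cfa.fit(responses)             # in practice, fit to a held-out sample
print(semopy.calc_stats(cfa))  # CFI, TLI, RMSEA, chi-square, etc.
```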

Results

Our final instrument consists of 17 items and four factors: (1) student preparation and engagement; (2) instructional support; (3) instructor comfort and confidence; and (4) institutional environment/rewards. Instructor responses indicated that time considerations did not emerge as a standalone factor.

Conclusions

Our 17‐item instrument exhibits a sound factor structure and is reliable, enabling the assessment of perceived barriers to adopting active learning in different contexts. The four factors align with an existing model of instructional change in science, technology, engineering, and mathematics (STEM). Although time is a substantial instructor concern, it did not emerge as a standalone factor; instead, it is closely related to multiple constructs in our final model.

 
Award ID(s): 1821092, 1821488
NSF-PAR ID: 10469719
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Journal Name: Journal of Engineering Education
Volume: 112
Issue: 4
ISSN: 1069-4730
Pages: 1079-1108
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract

    Background

    Many institutional and departmentally focused change efforts have sought to improve teaching in STEM through the promotion of evidence-based instructional practices (EBIPs). Even with these efforts, EBIPs have not become the predominant mode of teaching in many STEM departments. To better understand institutional change efforts and the barriers to EBIP implementation, we developed the Cooperative Adoption Factors Instrument (CAFI) to probe faculty member characteristics beyond demographic attributes at the individual level. The CAFI probes multiple constructs related to institutional change including perceptions of the degree of mutual advantage of taking an action (strategic complements), trust and interconnectedness among colleagues (interdependence), and institutional attitudes toward teaching (climate).

    Results

    From data collected across five STEM fields at three large public research universities, we show that the CAFI has evidence of internal structure validity based on exploratory and confirmatory factor analysis. The scales have low correlations with each other and show significant variation among our sampled universities as demonstrated by ANOVA. We further demonstrate a relationship between the strategic complements and climate factors with EBIP adoption through use of a regression analysis. In addition to these factors, we also find that indegree, a measure of opinion leadership, correlates with EBIP adoption.
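
    As a rough sketch of the analyses named above (a one-way ANOVA comparing factor scores across institutions, then a regression of EBIP adoption on the CAFI constructs and indegree), assuming hypothetical column names rather than the study's actual data:

```python
# Hypothetical sketch of the CAFI analyses described above; the data
# file and column names are illustrative, not from the study.
import pandas as pd
from scipy.stats import f_oneway
import statsmodels.formula.api as smf

df = pd.read_csv("cafi_scores.csv")

# One-way ANOVA: does the climate factor vary across universities?
groups = [g["climate"].to_numpy() for _, g in df.groupby("university")]
print(f_oneway(*groups))

# Regression of EBIP adoption on the three CAFI constructs plus
# indegree (the opinion-leadership measure).
fit = smf.ols(
    "ebip_adoption ~ strategic_complements + interdependence"
    " + climate + indegree",
    data=df,
).fit()
print(fit.summary())
```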

    Conclusions

    The CAFI uses the CACAO model of change to link the intended outcome of EBIP adoption with perception of EBIPs as mutually reinforcing (strategic complements), perception of faculty having their fates intertwined (interdependence), and perception of institutional readiness for change (climate). Our work has established that the CAFI is sensitive enough to pick up on differences between three relatively similar institutions and captures significant relationships with EBIP adoption. Our results suggest that the CAFI is likely to be a suitable tool to probe institutional change efforts, both for change agents who wish to characterize the local conditions on their respective campuses to support effective planning for a change initiative and for researchers who seek to follow the progression of a change initiative. While these initial findings are very promising, we also recommend that CAFI be administered in different types of institutions to examine the degree to which the observed relationships hold true across contexts.

     
  2. The purpose of this study is to develop an instrument to measure student perceptions about the learning experiences in their online undergraduate engineering courses. Online education continues to grow broadly in higher education, but the movement toward acceptance and comprehensive utilization of online learning has generally been slower in engineering. Recently, however, there have been indicators that this could be changing. For example, ABET has accredited online undergraduate engineering degrees at Stony Brook University and Arizona State University (ASU), and an increasing number of other undergraduate engineering programs also offer online courses. During this period of transition in engineering education, further investigation about the online modality in the context of engineering education is needed, and survey instrumentation can support such investigations.

    The instrument presented in this paper is grounded in a Model for Online Course-level Persistence in Engineering (MOCPE), which was developed by our research team by combining two motivational frameworks used to study student persistence: the Expectancy x Value Theory of Achievement Motivation (EVT) and the ARCS model of motivational design. The initial MOCPE instrument contained 79 items related to students’ perceptions about the characteristics of their courses (i.e., the online learning management system, instructor practices, and peer support), expectancies of course success, course task values, perceived course difficulties, and intention to persist in the course.

    Evidence of validity and reliability was collected using a three-step process. First, we tested face and content validity of the instrument with experts in online engineering education and online undergraduate engineering students. Next, the survey was administered to the online undergraduate engineering student population at a large, Southwestern public university, and an exploratory factor analysis (EFA) was conducted on the responses. Lastly, evidence of reliability was obtained by computing the internal consistency of each resulting scale. The final instrument has seven scales with 67 items across 10 factors. The Cronbach alpha values for these scales range from 0.85 to 0.97. The full paper will provide complete details about the development and psychometric evaluation of the instrument, including evidence of validity and reliability.

    The instrument described in this paper will ultimately be used as part of a larger, National Science Foundation-funded project investigating the factors influencing online undergraduate engineering student persistence. It is currently being used in the context of this project to conduct a longitudinal study intended to understand the relationships between the experiences of online undergraduate engineering students in their courses and their intentions to persist in the course. We anticipate that the instrument will be of interest and use to other engineering education researchers who are also interested in studying the population of online students.
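
    For the internal-consistency step, Cronbach's alpha can be computed directly from item responses; a minimal sketch, with a hypothetical data file and column names:

```python
# Cronbach's alpha for one scale: rows are respondents, columns are
# the items belonging to that scale. File and column names are
# hypothetical placeholders.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scale = pd.read_csv("mocpe_responses.csv")[["value1", "value2", "value3"]]
print(f"alpha = {cronbach_alpha(scale):.2f}")  # scales above report 0.85-0.97
```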
  3. Abstract

    Background

    Numerous studies show that active and engaging classrooms help students learn and persist in college, but adoption of new teaching practices has been slow. Professional development programs encourage instructors to implement new teaching methods and change the status quo in STEM undergraduate teaching, and structured observations of classrooms can be used in multiple ways to describe and assess this instruction. We addressed the challenge of measuring instructional change with observational protocols, data that often do not lend themselves easily to statistical comparisons. Challenges using observational data in comparative research designs include lack of descriptive utility for holistic measures and problems related to construct representation, non-normal distributions and Type-I error inflation for segmented measures.

    Results

    We grouped 790 mathematics classes from 74 instructors using Latent Profile Analysis (a statistical clustering technique) and found four reliable categories of classes. Based on this grouping, we proposed a simple proportional measure we called Proportion Non-Didactic Lecture (PND). The measure aggregated the proportions of interactive to lecture classes for each instructor. We tested the PND and a measure derived from the Reformed Teaching Observation Protocol (RTOP) with data from a professional development study. The PND worked in simple hypothesis tests but lacked some statistical power due to possible ceiling effects. However, the PND provided effective descriptions of changes in instructional approaches from pre to post. In tandem with examining the proportional measure, we also examined the RTOP-Sum, an existing outcome measure used in comparison studies. The measure is based on the aggregated items in a holistic observational protocol. As an aggregate measure, we found it to be highly reliable, to correlate highly with the PND, and to have more statistical power than the PND. However, the RTOP measure did not provide the thick descriptions of teaching afforded by the PND.
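
    Since the PND is just a per-instructor proportion, it is simple to compute once each observed class carries an LPA profile label; a sketch with made-up labels and data:

```python
# Sketch of the Proportion Non-Didactic Lecture (PND) measure: for each
# instructor, the fraction of observed classes whose latent profile is
# anything other than didactic lecture. Labels are hypothetical.
import pandas as pd

obs = pd.DataFrame({
    "instructor": ["A", "A", "A", "B", "B"],
    "profile": ["lecture", "interactive", "group_work", "lecture", "lecture"],
})

pnd = (
    (obs["profile"] != "lecture")   # True for non-didactic classes
    .groupby(obs["instructor"])
    .mean()                         # proportion per instructor
)
print(pnd)  # A -> 0.67, B -> 0.00
```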

    Conclusions

    Findings suggest that useful dependent measures can be derived from both segmented and holistic observational measures. Both have strengths and weaknesses: measures from segmented data are best at describing changes in teaching, while measures derived from the RTOP have more statistical power. Determining the validity of these measures is important for future use of observational data in comparative studies.

     
  4. Abstract

    Background

    In college science laboratory and discussion sections, student-centered active learning strategies have been implemented to improve student learning outcomes and experiences. Research has shown that active learning activities can increase student anxiety if students fear that they could be negatively evaluated by their peers. Error framing (i.e., to frame errors as natural and beneficial to learning) is proposed in the literature as a pedagogical tool to reduce student anxiety. However, little research empirically explores how an instructor can operationalize error framing and how error framing is perceived by undergraduate students. To bridge the gap in the literature, we conducted a two-stage study that involved science graduate teaching assistants (GTAs) and undergraduate students. In stage one, we introduced cold calling (i.e., calling on non-volunteering students) and error framing to 12 chemistry and 11 physics GTAs. Cold calling can increase student participation but may increase student anxiety. Error framing has the potential to mitigate student anxiety when paired with cold calling. GTAs were then tasked to rehearse cold calling paired with error framing in a mixed-reality classroom simulator. We identified GTA statements that aligned with the definition of error framing. In stage two, we selected a few example GTA error framing statements and interviewed 13 undergraduate students about their perception of those statements.

    Results

    In the simulator, all the GTAs rehearsed cold calling multiple times while only a few GTAs made error framing statements. A thematic analysis of GTAs’ error framing statements identified ways of error indication (i.e., explicit and implicit) and framing (i.e., natural, beneficial, and positive acknowledgement). Undergraduate student interviews revealed specific framing and tone that are perceived as increasing or decreasing student comfort in participating in classroom discourse. Both undergraduate students and some GTAs expressed negative opinions toward responses that explicitly indicate student mistakes. Undergraduate students’ perspectives also suggest that error framing should be implemented differently depending on whether errors have already occurred.

    Conclusion

    Error framing is challenging for science GTAs to implement. GTAs’ operationalizations of error framing in the simulator and undergraduate students’ perceptions contribute to defining and operationalizing error framing for instructional practice. To increase undergraduate student comfort in science classroom discourse, GTAs can use implicit error indication. In response to students’ incorrect answers, GTAs can positively frame students’ specific ideas rather than discussing broadly how errors are natural or beneficial.

     
  5. Student perceptions of the complete online transition of two CS courses in response to the COVID-19 pandemic

    Due to the COVID-19 pandemic, universities across the globe switched from traditional Face-to-Face (F2F) course delivery to completely online. Our university declared during our Spring break that students would not return to campus, and that all courses must be delivered fully online starting two weeks later. This was challenging to both students and instructors.

    In this evidence-based practice paper, we present results of end-of-semester student surveys from two Spring 2020 CS courses: a programming-intensive CS2 course, and a senior theory course in Formal Languages and Automata (FLA). Students indicated course components they perceived as most beneficial to their learning, before and then after the online transition, and preferences for each regarding online vs. F2F. By comparing student reactions across courses, we gain insights on which components are easily adapted to online delivery, and which require further innovation. COVID was unfortunate, but it gave a rare opportunity to compare students’ reflections on F2F instruction with online instructional materials for half a semester vs. entirely online delivery of the same course during the second half. The circumstances are unique, but we were able to acquire insights for future instruction.

    Some course components were perceived to be more useful either before or after the transition, and preferences were not the same in the two courses, possibly due to differences in the courses. Students in both courses found prerecorded asynchronous lectures significantly less useful than in-person lectures. For CS2, online office hours were significantly less useful than in-person office hours, but we found no significant difference in FLA. CS2 students felt less supported by their instructor after the online transition, but no significant difference was indicated by FLA students. FLA students found unproctored online exams offered through Canvas more stressful than in-person proctored exams, but the opposite was indicated by CS2 students. CS2 students indicated that visual materials from an eTextbook were more useful to them after going online than before, but FLA students indicated no significant difference. Overall, students in FLA significantly preferred the traditional F2F version of the course, while no significant difference was detected for CS2 students. We did not find significant effects from gender on the preference of one mode over the other.

    A serendipitous outcome was learning that some changes forced by circumstance should be considered for long-term adoption. Offering online lab sessions and online exams where the questions are primarily multiple choice are possible candidates. However, we found that students need to feel the presence of their instructor to feel properly supported.

    To determine what course components need further improvement before transitioning to fully online mode, we computed a logistic regression model. The dependent variable is the student’s preference for F2F or fully online. The independent variables are the course components before and after the online transition. For both courses, in-person lectures were a significant factor negatively affecting students’ preferences of the fully online mode. Similarly, for CS2, in-person labs and in-person office hours were significant factors pushing students’ preferences toward F2F mode.
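
    The logistic model in the closing paragraph could be fit along these lines; the outcome and predictor names below are hypothetical stand-ins for the survey variables:

```python
# Hypothetical sketch of the logistic regression described above: the
# binary outcome is a student's preference for fully-online delivery,
# and predictors are perceived-usefulness ratings of course components.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_survey.csv")  # hypothetical; one row per student

fit = smf.logit(
    "prefers_online ~ inperson_lecture + inperson_lab"
    " + inperson_office_hours + online_lecture + online_exam",
    data=df,
).fit()
print(fit.summary())  # negative coefficients push preference toward F2F
```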