

Title: Customized Scaffolding for Pre-service Teachers’ Problem-Solving in STEM
This paper explores customized scaffolding for pre-service teachers' problem-solving in the technology and engineering disciplines. We used cluster analysis to discover natural groupings of the scaffolding characteristics used in 144 computer-based scaffolding studies from a previous meta-analysis. We first selected input variables based on our research questions, including different scaffolding characteristics, context of use, education level, and effect size. Next, using a two-step clustering algorithm, we found four clusters based on the predominant scaffolding characteristics and profiled each cluster in terms of its scaffolding characteristics and their context of use. The resulting cluster solutions indicate which combinations of scaffolding characteristics, used in different types of problem-centered learning contexts, would be effective for pre-service teachers' technology- and engineering-related problem-solving.
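As a rough, non-authoritative sketch of the workflow described above, the following Python/scikit-learn code clusters a synthetic stand-in for the coded studies and profiles the resulting clusters. It approximates, rather than reproduces, the two-step clustering procedure, and all column names and values are illustrative placeholders, not the study's actual coding scheme:

    import pandas as pd
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    # Synthetic stand-in for the coded meta-analysis data: one row per study,
    # categorical scaffolding characteristics plus a continuous effect size.
    rng = np.random.default_rng(0)
    n_studies = 144
    studies = pd.DataFrame({
        "customization":   rng.choice(["none", "fading", "adding", "fading/adding"], n_studies),
        "context_of_use":  rng.choice(["problem-based", "design-based", "inquiry"], n_studies),
        "education_level": rng.choice(["undergraduate", "graduate"], n_studies),
        "effect_size":     rng.normal(0.45, 0.20, n_studies),
    })

    # One-hot encode the categorical variables and standardize the effect size
    # so every input contributes on a comparable scale.
    X = pd.get_dummies(studies[["customization", "context_of_use", "education_level"]]).astype(float)
    X["effect_size"] = StandardScaler().fit_transform(studies[["effect_size"]]).ravel()

    # Pick the number of clusters by silhouette score (the two-step procedure
    # instead uses a model-based BIC/AIC criterion, so results will differ).
    scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
              for k in range(2, 7)}
    best_k = max(scores, key=scores.get)

    # Profile each cluster: mean effect size and predominant contexts of use.
    studies["cluster"] = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
    print(studies.groupby("cluster")["effect_size"].mean())
    print(pd.crosstab(studies["cluster"], studies["context_of_use"]))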
Award ID(s):
1906059 1251782
NSF-PAR ID:
10178440
Author(s) / Creator(s):
Date Published:
Journal Name:
Annual meeting program American Educational Research Association
ISSN:
0163-9676
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    This study indicates the most effective combinations of scaffolding features within computer science and technology education settings. It addresses the research question, "What combinations of scaffolding characteristics, contexts of use, and assessment levels lead to medium and large effect sizes among college- and graduate-level engineering and technology learners?" To do so, studies in which scaffolding led to a medium or large effect size within the context of technology and engineering education were identified within a scaffolding meta-analysis data set. Next, two-step cluster analysis in SPSS 24 was used to identify distinct groups of scaffolding attributes tailored to learning computer science at the undergraduate and graduate levels. Input variables included different scaffolding characteristics, the context of use, education level, and effect size. There was an eight-cluster solution: five clusters were associated with large effect size, two with medium effect size, and one with both medium and large effect sizes. The three most important predictors were the context in which scaffolding was used, whether and how scaffolding was customized over time, and the decision rules that governed scaffolding change. Notably, highly effective scaffolding clusters are associated with most levels of each predictor.
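    SPSS TwoStep also produces the kind of predictor-importance ranking summarized above. A minimal sketch of a comparable ranking in Python, assuming a coded data set with categorical predictor columns and an existing cluster assignment; the function, column names, and toy data below are hypothetical, not the study's actual variables:

        import pandas as pd
        from scipy.stats import chi2_contingency

        def predictor_importance(df, predictors, cluster_col="cluster"):
            """Rank categorical predictors by strength of association with cluster
            membership, using a chi-square test on the predictor-by-cluster table
            (a rough analogue of the importance chart SPSS TwoStep reports)."""
            importance = {}
            for col in predictors:
                table = pd.crosstab(df[col], df[cluster_col])
                _, p, _, _ = chi2_contingency(table)
                importance[col] = 1.0 - p   # smaller p -> stronger association
            return pd.Series(importance).sort_values(ascending=False)

        # Tiny synthetic example standing in for the coded meta-analysis data.
        toy = pd.DataFrame({
            "context_of_use": ["problem-based", "design-based", "problem-based",
                               "inquiry", "design-based", "inquiry"],
            "customization":  ["fading", "adding", "fading",
                               "none", "adding", "none"],
            "cluster":        [1, 2, 1, 3, 2, 3],
        })
        print(predictor_importance(toy, ["context_of_use", "customization"]))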

     
  2. Despite limited success in broadening participation in engineering among rural and Appalachian youth, challenges remain, such as misunderstandings around engineering careers, misalignments with youth's sociocultural backgrounds, and other environmental barriers. In addition, middle school science teachers may be unfamiliar with engineering or with how to integrate engineering concepts into science lessons. Furthermore, teachers interested in incorporating engineering into their curriculum may not have the time or resources to do so. The result may be single interventions such as a professional development workshop for teachers or a career day for students. However, such one-off interventions are unlikely to cause major change or sustained interest development. To address these challenges, we undertook our NSF ITEST project, Virginia Tech Partnering with Educators and Engineers in Rural Schools (VT PEERS). Through this project, we sought to improve youth awareness of and preparation for engineering-related careers and educational pathways. Through regular engagement in engineering-aligned classroom activities and culturally relevant programming, we sought to spark an interest in engineering among students. In addition, our project involves a partnership with teachers, school districts, and local industry to provide a holistic and, hopefully, sustainable influence. By engaging over time, we aspired to promote sustainability beyond this NSF project via increased teacher confidence with engineering-related activities, continued integration within their science curriculum, and continued relationships with local industry. Across the 2017-2020 school years, the project operated in seven schools in three rural counties. Each year a grade level was added; that is, the teachers and students from the first year remained for all three years. Year 1 included eight 6th-grade science teachers, year 2 added eight 7th-grade science teachers, and year 3 added three 8th-grade science teachers and a career and technology teacher. The number of students increased from over 500 in year 1 to over 2,500 in year 3. Our three industry partners remained active throughout the project. During the third and final year in the classrooms, we focused on the sustainable aspects of the project: in particular, on how the intervention support evolved each year based on data and on support requests from the school divisions, and on scaffolding "ownership" of the engineering activities. Qualitative data were used to support our understanding of teachers' confidence to incorporate engineering into their lesson plans and how their confidence changed over time. Notably, our student data analysis resulted in an instrument change for the third year; however, due to COVID, pre- and post-data were limited to schools that taught on a semester basis. Throughout the project we utilized the ITEST STEM Workforce Education Helix model to support a pragmatic approach in which our research informed our practice, enabling an "iterative relationship between STEM content development and STEM career development activities… within the cultural context of schools, with teachers supported by professional development, and through programs supported by effective partnerships." For example, over the course of the project, the interventions were scaffolded from being led by the university to being led by the teachers.
  3.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem
    Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†
    * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.
    Introduction
    This decade has seen an ever-growing number of scientific fields benefitting from the advances in machine learning technology and tooling. More recently, this trend reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally in IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.
    The challenge: enabling wide participation
    With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
    1. No participant should need to have in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist).
    2. No participant should need to be an expert data scientist.
    3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently).
    4. No participant should need to provide their own computing resources.
    In addition to the above, our platform should further guide participants through the entire process from sign-up to model submission, facilitate collaboration, and provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
    The platform
    The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant that allowed users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participant's code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
    Figure 1: High-level architecture of the challenge platform
    Measuring success
    The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performances that would allow a hundred-fold reduction in the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists.
    Figure 2: Primary occupation as reported by challenge participants
    Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.
    Conclusion
    Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
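    The starter-kit pattern described above (participant-editable pre-processing, model, and post-processing slots wrapped by platform utilities) can be sketched roughly as follows in PyTorch. The CNN is a toy example, and the COS/WML data-loading and submission helpers mentioned in the comments are hypothetical placeholders, not actual IBM Watson Studio APIs:

        import torch
        import torch.nn as nn

        # --- participant-editable slot 1: pre-processing --------------------
        def preprocess(eeg_window: torch.Tensor) -> torch.Tensor:
            # e.g. normalise each channel of an EEG window (batch, channels, samples)
            return (eeg_window - eeg_window.mean(dim=-1, keepdim=True)) / (
                eeg_window.std(dim=-1, keepdim=True) + 1e-6)

        # --- participant-editable slot 2: model ------------------------------
        class SeizureDetector(nn.Module):
            """Toy 1-D CNN mapping an EEG window to a seizure/non-seizure logit."""
            def __init__(self, n_channels: int = 22):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                    nn.Flatten(),
                    nn.Linear(32, 1),
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.net(x)

        # --- participant-editable slot 3: post-processing --------------------
        def postprocess(logits: torch.Tensor) -> torch.Tensor:
            # turn logits into binary seizure labels
            return (torch.sigmoid(logits) > 0.5).long()

        # Quick smoke test on random data shaped (batch, channels, samples).
        dummy = torch.randn(4, 22, 256)
        print(postprocess(SeizureDetector()(preprocess(dummy))).shape)

        # In the real starter kit, utility functions streamed TUH data from
        # Cloud Object Storage, trained the model on Watson Machine Learning,
        # and submitted the result; those pieces are only gestured at here,
        # e.g. windows = load_training_windows(); submit_model(model)
        # (both helpers are hypothetical).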
  4.
    Integrated approaches to teaching science, technology, engineering, and mathematics (commonly referred to as STEM education) in K-12 classrooms have resulted in a growing number of teachers incorporating engineering in their science classrooms. Such changes are a result of shifts in science standards to include engineering, as evidenced by the Next Generation Science Standards (NGSS). To date, 20 states and the District of Columbia have adopted the NGSS, and another 24 have adopted standards based on the Framework for K-12 Science Education. Despite the increased presence of engineering and integrated STEM education in K-12 education, there are several concerns to consider. One concern is the limited availability of observation instruments appropriate for instruction where multiple STEM disciplines are present and integrated with one another. Addressing this concern requires the development of a new observation instrument, designed with integrated STEM instruction in mind. An instrument such as this has implications for both research and practice. For example, research using this instrument could help educators compare integrated STEM instruction across grade bands. Additionally, this tool could be useful in the preparation of pre-service teachers and the professional development of in-service teachers new to integrated STEM education, as well as in formative learning through professional learning communities or classroom coaching. The work presented here describes in detail the development of an integrated STEM observation instrument that can be used for both research and practice. Over a period of approximately 18 months, a team of STEM educators and educational researchers developed a 10-item integrated STEM observation instrument for use in K-12 science and engineering classrooms. The process of developing the instrument began with establishing a conceptual framework, drawing on the integrated STEM research literature, national standards documents, and frameworks for both K-12 engineering education and integrated STEM education. As part of the instrument development process, the project team had access to over 2000 classroom videos where integrated STEM education took place. Initial analysis of a selection of these videos helped the project team write a preliminary draft instrument consisting of 52 items. Through several rounds of revisions, which included constructing detailed scoring levels for the items, collapsing items that significantly overlapped, and piloting the instrument for usability, items were added, edited, and/or removed for various reasons. These reasons included issues concerning the intricacy of the observed phenomenon or the item not being specific to integrated STEM education (e.g., questioning). In its final form, the instrument consists of 10 items, each comprising four descriptive levels. Each item is also accompanied by a set of user guidelines, which have been refined by the project team as a result of piloting the instrument and reviewed by external experts in the field. The instrument has been shown to be reliable within the project team, and further validation is underway. This instrument will be of use to a wide variety of educators and educational researchers looking to understand the implementation of integrated STEM education in K-12 science and engineering classrooms.
  5.
    Integrated approaches to teaching science, technology, engineering, and mathematics (commonly referred to as STEM education) in K-12 classrooms have resulted in a growing number of teachers incorporating engineering in their science classrooms. Such changes are a result of shifts in science standards to include engineering, as evidenced by the Next Generation Science Standards (NGSS). To date, 20 states and the District of Columbia have adopted the NGSS, and another 24 have adopted standards based on the Framework for K-12 Science Education. Despite the increased presence of engineering and integrated STEM education in K-12 education, there are several concerns to consider. One concern is the limited availability of observation instruments appropriate for instruction where multiple STEM disciplines are present and integrated with one another. Addressing this concern requires the development of a new observation instrument, designed with integrated STEM instruction in mind. An instrument such as this has implications for both research and practice. For example, research using this instrument could help educators compare integrated STEM instruction across grade bands. Additionally, this tool could be useful in the preparation of pre-service teachers and the professional development of in-service teachers new to integrated STEM education, as well as in formative learning through professional learning communities or classroom coaching. The work presented here describes in detail the development of an integrated STEM observation instrument - the STEM Observation Protocol (STEM-OP) - that can be used for both research and practice. Over a period of approximately 18 months, a team of STEM educators and educational researchers developed a 10-item integrated STEM observation instrument for use in K-12 science and engineering classrooms. The process of developing the STEM-OP began with establishing a conceptual framework, drawing on the integrated STEM research literature, national standards documents, and frameworks for both K-12 engineering education and integrated STEM education. As part of the instrument development process, the project team had access to over 2000 classroom videos where integrated STEM education took place. Initial analysis of a selection of these videos helped the project team write a preliminary draft instrument consisting of 79 items. Through several rounds of revisions, which included constructing detailed scoring levels for the items, collapsing items that significantly overlapped, and piloting the instrument for usability, items were added, edited, and/or removed for various reasons. These reasons included issues concerning the intricacy of the observed phenomenon or the item not being specific to integrated STEM education (e.g., questioning). In its final form, the STEM-OP consists of 10 items, each comprising four descriptive levels. Each item is also accompanied by a set of user guidelines, which have been refined by the project team as a result of piloting the instrument and reviewed by external experts in the field. The instrument has been shown to be reliable within the project team, and further validation is underway. The STEM-OP will be of use to a wide variety of educators and educational researchers looking to understand the implementation of integrated STEM education in K-12 science and engineering classrooms.
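    For illustration only, an instrument of this shape (ten items, each scored on four ordered descriptive levels) could be represented for analysis roughly as follows in Python; the item names and level descriptors are invented placeholders, not the actual STEM-OP items:

        from dataclasses import dataclass

        @dataclass
        class RubricItem:
            name: str
            levels: tuple  # four ordered level descriptors, scored 0-3

        # Hypothetical items; the published STEM-OP defines its own ten.
        ITEMS = [
            RubricItem("integration_of_disciplines",
                       ("absent", "mentioned", "partially integrated", "fully integrated")),
            RubricItem("engineering_design_practices",
                       ("absent", "teacher-led", "guided", "student-driven")),
            # ... eight further items would follow in the full instrument
        ]

        def score_observation(ratings: dict) -> dict:
            """Map an observer's level choice (0-3) for each item to a score profile."""
            return {item.name: ratings.get(item.name, 0) for item in ITEMS}

        # Example: an observer rates two items for one classroom video segment.
        print(score_observation({"integration_of_disciplines": 2,
                                 "engineering_design_practices": 3}))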