Title: Problem Solving in Genetics: Content Hints Can Help
Problem solving is an integral part of doing science, yet it is challenging for students in many disciplines to learn. We explored student success in solving genetics problems in several genetics content areas using sets of three consecutive questions for each content area. To promote improvement, we provided students the choice to take a content-focused prompt, termed a “content hint,” during either the second or third question within each content area. Overall, for students who answered the first question in a content area incorrectly, the content hints helped them solve additional content-matched problems. We also examined students’ descriptions of their problem solving and found that students who improved following a hint typically used the hint content to accurately solve a problem. Students who did not improve upon receipt of the content hint demonstrated a variety of content-specific errors and omissions. Overall, ultimate success in the practice assignment (on the final question of each topic) predicted success on content-matched final exam questions, regardless of initial practice performance or initial genetics knowledge. Our findings suggest that some struggling students may have deficits in specific genetics content knowledge, which when addressed, allow the students to successfully solve challenging genetics problems.
Award ID(s):
1711348
NSF-PAR ID:
10167083
Author(s) / Creator(s):
Date Published:
Journal Name:
CBE—Life Sciences Education
Volume:
18
Issue:
2
ISSN:
1931-7913
Page Range / eLocation ID:
ar23
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Practice plays a critical role in learning engineering dynamics. Typical practice in a dynamics course involves solving textbook problems. These problems can impose great cognitive load on underprepared students because they have not mastered the constituent knowledge and skills required for solving whole problems. For these students, learning can be improved by engaging in deliberate practice. Deliberate practice refers to a type of practice aimed at improving specific constituent knowledge or skills. Compared to solving whole problems, which requires the simultaneous use of multiple constituent skills, deliberate practice is usually focused on one component skill at a time, which results in less cognitive load and more specificity. Contemporary theories of expertise development have highlighted the influence of deliberate practice (DP) on achieving exceptional performance in sports, music, and various professional fields. Concurrently, there is an emerging method for improving the learning efficiency of novices by combining deliberate practice with cognitive load theory (CLT), a cognitive-architecture-based theory for instructional design. Mechanics is a foundation for most branches of engineering. It serves to develop problem-solving skills and consolidate understanding of other subjects, such as applied mathematics and physics. Mechanics has long been a challenging subject. Students need to understand governing principles to gain conceptual knowledge and acquire procedural knowledge to apply those principles to solve problems. Because conceptual and procedural knowledge are difficult to develop, mechanics courses are among those with high DFW rates (the percentage of students receiving a grade of D or F or withdrawing from a course), and students are more likely to leave engineering after taking mechanics courses. Deliberate practice can help novices develop the good knowledge representations needed to produce superior problem-solving performance.
The goal of the present study is to develop deliberate practice techniques that improve learning effectiveness and reduce cognitive load. Our pilot study revealed that students’ mental-effort scores were negatively correlated with their knowledge-test scores (r = -.29, p < .05) after using deliberate practice strategies. This supports the claim that deliberate practice can improve student learning while reducing cognitive load. In addition, the higher a student’s knowledge-test score, the lower their mental effort when taking the test. In other words, the students who used deliberate practice strategies achieved better learning results with less cognitive load. To design deliberate practice, we often need to analyze students’ persistent problems caused by faulty mental models (also referred to as intuitive mental models) and misconceptions. In this study, we continue to conduct an in-depth diagnostic process to identify students’ common mistakes and the associated intuitive mental models. We then use the results to develop deliberate practice problems aimed at changing students’ cognitive strategies and mental models.
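As an aside, the kind of correlation analysis reported here can be reproduced with a few lines of standard statistics code. The sketch below is illustrative only: the per-student numbers are invented stand-ins, not the study’s data, and serve only to show how a negative effort-versus-score relationship would be computed.

```python
# Illustrative only: a Pearson correlation between self-reported mental-effort
# ratings and knowledge-test scores, as in the pilot analysis described above.
# The data below are hypothetical, not the study's actual measurements.
from scipy.stats import pearsonr

mental_effort   = [8, 7, 6, 9, 5, 4, 6, 3, 7, 5]        # e.g., a 9-point rating scale
knowledge_score = [55, 60, 72, 50, 80, 88, 70, 95, 62, 78]

r, p = pearsonr(mental_effort, knowledge_score)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r means higher scores co-occur with lower effort
```

A negative coefficient on data like these mirrors the study’s reported direction of the relationship; the magnitude depends entirely on the sample.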
  3. Metacognition is the understanding of one’s own knowledge, including awareness of both what one knows and what one does not know; it also encompasses knowledge of strategies and regulation of one’s own cognition. Studying metacognition is important because higher-order thinking is commonly used, and problem-solving skills are positively correlated with metacognition. A positive prior disposition toward metacognition can improve problem-solving skills. Metacognition is a key skill in design and manufacturing, as teams of engineers must solve complex problems. Moreover, metacognition increases individual and team performance and can lead to more original ideas. This study discusses the assessment of metacognitive skills in engineering students by having the students participate in hands-on and virtual reality activities related to design and manufacturing. The study is guided by two research questions: (1) do the proposed activities affect students’ metacognition in terms of monitoring, awareness, planning, self-checking, or strategy selection, and (2) are there other components of metacognition that are affected by the design and manufacturing activities? The hypothesis is that participation in the proposed activities will improve the problem-solving skills and metacognitive awareness of the engineering students. A total of 34 undergraduate students participated in the study; 32 were male and 2 were female. All students stated that they were interested in pursuing a career in engineering. The students were divided into two groups, with the first group serving as an initial pilot run: the first group contained 24 students and the second 10. The two groups’ demographics were nearly identical.
Analysis of the collected data indicated that problem-solving skills contribute to metacognitive skills and may develop in students before the larger metacognitive constructs of awareness, monitoring, planning, self-checking, and strategy selection. Based on this, we recommend that problem-solving skills and expertise in solving engineering problems be developed in students before the other skills emerge or can be measured. While the students who participated in our study surely have awareness, as well as the other metacognitive skills, in reading, writing, science, and math, these skills are still developing in relation to engineering problems.
  4. Several consensus reports cite a critical need to dramatically increase the number and diversity of STEM graduates over the next decade. They conclude that a change to evidence-based instructional practices, such as concept-based active learning, is needed. Concept-based active learning involves the use of activity-based pedagogies whose primary objectives are to make students value deep conceptual understanding (instead of only factual knowledge) and then to facilitate their development of that understanding. Concept-based active learning has been shown to increase academic engagement and student achievement, to significantly improve student retention in academic programs, and to reduce the performance gap of underrepresented students. Fostering students' mastery of fundamental concepts is central to real world problem solving, including several elements of engineering practice. Unfortunately, simply proving that these instructional practices are more effective than traditional methods for promoting student learning, for increasing retention in academic programs, and for improving ability in professional practice is not enough to ensure widespread pedagogical change. In fact, the biggest challenge to improving STEM education is not the need to develop more effective instructional practices, but to find ways to get faculty to adopt the evidence-based pedagogies that already exist. In this project we seek to propagate the Concept Warehouse, a technological innovation designed to foster concept-based active learning, into Mechanical Engineering (ME) and to study student learning with this tool in five diverse institutional settings. The Concept Warehouse (CW) is a web-based instructional tool that we developed for Chemical Engineering (ChE) faculty. 
It houses over 3,500 ConcepTests, which are short questions that can rapidly be deployed to engage students in concept-oriented thinking and/or to assess students’ conceptual knowledge, along with more extensive concept-based active learning tools. The CW has grown rapidly during this project and now has over 1,600 faculty accounts and over 37,000 student users. New ConcepTests were created during the current reporting period; the current numbers of questions for Statics, Dynamics, and Mechanics of Materials are 342, 410, and 41, respectively. A detailed review process is in progress, and will continue through the no-cost extension year, to refine question clarity and to identify types of new questions to fill gaps in content coverage. There have been 497 new faculty accounts created since June 30, 2018, and 3,035 unique students have answered these mechanics questions in the CW. We continue to analyze instructor interviews, focusing on 11 cases, all of whom participated in the CW Community of Practice (CoP). For six participants, we were able to compare use of the CW both before and after participation in professional development activities (workshops and/or a community of practice). Interview results have been coded and are currently being analyzed. To examine student learning, we recruited faculty to deploy four common questions in both statics and dynamics. In statics, each instructor agreed to deploy the same four questions (one each for Rigid Body Equilibrium, Trusses, Frames, and Friction) among their overall deployments of the CW. In addition to answering each question, students were asked to provide a written explanation of their reasoning, to rate their confidence in their answers, and to rate the degree to which the questions were clear and promoted deep thinking.
The analysis to date has resulted in a Work-In-Progress paper presented at ASEE 2022, reporting a cross-case comparison of two instructors, and a Work-In-Progress paper to be presented at ASEE 2023, analyzing students’ metacognitive reflections on concept questions.
  5.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer† * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia. Introduction This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data.
Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier to entry and therefore mitigates the risk of excluding interested parties from participating. The challenge: enabling wide participation With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (globally), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. To provide an experience with a low barrier to entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need in-depth knowledge of the specific domain (i.e. no participant should need to be a neuroscientist or epileptologist). 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge (i.e. no participant should need to learn how to process fringe data formats and stream data efficiently). 4. No participant should need to provide their own computing resources. In addition to the above, the platform should further • guide participants through the entire process from sign-up to model submission, • facilitate collaboration, and • provide instant feedback to participants through data visualisation and intermediate online leaderboards.
The platform The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM’s Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM’s Cloud Object Storage (COS) [7], all of which are described in more detail in later sections. (3) The user interface and starter kit were hosted on IBM’s Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants’ custom routines through data visualisations. Using the notebook only, teams were able to run their code on WML, making use of a compute cluster of IBM’s resources.
The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants’ code and trained models. (5) IBM’s Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility Functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform Measuring success The competitive phase of the “Deep Learning Epilepsy Detection Challenge” ran for 6 months. Twenty-five teams, comprising 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM’s WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions achieved seizure-detection performance that would reduce roughly a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication.
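The starter-kit workflow described above (participant-written pre-processing, a model, and post-processing wired together in one notebook) can be sketched roughly as follows. This is a minimal, hypothetical illustration: all function names, the demeaning step, the amplitude-threshold “model”, and the toy signal data are invented for demonstration and are not the actual IBM starter-kit API or a real seizure detector.

```python
# Hypothetical sketch of the pre-process -> model -> post-process pipeline
# that the starter kit let participants fill in. Names and logic are
# illustrative assumptions only.
import numpy as np

def preprocess(raw_segments):
    # participants' custom pre-processing, e.g. removing each segment's DC offset
    return [s - np.mean(s) for s in raw_segments]

def model_predict(segments):
    # placeholder "model": flag a segment when its peak amplitude exceeds a
    # threshold (a real entry would train a TensorFlow or PyTorch model here)
    return [int(np.max(np.abs(s)) > 5.0) for s in segments]

def postprocess(labels, min_run=2):
    # participants' custom post-processing, e.g. suppressing isolated
    # single-segment detections shorter than min_run
    out = list(labels)
    for i, lab in enumerate(labels):
        if lab and sum(labels[max(0, i - 1):i + 2]) < min_run:
            out[i] = 0
    return out

# deterministic toy data standing in for recorded EEG segments
segments = [np.sin(np.linspace(0, 2 * np.pi, 256)) for _ in range(5)]
segments[2] = segments[2] * 10  # inject a two-segment high-amplitude "event"
segments[3] = segments[3] * 10

labels = postprocess(model_predict(preprocess(segments)))
print(labels)  # → [0, 0, 1, 1, 0]
```

In the real platform, code like this ran on WML rather than locally, with COS supplying the data; the notebook structure, not the toy logic, is the point of the sketch.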
Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being Software Engineers, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience. Conclusion Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration among people with different expertise, with a focus on inclusiveness and a low barrier to entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.