
Title: Problem Solving in Genetics: Content Hints Can Help
Problem solving is an integral part of doing science, yet it is challenging for students in many disciplines to learn. We explored student success in solving genetics problems in several genetics content areas using sets of three consecutive questions for each content area. To promote improvement, we provided students the choice to take a content-focused prompt, termed a “content hint,” during either the second or third question within each content area. Overall, for students who answered the first question in a content area incorrectly, the content hints helped them solve additional content-matched problems. We also examined students’ descriptions of their problem solving and found that students who improved following a hint typically used the hint content to accurately solve a problem. Students who did not improve upon receipt of the content hint demonstrated a variety of content-specific errors and omissions. Overall, ultimate success in the practice assignment (on the final question of each topic) predicted success on content-matched final exam questions, regardless of initial practice performance or initial genetics knowledge. Our findings suggest that some struggling students may have deficits in specific genetics content knowledge, which when addressed, allow the students to successfully solve challenging genetics problems.
Authors:
Award ID(s):
1711348
Publication Date:
NSF-PAR ID:
10167083
Journal Name:
CBE—Life Sciences Education
Volume:
18
Issue:
2
Page Range or eLocation-ID:
ar23
ISSN:
1931-7913
Sponsoring Org:
National Science Foundation
More Like this
  1. Practice plays a critical role in learning engineering dynamics. Typical practice in a dynamics course involves solving textbook problems. These problems can impose great cognitive load on underprepared students because they have not mastered the constituent knowledge and skills required for solving whole problems. For these students, learning can be improved by engaging in deliberate practice. Deliberate practice refers to a type of practice aimed at improving specific constituent knowledge or skills. Compared to solving whole problems requiring the simultaneous use of multiple constituent skills, deliberate practice is usually focused on one component skill at a time, which results in less cognitive load and more specificity. Contemporary theories of expertise development have highlighted the influence of deliberate practice (DP) on achieving exceptional performance in sports, music, and various professional fields. Concurrently, there is an emerging method for improving the learning efficiency of novices by combining deliberate practice with cognitive load theory (CLT), a cognitive-architecture-based theory for instructional design. Mechanics is a foundation for most branches of engineering. It serves to develop problem-solving skills and consolidate understanding of other subjects, such as applied mathematics and physics. Mechanics has been a challenging subject: students need to understand governing principles to gain conceptual knowledge and acquire procedural knowledge to apply these principles to solve problems. Due to the difficulty of developing conceptual and procedural knowledge, mechanics courses are among those that receive high DFW rates (the percentage of students receiving a grade of D or F or withdrawing from a course), and students are more likely to leave engineering after taking mechanics courses. Deliberate practice can help novices develop the good knowledge representations needed to produce superior problem-solving performance.
The goal of the present study is to develop deliberate practice techniques that improve learning effectiveness and reduce cognitive load. Our pilot study results revealed that student mental-effort scores were negatively correlated with their knowledge test scores, r = -.29 (p < .05), after using deliberate practice strategies. This supports the claim that deliberate practice can improve student learning while reducing cognitive load. In addition, the higher the students' knowledge test scores, the lower their mental effort was when taking the tests. In other words, the students who used deliberate practice strategies had better learning results with less cognitive load. To design deliberate practice, we often need to analyze students' persistent problems caused by faulty mental models (also referred to as intuitive mental models) and misconceptions. In this study, we continue to conduct an in-depth diagnostic process to identify students' common mistakes and the associated intuitive mental models. We then use the results to develop deliberate practice problems aimed at changing students' cognitive strategies and mental models.
  2. Metacognition is the understanding of one's own knowledge, including awareness of what one does and does not know. This includes knowledge of strategies and regulation of one's own cognition. Studying metacognition is important because higher-order thinking is commonly used, and problem-solving skills are positively correlated with metacognition. A positive prior disposition to metacognition can improve problem-solving skills. Metacognition is a key skill in design and manufacturing, as teams of engineers must solve complex problems. Moreover, metacognition increases individual and team performance and can lead to more original ideas. This study discusses the assessment of metacognitive skills in engineering students by having the students participate in hands-on and virtual reality activities related to design and manufacturing. The study is guided by two research questions: (1) do the proposed activities affect students' metacognition in terms of monitoring, awareness, planning, self-checking, or strategy selection, and (2) are there other components of metacognition that are affected by the design and manufacturing activities? The hypothesis is that participation in the proposed activities will improve the problem-solving skills and metacognitive awareness of the engineering students. A total of 34 undergraduate students participated in the study. Of these, 32 were male and 2 were female students. All students stated that they were interested in pursuing a career in engineering. The students were divided into two groups, with the first group serving as the initial pilot run of the data collection. The first group had 24 students; the second had 10. The groups' demographics were nearly identical.
Analysis of the collected data indicated that problem-solving skills contribute to metacognitive skills and may develop in students before the larger metacognitive constructs of awareness, monitoring, planning, self-checking, and strategy selection. Based on this, we recommend that problem-solving skills and expertise in solving engineering problems be developed in students before the other skills emerge or can be measured. While we are confident that the students who participated in our study have awareness as well as the other metacognitive skills in reading, writing, science, and math, these skills are still developing in relation to engineering problems.
  3. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia. Introduction: This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further in the context of any challenge run on confidential use cases or with sensitive data.
Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier to entry and therefore mitigates the risk of excluding interested parties from participating. The challenge: enabling wide participation. With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (globally), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier to entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist). 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently). 4. No participant should need to provide their own computing resources. In addition to the above, our platform should further guide participants through the entire process from sign-up to model submission, facilitate collaboration, and provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
The platform: The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources.
The starter kit also enabled submission of the final code to a data storage location to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform. Measuring success: The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for six months. Twenty-five teams, comprising 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure-detection performance that could reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication.
Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants. Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being Software Engineers, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering, and neuroscience combined. Conclusion: Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different expertise, with a focus on inclusiveness and a low barrier to entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
  4. Sacristán, A. I. ; Cortés-Zavala, J. C. ; Ruiz-Arias, P. M. (Ed.)
    What impact, if any, do interesting lessons have on the types of questions students ask? To explore this question, we used lesson observations of six teachers from three high schools in the Northeast who were part of a larger study. Lessons came from a range of courses, spanning Algebra through Calculus. After each lesson, students reported interest via lesson experience surveys (Author, 2019). These interest measures were then used to identify each teacher's highest- and lowest-interest lessons. Using two lessons per teacher allows us to compare across the same set of students per teacher. We compiled 145 student questions and identified whether questions were asked within a group work setting or as part of a whole-class discussion. Two coders coded 10% of the data to improve the rubric for the type of students' questions (what, why, how, and if) and perceived intent (factual, procedural, reasoning, and exploratory). Factual questions asked for definitions or explicit answers. Procedural questions were raised when students looked for algorithms or a solving process. Reasoning questions asked about why procedures worked or facts were true. Exploratory questions expanded beyond the topic of focus, such as asking about changing the parameters to make sense of a problem. The remaining 90% of the data were coded independently to determine interrater reliability (see Landis & Koch, 1977). A Cohen's Kappa statistic (K = 0.87, p < 0.001) indicates excellent reliability. Furthermore, both coders reconciled codes before continuing with data analysis. Initial results showed differences between high- and low-interest lessons. Although students raised fewer mathematical questions in high-interest lessons (59) than in low-interest lessons (86), high-interest lessons contained more "exploratory" questions (10 versus 6).
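For readers unfamiliar with the interrater-reliability statistic mentioned above, Cohen's Kappa compares observed coder agreement to the agreement expected by chance. The sketch below uses invented labels for two hypothetical coders (not the study's data), so the resulting value only happens to fall near the reported K = 0.87:

```python
from collections import Counter

# Invented labels from two hypothetical coders over the same 20 student
# questions, using the study's question-type categories (what/why/how/if).
coder_a = ["what", "why", "how", "what", "if", "why", "how", "what", "why", "if",
           "what", "how", "why", "what", "if", "how", "why", "what", "how", "why"]
coder_b = ["what", "why", "how", "what", "if", "why", "how", "why", "why", "if",
           "what", "how", "why", "what", "if", "how", "what", "what", "how", "why"]

n = len(coder_a)
# Observed agreement: fraction of items both coders labelled identically.
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
# Chance agreement: probability both coders pick the same category at random,
# given each coder's marginal label frequencies.
counts_a, counts_b = Counter(coder_a), Counter(coder_b)
p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")
```

Values above roughly 0.8 are conventionally interpreted as excellent agreement in the Landis & Koch scheme the abstract cites.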
A chi-square test of independence shows a significant difference, χ2(3, N = 145) = 12.99, p = .005, for the types of student questions asked in high- and low-interest lessons. The high-interest lessons had more student questions arise during whole-class discussions, whereas low-interest lessons had more student questions during group work. By partitioning each lesson into acts at points where the mathematical content shifted, we were able to examine how many acts each question remained open. The average number of acts that students' questions remained unanswered was higher for high-interest lessons (2.66) than for low-interest lessons (1.68). Paired-samples t-tests suggest that this difference is significant, t(5) = 2.58, p = .049. Therefore, student interest in the lesson did appear to impact the types of questions students ask. One possible reason for the differences in student questions is the nature of the lessons students found interesting, which may allow students the freedom to wonder and chase their mathematical ideas. There may be more student questions overall in low-interest lessons because of confusion, but more research is needed to unpack the reasoning behind student questions.
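A chi-square test of independence like the one reported above can be sketched from a contingency table. In the example below, the row totals (59 and 86) and the exploratory counts (10 and 6) mirror the abstract, but the factual/procedural/reasoning cell counts are invented, so the resulting statistic will not match the reported 12.99:

```python
# Hypothetical 2x4 contingency table: rows are high-/low-interest lessons,
# columns are question intents (factual, procedural, reasoning, exploratory).
# Only the row totals (59, 86) and the exploratory column (10, 6) echo the
# study; the other cell counts are invented for illustration.
observed = [
    [20, 18, 11, 10],  # high-interest lessons (sums to 59)
    [40, 32,  8,  6],  # low-interest lessons (sums to 86)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square: sum of (O - E)^2 / E over all cells, where the
# expected count E assumes row and column factors are independent.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (2-1) * (4-1) = 3
print(f"chi2({df}, N = {n}) = {chi2:.2f}")
# With df = 3, statistics above the critical value 7.815 are
# significant at alpha = .05.
```

In practice one would use `scipy.stats.chi2_contingency`, which also returns the exact p-value; the hand-rolled version above just makes the expected-count arithmetic explicit.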
  5. To meet the rising demand for computer science (CS) courses, K-12 educators need to be prepared to teach introductory concepts and skills in courses such as Computer Science Principles (CSP), which takes a breadth-first approach to CS and includes topics beyond programming such as data, impacts of computing, and networks. Educators are now also being asked to teach more advanced concepts in courses such as the College Board's Advanced Placement Computer Science A (CSA) course, which focuses on advanced programming using Java and includes topics such as objects, inheritance, arrays, and recursion. Traditional CSA curricula have not used content or pedagogy designed to engage a broad range of learners and support their success. Unlike CSP, which is attracting more underrepresented students to computing as it was designed to do, CSA continues to enroll mostly male, white, and Asian students [College Board 2019, Ericson 2020, Sax 2020]. In order to expand CS education opportunities, it is crucial that students have an engaging experience in CSA similar to that in CSP. Well-designed, differentiated professional development (PD) that focuses on content and pedagogy is necessary to meet individual teacher needs, to successfully build teacher skills and confidence to teach CSA, and to improve engagement with students [Darling-Hammond 2017]. It is critical that, as more CS opportunities and courses are developed, teachers remain engaged with their own learning in order to build their content knowledge and refine their teaching practice [CSTA 2020]. CSAwesome, developed and piloted in 2019, offers a College Board-endorsed AP CSA curriculum and PD focused on supporting the transition of teachers and students from CSP to CSA.
This poster presents preliminary findings aimed at exploring the supports and challenges new-to-CSA high school educators face when transitioning from teaching an introductory, breadth-first course such as CSP to teaching the more challenging, programming-focused CSA course. Five teachers who completed the online CSAwesome summer 2020 PD completed interviews in spring 2021. The project employed an inductive coding scheme to analyze interview transcriptions and qualitative notes from teachers about their experiences learning, teaching, and implementing the CSP and CSA curricula. Initial findings suggest that teachers' experience in the CSAwesome PD may improve their confidence in teaching CSA, their ability to use inclusive teaching practices effectively, their ability to empathize with their students, their problem-solving skills, and their motivation to persist when faced with challenges and difficulties. Teachers noted how the CSAwesome PD provided them with a student perspective and increased feelings of empathy. Participants spoke about the implications of the COVID-19 pandemic for their own learning, student learning, and teaching style. Teachers enter the PD with many different backgrounds, CS experience levels, and strengths; however, new-to-CSA teachers require further PD on content and pedagogy to transition between CSP and CSA. Initial results suggest that the CSAwesome PD may have an impact on long-term teacher development, as new-to-CSA teachers who participated indicated a positive impact on their teaching practices, ideologies, and pedagogies.