- Award ID(s):
- NSF-PAR ID:
- Date Published:
- Journal Name: Journal of Expertise
- Page Range / eLocation ID: 190-207
- Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem

Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†

* These authors contributed equally to this work
† Corresponding authors: email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com
◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section

J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

Introduction

This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis to the development of brain-machine interfaces. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, which unfortunately leads to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further for challenges run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

The challenge: enabling wide participation

With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH). TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:

1. No participant should need to have in-depth knowledge of the specific domain (i.e. no participant should need to be a neuroscientist or epileptologist).
2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e. no participant should need to learn how to process fringe data formats and stream data efficiently).
4. No participant should need to provide their own computing resources.

In addition to the above, our platform should further
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards.

The platform

The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components.

(1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, as well as scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge.

(2) IBM Watson Studio is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS), all of which are described in more detail below.

(3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow and PyTorch, was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data store to which only the challenge team had access. (A hypothetical sketch of this notebook structure is shown after Figure 1 below.)

(4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants' code and trained models.

(5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms.

(6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries, and to provide seamless access to all the IBM services used.

Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.

Figure 1: High-level architecture of the challenge platform
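To make the workflow concrete, here is a minimal sketch of what a starter-kit notebook of this kind might look like: fixed scaffolding around three "dedicated spots" for pre-processing, model definition, and post-processing. Every name below (the loader, the hooks, the training loop) is a hypothetical placeholder; the actual kit's data loading, COS/WML integration, and submission helpers are not public, so they are simulated here with synthetic data.

```python
# Hypothetical starter-kit notebook structure (not the real platform API).
# Data loading and submission are simulated; in the real kit these were
# handled by utility functions wrapping COS and WML.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# --- scaffolding (stood in for by platform utility functions) ---
def load_training_data(n_windows=256, n_channels=16, n_samples=400):
    """Simulate windowed EEG recordings with binary seizure labels."""
    x = rng.standard_normal((n_windows, n_channels, n_samples)).astype(np.float32)
    y = rng.integers(0, 2, n_windows).astype(np.float32)
    return x, y

# --- dedicated spot 1: custom pre-processing ---
def preprocess(x):
    # Example: per-channel standardisation of each window.
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-8)

# --- dedicated spot 2: machine learning model (PyTorch, per the abstract) ---
model = nn.Sequential(
    nn.Conv1d(16, 8, kernel_size=7), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(8, 1),
)

# --- dedicated spot 3: post-processing of raw model outputs ---
def postprocess(logits, threshold=0.5):
    # Turn per-window logits into binary seizure/no-seizure labels.
    return (torch.sigmoid(logits) > threshold).float()

# --- training loop (in the real kit, bundled and executed remotely on WML) ---
x, y = load_training_data()
x_t = torch.from_numpy(preprocess(x))
y_t = torch.from_numpy(y).unsqueeze(1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x_t), y_t)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

labels = postprocess(model(x_t))  # a submission step would hand these to the scorer
```

The design point this illustrates is the one the abstract emphasises: participants only ever touch the three hook functions, while data access, remote execution, and submission stay behind the scaffolding.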
Measuring success

The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran their algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performances that allow a hundred-fold reduction in the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists.

Figure 2: Primary occupation as reported by challenge participants

Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being Software Engineers, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering, and neuroscience combined.

Conclusion

Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
Although teamwork is being integrated throughout engineering education because of the perceived benefits of teams, the construct of psychological safety has been largely ignored in engineering research. This omission is unfortunate, because psychological safety reflects collective perceptions about how comfortable team members feel in sharing their perspectives, and it has been found to positively impact team performance in samples outside of engineering. Engineering team research has also been hampered by "snap-shot" methodologies and the resulting lack of investigation into the dynamic changes that happen within a team over the course of a project. This is problematic, because we do not know when, how, or what type of interventions are needed to effectively improve "t-shaped" engineering skills like teamwork, communication, and engaging successfully in a diverse team. In light of these issues, the goal of the current study was to understand how psychological safety might be measured practically and reliably in engineering student teams over time. In addition, we sought to identify the trajectory of psychological safety for engineering design student teams and to identify the potential factors that impact the building and waning of psychological safety in these teams. This was accomplished through a 4-week study with 12 engineering design teams where data were captured at six time points. The results of this study present some of the first evidence on the reliability of psychological safety measures in engineering student populations. The results also help begin to answer some difficult fundamental questions on supporting team performance in engineering education.
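The abstract above asks whether psychological safety can be measured "practically and reliably" over time. One standard internal-consistency check for a survey scale at each time point is Cronbach's alpha; the sketch below is purely illustrative, applying the textbook alpha formula to fabricated team-survey ratings, and is not the study's actual instrument or analysis.

```python
# Illustrative Cronbach's alpha for a multi-item team survey scale
# (e.g. a 7-item psychological safety questionnaire at one time point).
# All ratings below are fabricated for the example.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
base = rng.normal(5, 1, size=(60, 1))           # each respondent's latent level
ratings = np.clip(np.round(base + rng.normal(0, 0.7, size=(60, 7))), 1, 7)

# In a longitudinal design like the 6-time-point study above, alpha would be
# computed separately per administration; >= ~0.7 is a common acceptability bar.
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```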
The emphasis on conceptual learning and the development of adaptive instructional design are both emerging areas in science and engineering education. Instructors are writing their own conceptual questions to promote active learning during class and utilizing pools of these questions in assessments. For adaptive assessment strategies, these questions need to be rated based on difficulty level (DL). Historically, DL has been determined from the performance of a suitable number of students. The research study reported here investigates whether instructors can save time by predicting the DL of newly written conceptual questions without the need for student data. In this paper, we report on the development of one component in an adaptive learning module for materials science – specifically on the topic of crystallography. The summative assessment element consists of five DL scales and 15 conceptual questions. This adaptive assessment directs students based on their previous performance and the DL of the questions. Our five expert participants are faculty members who have taught the introductory Materials Science course multiple times. They provided predictions for how many students would answer each question correctly during a two-step process. First, predictions were made individually, without an answer key. Second, experts had the opportunity to revise their predictions after being given an answer key in a group discussion. We compared expert predictions with actual student performance using results from over 400 students spanning multiple courses and terms. We found no clear correlation between expert predictions of the DL and the DL measured from student performance. Some evidence suggests that the discussion during the second step brought expert predictions closer to student performance. We suggest that, in determining the DL of conceptual questions, relying on predictions by experts who have taught the course is not a valid route. The findings in this paper can be applied to assessments in in-person, hybrid, and online settings and are applicable to subject matter beyond materials science.
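A minimal sketch of the kind of comparison this abstract describes: measured difficulty is taken here as the fraction of students answering each question incorrectly, and expert predictions are compared against it with a rank correlation. Treating DL as fraction-incorrect and using Spearman's rho are illustrative assumptions; the numbers are fabricated, not the study's data.

```python
# Hypothetical comparison of expert-predicted vs. measured difficulty level
# for 15 conceptual questions answered by ~400 students. All values simulated.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

n_questions, n_students = 15, 400
p_correct = rng.uniform(0.2, 0.9, n_questions)      # true per-question rates
answers = rng.random((n_students, n_questions)) < p_correct
measured_dl = 1 - answers.mean(axis=0)              # fraction answering incorrectly

# Experts guess each question's difficulty; here the guesses are unrelated to
# the true rates, mimicking the paper's "no clear correlation" result.
expert_pred_dl = rng.uniform(0.1, 0.9, n_questions)

rho, p = spearmanr(expert_pred_dl, measured_dl)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```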
Research Problem: Climate change is one of the most important environmental, social, and economic issues of our time. The documented impacts of climate change are extensive. Climate change education can help students link this global issue to their everyday lives, foster a climate-literate public, and serve as motivation for action. Yet prior to instructional interventions, the first step in promoting conceptual change is to describe expert and novice conceptions or mental models of the topic (Treagust and Duit 2009). Published studies about students' climate change knowledge primarily stem from the earth and atmospheric sciences and focus on students' knowledge of the mechanisms causing global warming and of the abiotic systems important to climate change. Limited research has documented undergraduate students' knowledge about the biotic impacts of climate change. Our goal was to describe student/novice and instructor/expert conceptual knowledge of the biotic impacts of climate change.

Research Design: We conducted interviews with 30 undergraduates and 10 instructors who are students in, or teaching, Introductory Biology or Ecology classes. Our semi-structured interview protocol probed participants' conceptions of the mechanisms, outcomes, and levels of impact that climate change has on the biological world. Participants were drawn from varying institutions across the US (Baccalaureate, Master's, and Doctoral).

Analyses: Following transcription of all interviews, we used thematic coding analysis to describe novice and expert conceptions of the biotic impacts of climate change. We also compared across interview populations to describe how novice and expert conceptions differ.

Contribution: Our findings contribute to the understanding of biology student and expert knowledge of the biotic impacts of climate change, and contribute more broadly to the field of climate science, where research on understanding of the biotic impacts of climate change is minimal. Our work represents a novel perspective because most climate education research at the university level has focused on earth and atmospheric science students. Further, this work is the first step in a larger project that aims to develop a valid and reliable concept inventory related to biotic impacts of climate change – an instrument sorely needed to properly address improvements to climate change education.
The COVID-19 disease pandemic is one of the most pressing global health issues of our time. Nevertheless, responses to the pandemic exhibit a stark ideological divide, with political conservatives (versus liberals/progressives) expressing less concern about the virus and less behavioral compliance with efforts to combat it. Drawing from decades of research on the psychological underpinnings of ideology, in four studies (total N = 4441) we examine the factors that contribute to the ideological gap in pandemic response, across domains including personality (e.g., empathic concern), attitudes (e.g., trust in science), information (e.g., COVID-19 knowledge), vulnerability (e.g., preexisting medical conditions), demographics (e.g., education, income), and environment (e.g., local COVID-19 infection rates). This work provides insight into the most proximal drivers of this ideological divide and also helps fill a long-standing theoretical and empirical gap regarding how these various ideological differences shape responses to complex real-world sociopolitical events. Among our key findings are the central role of attitude- and belief-related factors (e.g., trust in science and trust in Trump), and the relatively weaker influence of several domain-general personality factors (empathic concern, disgust sensitivity, conspiratorial ideation). We conclude by considering possible explanations for these findings and their broader implications for our understanding of political ideology.

Highlights
Stark ideological differences exist across a wide range of attitudinal and behavioral indices of pandemic response, with more conservative individuals reliably exhibiting less concern about the virus. These findings illustrate the extent to which the pandemic has become politicized.
A range of factors contribute to this ideological gap in pandemic response, but some are substantially more important than others; one simple way to quantify such relative importance is sketched after these highlights.
Several factors that have received attention in public and academic discourse about the pandemic appear to contribute little, if at all, to the ideological divide. These include news following, scientific literacy, perceived social norms, and knowledge about the virus.
The most critical factors appear to be trust in scientists and trust in Trump, which further highlights the politicization of COVID‐19 and, importantly, the antagonistic nature of these two beliefs. Efforts to change and, especially, disentangle these two attitudes have the potential to be effective interventions.
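One common way to quantify which predictors matter most, as referenced in the highlights above, is to regress the outcome on standardised predictors and compare coefficient magnitudes. The sketch below does this on fabricated data with hypothetical variable names (trust_science, trust_trump, empathic_concern, covid_knowledge); it is not the studies' actual model, data, or importance metric.

```python
# Illustrative comparison of predictor importance for a pandemic-response
# outcome via standardised OLS coefficients. All data are simulated; variable
# names are hypothetical stand-ins for the domains listed in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n = 4441  # matching the reported total sample size

predictors = {
    "trust_science":    rng.normal(size=n),
    "trust_trump":      rng.normal(size=n),
    "empathic_concern": rng.normal(size=n),
    "covid_knowledge":  rng.normal(size=n),
}
# Simulate an outcome dominated by the two trust variables, echoing the
# reported pattern of attitude/belief factors outweighing personality factors.
concern = (0.6 * predictors["trust_science"] - 0.5 * predictors["trust_trump"]
           + 0.1 * predictors["empathic_concern"] + rng.normal(scale=1.0, size=n))

X = np.column_stack([(v - v.mean()) / v.std() for v in predictors.values()])
X = np.column_stack([np.ones(n), X])              # add intercept column
y = (concern - concern.mean()) / concern.std()    # standardise the outcome too

beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # ordinary least squares fit
for name, b in zip(predictors, beta[1:]):
    print(f"{name:>16}: beta = {b:+.2f}")
```

Standardised coefficients are only one of several importance measures; dominance analysis and Shapley-value decompositions are common alternatives when predictors are correlated.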