Title: The Deep Learning Epilepsy Detection Challenge: Design, Implementation, and Test of a New Crowd-Sourced AI Challenge Ecosystem
The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem

Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†

* These authors contributed equally to this work.
† Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com
◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section.

J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

Introduction

This decade has seen an ever-growing number of scientific fields benefit from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, which unfortunately shrinks the pool of possible participants and loses experts who could dedicate their time to solving important problems. Participation is restricted even further for any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

The challenge: enabling wide participation

With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:

1. No participant should need in-depth knowledge of the specific domain (i.e. no participant should need to be a neuroscientist or epileptologist).
2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e. no participant should need to learn how to process fringe data formats or stream data efficiently).
4. No participant should need to provide their own computing resources.

In addition to the above, our platform should further
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards.

The platform

The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components.

(1) A web portal serves as the entry point to challenge participation, providing challenge information such as timelines, challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge.

(2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below.

(3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook alone, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data store to which only the challenge team had access.

(4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage, from which it requested recorded data and to which it stored the participants' code and trained models.

(5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms.

(6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process the data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all IBM services used.

Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Illustrative sketches of how a starter-kit notebook cell and a streaming utility might be structured are given below.
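To make the workflow concrete, the following is a minimal sketch of how the participant-editable parts of such a starter-kit notebook might look: a pre-processing function, a model, and a post-processing step that the kit bundles and ships to WML. The toy model and the helper name submit_to_wml are hypothetical stand-ins, not the challenge's actual API; only standard NumPy and PyTorch calls are used.

```python
# Illustrative sketch only: submit_to_wml and the toy model are hypothetical
# placeholders for the starter kit's utilities, not the actual challenge API.
import numpy as np
import torch
import torch.nn as nn

def preprocess(eeg_segment: np.ndarray) -> np.ndarray:
    """Participant-defined pre-processing, e.g. per-channel normalisation."""
    mean = eeg_segment.mean(axis=-1, keepdims=True)
    std = eeg_segment.std(axis=-1, keepdims=True)
    return (eeg_segment - mean) / (std + 1e-8)

class SeizureDetector(nn.Module):
    """Participant-defined model: a small 1D CNN over multi-channel EEG windows."""
    def __init__(self, n_channels: int = 22):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, 2),  # seizure vs. background
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def postprocess(logits: torch.Tensor) -> np.ndarray:
    """Participant-defined post-processing, e.g. thresholding per window."""
    return logits.softmax(dim=-1)[:, 1].detach().numpy() > 0.5

# The starter kit (hypothetically) bundles these callables and deploys them to
# Watson Machine Learning, which reads training data from Cloud Object Storage:
#   job = submit_to_wml(preprocess, SeizureDetector(), postprocess)
#   job.monitor()  # intermediate metrics feed the online leaderboard
```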
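The streaming utilities mentioned in (6) build lazy pipelines with NutsFlow's >> operator, which lets data flow through a chain of small processing steps without loading full recordings into memory. The sketch below illustrates that style on a hypothetical EEG-window generator; the generator, labels, and normalisation step are invented for illustration, while Map, Filter, and Collect are core NutsFlow operators.

```python
# Minimal NutsFlow-style streaming sketch. Only Map, Filter, and Collect are
# the library's own operators; the window generator and labels are hypothetical.
import numpy as np
from nutsflow import Collect, Filter, Map

def eeg_windows(n_windows=100, n_channels=22, n_samples=256):
    """Hypothetical generator yielding (window, label) pairs lazily."""
    rng = np.random.default_rng(0)
    for _ in range(n_windows):
        yield rng.standard_normal((n_channels, n_samples)), int(rng.random() < 0.1)

# Normalise each window and keep only seizure-labelled ones, all lazily.
normalise = Map(lambda wl: ((wl[0] - wl[0].mean()) / (wl[0].std() + 1e-8), wl[1]))
seizures_only = Filter(lambda wl: wl[1] == 1)

seizure_windows = eeg_windows() >> normalise >> seizures_only >> Collect()
print(len(seizure_windows))
```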
Figure 1: High-level architecture of the challenge platform

Measuring success

The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran their algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performance that allows a hundred-fold reduction in the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently being prepared for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.

Figure 2: Primary occupation as reported by challenge participants

Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being Software Engineers, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No single participant had deep knowledge and experience in all of data science, software engineering, and neuroscience.

Conclusion

Given the growing complexity of data science problems and ever-increasing dataset sizes, solving these problems requires enabling collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, and with a variety of skills, including sales and design, participated in this highly technical challenge.
Award ID(s):
1827565
PAR ID:
10199670
Author(s) / Creator(s):
Date Published:
Journal Name:
Challenges in Machine Learning Competitions for All (CiML)
Volume:
1
Issue:
1
Page Range / eLocation ID:
1-3
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We identify and describe episodes of sensemaking around challenges in modern Artificial-Intelligence (AI)-based systems development that emerged in projects carried out by IBM and client companies. All projects used IBM Watson as the development platform for building tailored AI-based solutions to support workers or customers of the client companies. Yet, many of the projects turned out to be significantly more challenging than IBM and its clients had expected. The analysis reveals that project members struggled to establish reliable meanings about the technology, the project, context, and data to act upon. The project members report multiple aspects of the projects that they were not expecting to need to make sense of yet were problematic. Many issues bear upon the current-generation AI’s inherent characteristics, such as dependency on large data sets and continuous improvement as more data becomes available. Those characteristics increase the complexity of the projects and call for balanced mindfulness to avoid unexpected problems. 
  2. Given the importance of broadening participation in the field of computing, goals of supporting personal expression and developing a sense of belonging must live alongside the goals of conceptual knowledge and developing disciplinary expertise. Integrating opportunities for students to be creative in how they enact computing ideas plays an important role when designing curricula. We examine how student creativity, as expressed through theme and the use of costumes, backdrops, and narrative in Scratch projects, is affected by using a themed starter project. Starter projects are Scratch projects that include a set of sprites and backdrops aligned to a theme (e.g. baseball), but no code. Using within-group and between-group comparisons, we establish a baseline of what students do when they are given a starter project and explore how their projects differ in the absence of a starter project. This work contributes to our understanding of the impacts of structured elements within open-ended learning tasks and how we can design computer science learning experiences for students that promote opportunities for self-expression while engaging them in computing.
  3. Effective fraud prevention and participant validation are essential for ensuring data quality in today's highly-digitized research landscape. Increasingly sophisticated bots and high levels of fraudulent participants have generated a need for more complex and nuanced methods to combat fraudulent activity. In this paper, we share our experiences with fraudulent survey responses, which we encountered in our work around abortion storytelling, and the multi-stage protocol that we developed to validate participants. We found that effective fraud prevention should start early and include a variety of flagging methods to encourage holistic pattern-searching in data. Researchers should overestimate the amount of time they will need to validate participants and consider asking participants to assist in the validation process. We encourage researchers to be transparent about the interpretive nature of this work. To this end, we contribute a Participant Validation Guide in supplemental materials for community members to adapt in their own practices. 
  4. Communications infrastructures and compute resources are critical to enabling advanced science research projects. Science cyberinfrastructures must meet clear performance requirements, must be adjustable to changing requirements, and must facilitate reproducibility. These characteristics can be met by a programmable infrastructure with guaranteed resources, such as the BRIDGES infrastructure enabling cross-Atlantic research projects. While programmability should be a foundational design principle for research cyberinfrastructures, by itself it might not be sufficient to enable scientists who have little or no experience with advanced IT technologies to operate their testbeds independently of IT support teams. The trend of offering "no code" platforms that enable users without core IT competency to achieve business goals should manifest itself in the context of research and educational infrastructures as well. In this paper we describe the architecture of a "no code" platform that would enable scientists to easily configure and modify a programmable infrastructure by using a large language model-based interface integrated with the composable services language of the infrastructure. The BRIDGES testbed is used as an example of such an integration, where the functionality benefits projects operated by large, diverse teams.
  5. Our world’s complex challenges increase the need for those entering STEAM (Science, Technology, Engineering, Arts, and Math) disciplines to be able to creatively approach and collaboratively address wicked problems – complex problems with no “right” answer that span disciplines. Hackathons are environments that leverage problem-based learning practices so student teams can solve problems creatively and collaboratively by developing a solution to given challenges using engineering and computer science knowledge, skills, and abilities. The purpose of this paper is to offer a framework for interdisciplinary hackathon challenge development, as well as provide resources to aid interdisciplinary teams in better understanding the context and needs of a hackathon to evaluate and refine hackathon challenges. Three cohorts of interdisciplinary STEAM researchers were observed and interviewed as they collaboratively created a hackathon challenge incorporating all cohort-member disciplines for an online high school hackathon. The observation data and interview transcripts were analyzed using thematic analysis to distill the processes cohorts underwent and resources that were necessary for successfully creating a hackathon challenge. Through this research we found that the cohorts worked through four sequential stages as they collaborated to create a hackathon challenge. We detail the stages and offer them as a framework for future teams who seek to develop an interdisciplinary hackathon challenge. Additionally, we found that all cohorts lacked the knowledge and experience with hackathons to make fully informed decisions related to the challenge’s topic, scope, outcomes, etc. In response, this manuscript offers five hackathon quality considerations and three guiding principles for challenge developers to best meet the needs and goals of hackathon sponsors and participants. 