Title: Demonstration of collaborative and interactive workflow-based data analytics in Texera
Collaborative data analytics is becoming increasingly important due to the growing complexity of data science, the more diverse skills required across disciplines, the increasingly asynchronous schedules of team members, and the global trend of working remotely. In this demo we will show how Texera supports this emerging computing paradigm to achieve high productivity among collaborators with various backgrounds. Based on our active joint projects on the system, we use a social media analysis scenario to show how a data science task can be conducted on a user-friendly yet powerful platform by a multi-disciplinary team that includes domain scientists with limited coding skills and experienced machine learning experts. We will present how to do collaborative editing of a workflow and collaborative execution of the workflow in Texera. We will focus on data-centric features such as synchronization of operator schemas among the users during the construction phase, and monitoring and controlling the shared runtime during the execution phase.
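As a rough illustration of the schema-synchronization idea, the following is a minimal sketch only: it is not Texera's actual API, and every class and function name here is hypothetical. It shows the publish/subscribe shape of the interaction, where one collaborator's edit to an operator's output schema is broadcast to every other editing session.

```python
# Minimal sketch (hypothetical names, not Texera's API): a shared workflow
# broadcasts an operator's new output schema to all collaborators' editors.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Operator:
    op_id: str
    output_schema: List[str]  # attribute names this operator produces

@dataclass
class SharedWorkflow:
    operators: Dict[str, Operator] = field(default_factory=dict)
    subscribers: List[Callable[[str, List[str]], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[str, List[str]], None]) -> None:
        # Each collaborator's editing session registers for schema updates.
        self.subscribers.append(callback)

    def update_schema(self, op_id: str, new_schema: List[str]) -> None:
        # One user edits an operator; the change is pushed to all sessions.
        self.operators[op_id].output_schema = new_schema
        for notify in self.subscribers:
            notify(op_id, new_schema)

# Two collaborators sharing one workflow.
wf = SharedWorkflow()
wf.operators["scan"] = Operator("scan", ["tweet_id", "text"])
wf.subscribe(lambda op, schema: print(f"[alice] {op} now outputs {schema}"))
wf.subscribe(lambda op, schema: print(f"[bob]   {op} now outputs {schema}"))
wf.update_schema("scan", ["tweet_id", "text", "sentiment"])
```

In the real system the propagation would run over the network and trigger re-validation of downstream operators; the sketch only captures the broadcast pattern.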
Award ID(s):
2107150
NSF-PAR ID:
10442816
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the VLDB Endowment
Volume:
15
Issue:
12
ISSN:
2150-8097
Page Range / eLocation ID:
3738 to 3741
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

It is critical to teach all learners to program and to think through programming. Doing so requires that early childhood teacher candidates learn to teach computer science, which in turn requires novel pedagogy that can both help such teachers learn the needed skills and provide a model for their future teaching. In this study, we examined how early childhood teacher candidates learned to program and debug block-based code with and without scaffolding, aiming to see how approaches to debugging vary between early childhood teacher candidates who were provided debugging scaffolds during block-based programming and those who were not. This qualitative case study focused on 13 undergraduates majoring in early childhood education. Data sources included video recordings made during debugging, semi-structured interviews, and (for those who used scaffolding) scaffold responses. Research team members coded the data independently and then came to consensus. With hypothesis-driven scaffolds, participants persisted longer. The scaffolds enabled the instructor to let participants struggle without offering immediate help. Collaborative reasoning was observed among the scaffolded participants, whereas the participants without scaffolds often debugged alone. Regardless of scaffolds, participants often engaged in embodied debugging and also used trial and error. This study provides evidence that one can find success debugging even when engaging in trial and error, which implies that attempting to prevent trial and error may be counterproductive in some contexts. Rather, computer science educators may be advised to promote productive struggle.

     
2. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem
Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†

* These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

Introduction

This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further for any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

The challenge: enabling wide participation

With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist).
2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently).
4. No participant should need to provide their own computing resources.
In addition to the above, our platform should further
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards.

The platform

The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components.
(1) A web portal serves as the entry point to challenge participation, providing challenge information such as timelines, challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge.
(2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below.
(3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data store to which only the challenge team had access.
(4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage, from which it requested recorded data and to which it stored the participants' code and trained models.
(5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms.
(6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
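The starter-kit description above amounts to a fixed pipeline with participant-filled hooks. The following is a hypothetical sketch of that contract: the function names, the toy "model", and the wiring are assumptions for illustration, not the actual IBM starter-kit code (which also handled COS data access and WML deployment invisibly).

```python
# Hypothetical sketch of a starter-kit-style contract: the pipeline wiring
# is fixed; participants fill in the three hooks. All names are assumed.
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    # Participant-defined pre-processing, e.g. per-channel centering.
    return raw - raw.mean(axis=1, keepdims=True)

def predict(features: np.ndarray) -> np.ndarray:
    # Participant-defined model. A trivial amplitude threshold stands in
    # here for a real TensorFlow/PyTorch seizure detector.
    return (np.abs(features).max(axis=1) > 3.5).astype(int)

def postprocess(labels: np.ndarray) -> np.ndarray:
    # Participant-defined post-processing, e.g. smoothing detections.
    return labels

def run_pipeline(raw: np.ndarray) -> np.ndarray:
    # Fixed by the kit: in the real system this step would also bundle the
    # code, ship it to shared GPUs, and store results in object storage.
    return postprocess(predict(preprocess(raw)))

# Three synthetic "EEG" recordings of 1000 samples each.
rng = np.random.default_rng(0)
print(run_pipeline(rng.standard_normal((3, 1000))))
```

Keeping the hooks this narrow is what lets non-experts participate: the surrounding data plumbing never has to be touched.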
Figure 1: High-level architecture of the challenge platform

Measuring success

The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performance that could reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.

Figure 2: Primary occupation as reported by challenge participants

Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering, and neuroscience combined.

Conclusion

Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
3. Capture the Flag (CTF) games improve learners' engagement and diversify pedagogy for education and training. We design and build a novel CTF game that includes coordination and interaction between the (virtually participating) players to build fellowship and facilitate networking. Our work builds on the existing CTF components with educational benefits but differs from the traditional CTF approach, which presents either an individual game with no participant interaction or a team-based game whose members already know each other and have formed teams. More specifically, we incorporate real-time interactions between participants who are new to each other and engage them in collectively solving the CTF challenges. We apply our CTF in both a cybersecurity scholarship program and an academic conference. This paper describes and explains the design, implementation, execution, and validation of our CTF, focusing particularly on the novel goal of including coordination and interaction in order to build fellowship among the participants. We validate our CTF design and implementation through multiple channels, including real-time data logged during the session, a post-CTF survey, and interviews from the beta-testing session. Our evaluation results show that our novel CTF, with its focus on coordination and interaction, aids in building fellowship and a collaborative environment. We envision our CTF design helping with rapport building and collaboration among participants in classroom/course settings, workshops, conferences, and technical training sessions.
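As a toy illustration of the session logging mentioned for validation (purely hypothetical; the event kinds and fields are not taken from the paper), one could record each coordination event during the game and tally collaborative versus solo solves afterwards:

```python
# Hypothetical sketch of session logging for measuring coordination in a
# CTF; event kinds and fields are illustrative, not from the paper.
import time
from collections import Counter
from typing import Dict, List

LOG: List[Dict] = []

def log_event(kind: str, actors: List[str], challenge: str) -> None:
    # Record who interacted with whom, on which challenge, and when.
    LOG.append({"t": time.time(), "kind": kind,
                "actors": actors, "challenge": challenge})

# Participants who are new to each other coordinate on shared challenges.
log_event("hint_shared", ["p1", "p2"], "crypto-101")
log_event("joint_solve", ["p1", "p2", "p3"], "crypto-101")
log_event("solo_solve", ["p4"], "web-200")

# Post-session analysis: how much solving was collaborative vs. alone?
print(dict(Counter(e["kind"] for e in LOG)))
```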
4. Major challenges in engineering education include retention of undergraduate engineering students (UESs) and continued engagement after the first year, when concepts increase in difficulty. Additionally, employers, as well as ABET, look for students to demonstrate non-technical skills, including the ability to work successfully in groups, the ability to communicate both within and outside their discipline, and the ability to find information that will help them solve problems and contribute to lifelong learning. Teacher education is also facing challenges given the recent incorporation of engineering practices and core ideas into the Next Generation Science Standards (NGSS) and state-level standards of learning. To help teachers meet these standards in their classrooms, education courses for preservice teachers (PSTs) must provide resources and opportunities to increase science and engineering knowledge and the associated pedagogies. To address these challenges, Ed+gineering, an NSF-funded multidisciplinary collaborative service learning project, was implemented in two sets of paired classes in engineering and education: a 100-level mechanical engineering class (n = 42) paired with a foundations class in education (n = 17), and a fluid mechanics class in mechanical engineering technology (n = 23) paired with a science methods class (n = 15). The paired classes collaborated in multidisciplinary teams of 5-8 undergraduate students to plan and teach engineering lessons to local elementary school students. Teams completed a series of previously tested, scaffolded activities to guide their collaboration. Designing and delivering lessons engaged university students in collaborative processes that promoted social learning, including researching and planning, peer mentoring, teaching and receiving feedback, and reflecting on and revising their engineering lesson. The research questions examined in this pilot, mixed-methods research study include: (1) How did PSTs' Ed+gineering experiences influence their engineering and science knowledge? (2) How did PSTs' and UESs' Ed+gineering experiences influence their pedagogical understanding? and (3) What were PSTs' and UESs' overall perceptions of their Ed+gineering experiences? Both quantitative (e.g., Engineering Design Process assessment, Science Content Knowledge assessment) and qualitative (student reflections) data were used to assess knowledge gains and project perceptions following the semester-long intervention. Findings suggest that the PSTs were more aware of and comfortable with the engineering field following lesson development and delivery, and often better able to explain particular science/engineering concepts. Both PSTs and UESs, but especially the latter, came to realize the importance of planning and preparing lessons to be taught to an audience. UESs reported greater appreciation for the work of educators. PSTs and UESs expressed how they learned to work in groups with multidisciplinary members, a valuable lesson for their respective professional careers. Yearly, the Ed+gineering research team will also request and review student retention reports in the respective programs to assess project impact.
5. We present an ethnographic study of secure software development processes in a software company using the anthropological research method of participant observation. Two PhD students in computer science trained in qualitative methods were embedded in a software company for 1.5 years of total research time. The researchers participated in everyday work activities such as coding and meetings, and observed software (in)security phenomena both by investigating historical data (code repositories and ticketing system records) and by pen-testing the developed software and observing developers' and management's reactions to the discovered vulnerabilities. Our study found that 1) security vulnerabilities are sometimes intentionally introduced and/or overlooked due to the difficulty of managing the various stakeholders' responsibilities in an economic ecosystem, and cannot simply be blamed on developers' lack of knowledge or skills; and 2) accidental vulnerabilities discovered in the pen-testing process produce different reactions in the development team, oftentimes contrary to what a security researcher would predict. These findings highlight the nuanced nature of the root causes of software vulnerabilities and indicate the need to take into account a significant amount of contextual information to understand how and why software vulnerabilities emerge during software development. Rather than simply addressing deficits in developer knowledge or practice, this research sheds light on sometimes forgotten human factors that significantly impact the security of software developed by actual companies. Our analysis also shows that improving software security in the development process can benefit from a co-creation model, in which security experts work side by side with software developers to better identify security concerns and provide tools that are readily applicable within the specific context of the software development workflow.