
Title: Live Coding: A Review of the Literature
One of the goals of computing education research is to document the potential strengths and weaknesses of contemporary teaching methods in computing. Live coding has recently gained attention as one of the best practices for teaching programming. To offer a more comprehensive understanding of the existing body of research about live coding, we reviewed papers in computing education research that investigated the value of live coding in an educational setting. We categorized each paper based on (1) how it defines live coding, (2) whether its version of live coding could be considered active learning, (3) the type of study conducted, (4) the types of data collected and the data analysis methods used, (5) the evidence provided for the effectiveness of live coding, (6) the reported benefits and drawbacks of live coding, and (7) the theoretical frameworks used to explain the basis, effects, or goals of live coding. We found that although live coding has been recommended as one of the best practices for teaching programming, there is a lack of empirical evidence to support claims about its effectiveness on student learning. Finally, we discuss the implications of our findings and suggest future research directions that could develop a more holistic understanding of this pedagogical technique.
Authors:
Award ID(s):
2044473
Publication Date:
NSF-PAR ID:
10313400
Journal Name:
26th ACM Conference on Innovation and Technology in Computer Science Education
Sponsoring Org:
National Science Foundation
More Like this
  1. Need/Motivation (e.g., goals, gaps in knowledge) The ESTEEM project implemented a STEM capacity-building effort through students’ early access to sustainable and innovative STEM stepping stones called Micro-Internships (MIs). The goal is to reap key benefits of full-length internships and undergraduate research experiences in an abbreviated format, including access, success, degree completion, and transfer, and to recruit and retain more Latinx and underrepresented students into the STEM workforce. The MIs are designed to provide students at a community college and HSI with authentic STEM research and applied learning experiences (ALEs); support for an appropriate STEM pathway/career; and the preparation and confidence to succeed in STEM, engage in summer-long REUs, and achieve improved outcomes. The MI projects are accessible early to more students and build momentum to better overcome critical obstacles to success. The MIs are shorter, flexibly scheduled throughout the year, and easily accessible, and participation in multiple MIs is encouraged. ESTEEM also establishes a sustainable and collaborative model, working with partners from BSCS Science Education, for MI mentoring, training, compliance, and capacity building, with shared values and practices to maximize the improvement of student outcomes. New Knowledge (e.g., hypothesis, research questions) Research indicates that REU/internship experiences can be particularly powerful for students from Latinx and underrepresented groups in STEM. However, those experiences are difficult to access for many HSI-community college students (85% of our students hold off-campus jobs), and lack of confidence is a barrier for a majority of our students.
The gap between those who can and those who cannot is the “internship access gap.” This project is at a central California Community College (CCC) and HSI, the only affordable post-secondary option in a region serving a population historically underrepresented in STEM: 75% are Hispanic, and 87% have not completed college. The MI is designed to reduce inequalities inherent in the internship paradigm by providing access to professional and research skills for these underserved students. The MI has been designed to reduce barriers by offering: shorter duration (25 contact hours); flexible timing (from one week to once a week over many weeks); open access to large groups; and a proximal location (on-campus). MI mentors participate in week-long summer workshops and an ongoing monthly community of practice with the goals of co-constructing a shared vision, engaging in conversations about pedagogy and learning, and sustaining the MI program going forward. Approach (e.g., objectives/specific aims, research methodologies, and analysis) Research Question and Methodology: We want to know: How does participation in a micro-internship affect students’ interest in and confidence to pursue STEM? We used a mixed-methods design triangulating quantitative Likert-style survey data with interpretive coding of open responses to reveal themes in students’ motivations, attitudes toward STEM, and confidence. Participants: The study sampled students enrolled either part-time or full-time at the community college. Although each MI was classified within STEM, all were open to any interested student in any major. Demographically, participants self-identified as 70% Hispanic/Latinx, 13% Mixed-Race, and 42% female. Instrument: Student surveys were developed from two previously validated instruments that examine the impact of the MI intervention on student interest in STEM careers and in pursuing internships/REUs.
The pre- and post-surveys (the latter repeated periodically to assess longitudinal outcomes) also included relevant open-response prompts. The surveys collected students’ demographics; interest, confidence, and motivation in pursuing a career in STEM; perceived obstacles; and past experiences with internships and MIs. 171 students had responded to the pre-survey at the time of submission. Outcomes (e.g., preliminary findings, accomplishments to date) Because we just finished year 1, we lack longitudinal data at this time to reveal whether student confidence is maintained over time and whether students are more likely to (i) enroll in more internships, (ii) transfer to a four-year university, or (iii) shorten the time it takes to attain a degree. For short-term outcomes, students significantly increased their confidence to continue pursuing opportunities to develop within the STEM pipeline, including full-length internships, completing STEM degrees, and applying for jobs in STEM. For example, using a two-tailed t-test we compared means before and after the MI experience; 15 out of the 16 questions that showed improvement in scores were related to student confidence to pursue STEM or perceived enjoyment of a STEM career. Findings from the free-response questions showed that the majority of students reported enrolling in the MI to gain knowledge and experience. After the MI, 66% of students reported having gained valuable knowledge and experience, and 35% of students spoke about gaining confidence and/or momentum to pursue STEM as a career.
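The pre/post mean comparison described above can be sketched as a paired t statistic on matched survey responses; a minimal sketch, assuming paired Likert-scale ratings from the same students (the data below are illustrative, not the study’s actual responses):

```python
# Hypothetical sketch of a paired pre/post comparison: compute the paired
# t statistic by hand from the per-student differences. Ratings are made up.
import math
import statistics

pre  = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]   # pre-MI confidence ratings (1-5)
post = [4, 3, 4, 4, 3, 4, 5, 3, 4, 4]   # post-MI ratings from the same students

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)            # sample std dev of the differences
t_stat = mean_d / (sd_d / math.sqrt(n))   # paired t statistic, df = n - 1

print(f"t({n - 1}) = {t_stat:.2f}")
```

The resulting statistic would then be compared against the t distribution with n − 1 degrees of freedom for a two-tailed test, as in the abstract’s analysis.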
Broader Impacts (e.g., the participation of underrepresented minorities in STEM; development of a diverse STEM workforce; enhanced infrastructure for research and education) The ESTEEM project has the potential for a transformational impact on access to and success in STEM undergraduate education for underrepresented and Latinx community college students, as well as for STEM capacity building at Hartnell College, a CCC and HSI, for students, faculty, professionals, and processes that foster research in STEM and education. Through sharing and transferring the ESTEEM model to similar institutions, the project has the potential to change the way students are served at an early and critical stage of their higher education experience at CCCs: one in every five community college students in the nation attends a CCC, over 67% of CCC students identify with ethnic backgrounds that are not White, and 40 to 50% of University of California and California State University graduates in STEM started at a CCC, making CCCs a key leverage point for recruiting and retaining a more diverse STEM workforce.
  2. Abstract There is growing consensus that teaching computer ethics is important, but there is little consensus on how to do so. One unmet challenge is increasing the capacity of computing students to make decisions about the ethical challenges embedded in their technical work. This paper reports on the design, testing, and evaluation of an educational simulation to meet this challenge. The privacy-by-design simulation enables more relevant and effective computer ethics education by letting students experience and make decisions about common ethical challenges encountered in real-world work environments. This paper describes the process of incorporating empirical observations of ethical questions in computing into an online simulation and an in-person board game. We employed the Values at Play framework to transform empirical observations of design into a playable educational experience. First, we conducted qualitative research to discover when and how values levers—practices that encourage values discussions during technology development—occur during the design of new mobile applications. We then translated these findings into gameplay elements, including the goals, roles, and elements of surprise incorporated into a simulation. We ran the online simulation in five undergraduate computer and information science classes. Based on this experience, we created a more accessible board game, which we tested in two undergraduate classes and two professional workshops. We evaluated the effectiveness of both the online simulation and the board game using two methods: a pre/post-test of moral sensitivity based on the Defining Issues Test, and a questionnaire evaluating student experience. We found that converting real-world ethical challenges into a playable simulation increased students’ reported interest in ethical issues in technology, and that students identified the role-playing activity as relevant to their technical coursework.
This demonstrates that role-playing can emphasize ethical decision-making as a relevant component of technical work.
  3. The Next Generation Science Standards [1] recognized evidence-based argumentation as one of the essential skills for students to develop throughout their science and engineering education. Argumentation focuses students on the need for quality evidence, which helps to develop their deep understanding of content [2]. Argumentation has been studied extensively in mathematics and science education, and to some extent in engineering education (see for example [3], [4], [5], [6]). After a thorough search of the literature, we found few studies that have considered how teachers support collective argumentation during engineering learning activities. The purpose of this program of research was to support teachers in viewing argumentation as an important way to promote critical thinking and to provide teachers with tools to implement argumentation in lessons integrating coding into science, technology, engineering, and mathematics (which we refer to as integrative STEM). We applied a framework developed for secondary mathematics [7] to understand how teachers support collective argumentation in integrative STEM lessons. This framework used Toulmin’s [8] conceptualization of argumentation, which includes three core components of arguments: a claim (or hypothesis) that is based on data (or evidence), accompanied by a warrant (or reasoning) that relates the data to the claim [8], [9]. To adapt the framework, video data were coded using previously established methods for analyzing argumentation [7]. In this paper, we consider how the framework can be applied to an elementary school teacher’s classroom interactions and present examples of how the teacher implements various questioning strategies to facilitate more productive argumentation and deeper student engagement. We aim to understand the nature of the teacher’s support for argumentation—contributions and actions from the teacher that prompt or respond to parts of arguments.
In particular, we look at examples of how the teacher supports students to move beyond unstructured tinkering (e.g., trial and error) to think logically about coding and to develop reasoning for the choices they make in programming. We also look at the components of arguments that students provide, with and without teacher support. Through the use of the framework, we are able to articulate important aspects of collective argumentation that would otherwise remain in the background. The framework gives us both eyes to see and language to describe how teachers support collective argumentation in integrative STEM classrooms.
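The claim/data/warrant triad from Toulmin’s conceptualization described above can be modeled as a small record type; a minimal sketch with invented field values (not drawn from the cited framework’s coding scheme):

```python
# Hypothetical sketch: Toulmin's three core argument components as a record.
# Field names mirror the abstract's terms; the example values are invented.
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str    # the hypothesis being advanced
    data: str     # the evidence the claim rests on
    warrant: str  # the reasoning that relates the data to the claim

# An invented coding example in the spirit of a student's programming argument:
arg = Argument(
    claim="The robot stops short of the target",
    data="Each test run ends two grid cells early",
    warrant="The loop repeats 8 times, but 10 moves are needed",
)
print(f"Claim: {arg.claim}\nData: {arg.data}\nWarrant: {arg.warrant}")
```

Structuring coded classroom utterances this way makes it explicit which component of an argument a teacher’s question is prompting or responding to.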
  4. To meet the rising demand for computer science (CS) courses, K-12 educators need to be prepared to teach introductory concepts and skills in courses such as Computer Science Principles (CSP), which takes a breadth-first approach to CS and includes topics beyond programming such as data, the impacts of computing, and networks. Educators are now also being asked to teach more advanced concepts in courses such as the College Board’s Advanced Placement Computer Science A (CSA) course, which focuses on advanced programming using Java and includes topics such as objects, inheritance, arrays, and recursion. Traditional CSA curricula have not used content or pedagogy designed to engage a broad range of learners and support their success. Unlike CSP, which is attracting more underrepresented students to computing as it was designed to do, CSA continues to enroll mostly male, white, and Asian students [College Board 2019, Ericson 2020, Sax 2020]. In order to expand CS education opportunities, it is crucial that students have an engaging experience in CSA similar to that in CSP. Well-designed, differentiated professional development (PD) that focuses on content and pedagogy is necessary to meet individual teacher needs, to successfully build teacher skills and confidence to teach CSA, and to improve engagement with students [Darling-Hammond 2017]. It is critical that as more CS opportunities and courses are developed, teachers remain engaged with their own learning in order to build their content knowledge and refine their teaching practice [CSTA 2020]. CSAwesome, developed and piloted in 2019, offers a College Board-endorsed AP CSA curriculum and PD focused on supporting the transition of teachers and students from CSP to CSA.
This poster presents preliminary findings aimed at exploring the supports and challenges new-to-CSA high school educators face when transitioning from teaching an introductory, breadth-first course such as CSP to teaching the more challenging, programming-focused CSA course. Five teachers who completed the online CSAwesome summer 2020 PD completed interviews in spring 2021. The project employed an inductive coding scheme to analyze interview transcriptions and qualitative notes from teachers about their experiences learning, teaching, and implementing the CSP and CSA curricula. Initial findings suggest that teachers’ experience in the CSAwesome PD may improve their confidence in teaching CSA, their ability to effectively use inclusive teaching practices, their ability to empathize with their students, their problem-solving skills, and their motivation to persist when faced with challenges and difficulties. Teachers noted how the CSAwesome PD provided them with a student perspective and increased feelings of empathy. Participants spoke about the implications of the COVID-19 pandemic for their own learning, student learning, and teaching style. Teachers enter the PD with many different backgrounds, CS experience levels, and strengths; however, new-to-CSA teachers require further PD on content and pedagogy to transition between CSP and CSA. Initial results suggest that the CSAwesome PD may have an impact on long-term teacher development, as new-to-CSA teachers who participated indicated a positive impact on their teaching practices, ideologies, and pedagogies.
  5. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer† * These authors contributed equally to this work † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia. Introduction This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data.
Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating. The challenge: enabling wide participation With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (globally), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need to have in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist). 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently). 4. No participant should need to provide their own computing resources. In addition to the above, our platform should further • guide participants through the entire process from sign-up to model submission, • facilitate collaboration, and • provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
The platform The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources.
The starter kit also enabled submission of the final code to a data storage location to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participant's code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility Functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform Measuring success The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performances that can reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication.
Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being Software Engineers, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience. Conclusion Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.