
Title: A Software Debugger for E-textiles and Arduino Microcontrollers
Today’s STEM classrooms have expanded the domain of computer science education from a basic two-toned terminal screen to now include helpful Integrated Development Environments (IDEs) (BlueJ, Eclipse), block-based programming (MIT Scratch, Greenfoot), and even physical computing with embedded systems (Arduino, LEGO Mindstorms). But no matter which environment a student starts programming in, all students will eventually need help finding and fixing bugs in their code. While the full-featured IDEs have debugging tools built in (breakpoints for pausing the program, ways to view and modify variable values, and "stepping" through code execution), in many other programming environments students are limited to using print statements to try to "see" what is happening inside their program. Most students who learn to write code for Arduino microcontrollers start within the Arduino IDE, but the official Arduino IDE does not currently provide any debugging tools. Instead, a student would have to move on to a professional IDE such as Atmel Studio or acquire a hardware debugger in order to add breakpoints or view their program’s variables. Each of these options has a steep learning curve, adds cost, and can require complex configuration. Based on research on student debugging practices [3, 7] and our own classroom observations, we have developed an Arduino software library, called Arduino Debugger, which provides some of these debugging tools (e.g., breakpoints) while staying within the official Arduino IDE. This work continues a previous library, (redacted), which focused on features specific to e-textile development boards. The Arduino Debugger library has been modified to support not only e-textile boards (LilyPad, Adafruit Circuit Playground) but most AVR- and ARM-based Arduino boards. We are also in the process of testing a set of Debugging Code Templates to see how they might increase student adoption of debugging tools.
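To make the contrast concrete, the sketch below illustrates the kind of breakpoint-and-watch workflow such a library can offer from inside the stock Arduino IDE. It is only a minimal sketch under assumed names: debugBreak(), the pin number, and the serial-monitor pause it implements are hypothetical illustrations, not the actual API of the Arduino Debugger library.

    #include <Arduino.h>

    const int LED_PIN = 9;        // assumed PWM-capable pin; adjust for your board
    int brightness = 0;

    // Hypothetical breakpoint helper: report a label and a watched variable over
    // the serial monitor, then pause until the user sends any character.
    void debugBreak(const char* label) {
      Serial.print(F("[break] "));
      Serial.println(label);
      Serial.print(F("  brightness = "));
      Serial.println(brightness);                       // "watch" a variable
      while (Serial.available() == 0) { }               // pause execution here
      while (Serial.available() > 0) { Serial.read(); } // consume input, resume
    }

    void setup() {
      Serial.begin(9600);
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      brightness = (brightness + 25) % 256;             // step the LED brightness
      analogWrite(LED_PIN, brightness);
      debugBreak("end of loop()");                      // inspect state each pass
      delay(100);
    }

A student using the real library would replace hand-written helpers like this with the library's own breakpoint calls; the point is that no hardware debugger or professional IDE is needed for this level of inspection.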
Authors:
; ; ; ;
Award ID(s):
1742081
Publication Date:
NSF-PAR ID:
10163651
Journal Name:
FabLearn 2020 - 9th Annual Conference on Maker Education (FabLearn ’20)
Sponsoring Org:
National Science Foundation
More Like this
  1. When learning to code, a student must learn both how to create a program and how to debug it. Novices often start with print statements to help trace code execution and isolate logical errors. Eventually, they adopt more advanced debugging practices such as breakpoints, "stepping" through code execution, and "watching" variables as their values are updated. Unfortunately for students working with Arduino devices, there are no debugging tools built into the Arduino IDE. Instead, a student would have to move on to a professional IDE such as Atmel Studio and/or acquire a hardware debugger. However, these options have a steep learning curve and are not intended for a student who has just started to learn how to write code. I am developing an Arduino software library, called Pin Status, to assist novice programmers with debugging common logic errors and provide features specific to the e-textile microcontroller, the Adafruit Circuit Playground Classic.
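     The abstract does not show Pin Status's interface, so the fragment below only sketches the baseline it aims to improve on: hand-written print statements that report a pin's state on each pass through loop(). The pin number and labels are assumptions for illustration, not part of the library.

        #include <Arduino.h>

        const int PAD_PIN = 10;   // assumed input pad; actual pad numbers vary by board

        void setup() {
          Serial.begin(9600);
          pinMode(PAD_PIN, INPUT_PULLUP);
        }

        void loop() {
          // Manual "pin status" tracing: print the state every half second so the
          // student can see whether the wiring and logic behave as expected.
          int state = digitalRead(PAD_PIN);
          Serial.print(F("pin 10 = "));
          Serial.println(state == LOW ? F("pressed") : F("released"));
          delay(500);
        }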
  2. Comprehending programs is key to learning programming. Previous studies highlight novices’ naive approaches to comprehending the structural, functional, and behavioral aspects of programs. And yet, with the majority of them examining on-screen programming environments, we know little about program comprehension within physical computing, a common K-12 programming context. In this study, we qualitatively analyzed think-aloud interview videos of 22 high school students individually comprehending a given text-based Arduino program while interacting with its corresponding functional physical artifact, to answer two questions: 1) How do novices comprehend the given text-based Arduino program? And 2) What role does the physical artifact play in program comprehension? We found that novices mostly approached the program bottom-up, initially comprehending structural and later functional aspects, along different granularities. The artifact provided two distinct modes of engagement, active and interactive, that supported the program’s structural and functional comprehension. However, behavioral comprehension, i.e., understanding how program execution leads to the observed outcome, was inaccessible to many. Our findings extend the program comprehension literature in two ways: (a) they provide one of the very few accounts of high school students’ code comprehension in a physical computing context, and (b) they highlight the mediating role of physical artifacts in program comprehension. Further, they point to directions for future pedagogical and tool designs within physical computing to better support students’ distributed program comprehension.
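     The study's actual stimulus program is not reproduced in the abstract; as a hypothetical example of the genre, a short Arduino program like the one below separates the three aspects the study examines: its structure (declarations, setup(), loop()), its function (each block's purpose), and its behavior (what the physical artifact does when the code runs).

        #include <Arduino.h>

        const int BUTTON_PIN = 2;   // structural: named constants and pin declarations
        const int LED_PIN = 13;

        void setup() {                       // functional: configure the hardware once
          pinMode(BUTTON_PIN, INPUT_PULLUP);
          pinMode(LED_PIN, OUTPUT);
        }

        void loop() {                        // behavioral: the LED lights while the button is held
          if (digitalRead(BUTTON_PIN) == LOW) {
            digitalWrite(LED_PIN, HIGH);
          } else {
            digitalWrite(LED_PIN, LOW);
          }
        }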
  3. Novice programmers often struggle with code understanding and debugging. Live Programming environments visualize the runtime values of a program each time it is modified to provide immediate feedback, which helps with tracing the program execution. This paper presents the use of a Live Programming tool in a CS1 course to better understand the impact of Live Programming on novices’ learning metrics and their perceptions of the tool. We conducted a within-subjects study at a large public university in a CS1 course in Python (N=237) where students completed tasks in a lab setting, in some cases with a Live Programming environment and in some cases without. Through post-lab surveys and open-ended feedback, we measured how well students understood the material and how students perceived the programming environment. To understand the impact of Live Programming, we compared the collected data for students who used Live Programming with the data for students who did not. We found that while learning outcomes were the same regardless of whether Live Programming was used or not, students who used the Live Programming tool completed some code tracing tasks faster. Furthermore, students liked the Live Programming environment more and rated it as more helpful for their learning.
  4. In September 2019, the fourth and final workshop on the Future of Mechatronics and Robotics Education (FoMRE) was held at Lawrence Technological University in Southfield, MI. This workshop was organized by faculty at several universities with financial support from industry partners and the National Science Foundation. The purpose of the workshops was to create a cohesive effort among mechatronics and robotics courses, minors, and degree programs. Mechatronics and Robotics Engineering (MRE) is an integration of mechanics, controls, electronics, and software, which provides a unique opportunity for engineering students to function on multidisciplinary teams. Due to its multidisciplinary nature, it attracts diverse and innovative students and graduates better-prepared professional engineers. In this fast-growing field, there is a great need to standardize educational material and make MRE education more widely available and easier to adopt. This can only be accomplished if the community comes together to speak with one clear voice about not only the benefits, but also the best ways to teach it. These efforts would also aid in establishing more of these degree programs and integrating minors or majors into existing computer science, mechanical engineering, or electrical engineering departments. The final workshop was attended by approximately 50 practitioners from industry and academia. Participants identified many practical skills required for students to succeed in an MRE curriculum and as practicing engineers after graduation. These skills were then organized into the following categories: professional, independent learning, controller design, numerical simulation and analysis, electronics, software development, and system design. For example, professional skills include technical reports, presentations, and documentation. Independent learning includes reading data sheets, performing internet searches, doing a literature review, and having a maker mindset. Numerical simulation skills include understanding data, presenting data graphically, and solving and simulating in software such as MATLAB, Simulink, and Excel. Controller design involves selecting a controller, tuning a controller, designing to meet specifications, and understanding when the results are good enough. Electronics skills include selecting sensors, interfacing sensors, interfacing actuators, creating printed circuit boards, wiring on a breadboard, soldering, installing drivers, using integrated circuits, and using microcontrollers. Software development of embedded systems includes agile program design, state machines, analyzing and evaluating code results, commenting code, troubleshooting, debugging, and AI and machine learning. Finally, system design includes prototyping, creating CAD models, design for manufacturing, breaking a system down into subsystems, integrating and interfacing subcomponents, having a multidisciplinary perspective, robustness, evaluating tradeoffs, testing, validation and verification, and failure mode and effects analysis. A survey was prepared and sent out to the participants from all four workshops as well as other robotics faculty, researchers, and industry personnel in order to elicit a broader community response. Because one of the biggest challenges in mechatronics and robotics education is the absence of standardized curricula, textbooks, platforms, syllabi, assignments, and learning outcomes, this was a vital part of the process to achieve some level of consensus.
This paper presents an introduction to MRE education, related work on existing programs, the methods and results of the practical skills survey, and conclusions drawn from these results. It aims to create the foundation for standardizing the development of student skills in mechatronics and robotics curricula across institutions, disciplines, majors, and minors. The survey was completed by 94 participants, and it was clear that there is a consensus that the primary skills students should have upon completion of MRE courses or a program are a broad multidisciplinary, systems-level perspective, an ability to solve problems, and an ability to design a system to meet specifications.
  5. Obeid, Iyad; Selesnick, Ivan (Eds.)
    Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than 1 false alarm (FA) per 24 hours [3]. Some commercial tools recently claim to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must be causal and have minimal latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1, or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG channel separately. We use the hypotheses generated by the P1 model and create additional features that carry information about the detected events and their confidence. The P2 model uses these additional features and the LFCC features to learn the temporal and spatial aspects of the EEG signals using a hybrid convolutional neural network (CNN) and LSTM model. Finally, Phase 3 aggregates the results from both P1 and P2 before applying a final postprocessing step. The online system implements Phase 1 by taking advantage of the Linux piping mechanism, multithreading techniques, and multi-core processors. To convert Phase 1 into an online system, we divide the system into five major modules: signal preprocessor, feature extractor, event decoder, postprocessor, and visualizer. The system reads 0.1-second frames from each EEG channel and sends them to the feature extractor and the visualizer. The feature extractor generates LFCC features in real time from the streaming EEG signal. Next, the system computes seizure and background probabilities using a channel-based LSTM model and applies a postprocessor to aggregate the detected events across channels. The system then displays the EEG signal and the decisions simultaneously using a visualization module. The online system uses C++, Python, TensorFlow, and PyQtGraph in its implementation. The online system accepts streamed EEG data sampled at 250 Hz as input. The system begins processing the EEG signal by applying a TCP montage [8]. Depending on the type of the montage, the EEG signal can have either 22 or 20 channels. To enable online operation, we send 0.1-second (25-sample) frames from each channel of the streamed EEG signal to the feature extractor and the visualizer. Feature extraction is performed sequentially on each channel. The signal preprocessor writes the sample frames into two streams to serve these modules. In the first stream, the feature extractor receives the signals using stdin. In parallel, as a second stream, the visualizer shares a user-defined file with the signal preprocessor. This user-defined file holds the raw signal information as a buffer for the visualizer.
The signal preprocessor writes into the file while the visualizer reads from it. Reading and writing into the same file poses a challenge. The visualizer can start reading while the signal preprocessor is writing into it. To resolve this issue, we utilize a file locking mechanism in the signal preprocessor and visualizer. Each of the processes temporarily locks the file, performs its operation, releases the lock, and tries to obtain the lock after a waiting period. The file locking mechanism ensures that only one process can access the file by prohibiting other processes from reading or writing while one process is modifying the file [9]. The feature extractor uses circular buffers to save 0.3 seconds or 75 samples from each channel for extracting 0.2-second or 50-sample long center-aligned windows. The module generates 8 absolute LFCC features where the zeroth cepstral coefficient is replaced by a temporal domain energy term. For extracting the rest of the features, three pipelines are used. The differential energy feature is calculated in a 0.9-second absolute feature window with a frame size of 0.1 seconds. The difference between the maximum and minimum temporal energy terms is calculated in this range. Then, the first derivative or the delta features are calculated using another 0.9-second window. Finally, the second derivative or delta-delta features are calculated using a 0.3-second window [6]. The differential energy for the delta-delta features is not included. In total, we extract 26 features from the raw sample windows which add 1.1 seconds of delay to the system. We used the Temple University Hospital Seizure Database (TUSZ) v1.2.1 for developing the online system [10]. The statistics for this dataset are shown in Table 1. A channel-based LSTM model was trained using the features derived from the train set using the online feature extractor module. A window-based normalization technique was applied to those features. In the offline model, we scale features by normalizing using the maximum absolute value of a channel [11] before applying a sliding window approach. Since the online system has access to a limited amount of data, we normalize based on the observed window. The model uses the feature vectors with a frame size of 1 second and a window size of 7 seconds. We evaluated the model using the offline P1 postprocessor to determine the efficacy of the delayed features and the window-based normalization technique. As shown by the results of experiments 1 and 4 in Table 2, these changes give us a comparable performance to the offline model. The online event decoder module utilizes this trained model for computing probabilities for the seizure and background classes. These posteriors are then postprocessed to remove spurious detections. The online postprocessor receives and saves 8 seconds of class posteriors in a buffer for further processing. It applies multiple heuristic filters (e.g., probability threshold) to make an overall decision by combining events across the channels. These filters evaluate the average confidence, the duration of a seizure, and the channels where the seizures were observed. The postprocessor delivers the label and confidence to the visualizer. The visualizer starts to display the signal as soon as it gets access to the signal file, as shown in Figure 1 using the “Signal File” and “Visualizer” blocks. Once the visualizer receives the label and confidence for the latest epoch from the postprocessor, it overlays the decision and color codes that epoch. 
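As a rough illustration of the locking pattern described above (lock, operate, release, retry after a waiting period), the C++ fragment below uses a POSIX advisory flock(); the actual lock type, wait interval, and file format used by the system are not specified in the abstract and are assumptions here.

    #include <sys/file.h>   // flock()
    #include <fcntl.h>      // open()
    #include <unistd.h>     // write(), close(), usleep()

    // Lock the shared signal file, run one operation on it, then release the lock.
    // Returns false if the file could not be opened or is locked by the other process.
    bool withLockedFile(const char* path, bool (*operation)(int fd)) {
      int fd = open(path, O_RDWR | O_CREAT, 0644);
      if (fd < 0) return false;
      if (flock(fd, LOCK_EX | LOCK_NB) != 0) {   // non-blocking exclusive lock
        close(fd);
        return false;                            // the peer currently owns the file
      }
      bool ok = operation(fd);                   // e.g., append or read sample frames
      flock(fd, LOCK_UN);                        // release so the other process can proceed
      close(fd);
      return ok;
    }

    int main() {
      // The preprocessor side: keep retrying after a short wait, as described above.
      auto writeFrame = [](int fd) {
        const char frame[] = "<0.1 s of raw samples>";   // placeholder payload
        return write(fd, frame, sizeof(frame)) >= 0;
      };
      while (!withLockedFile("signal_buffer.dat", writeFrame)) {
        usleep(10000);   // assumed 10 ms waiting period before trying the lock again
      }
      return 0;
    }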
The visualizer uses red for seizure with the label SEIZ and green for the background class with the label BCKG. Once the streaming finishes, the system saves three files: a signal file in which the sample frames are saved in the order they were streamed, a time segmented event (TSE) file with the overall decisions and confidences, and a hypothesis (HYP) file that saves the label and confidence for each epoch. The user can plot the signal and decisions using the signal and HYP files with only the visualizer by enabling appropriate options. For comparing the performance of different stages of development, we used the test set of the TUSZ v1.2.1 database. It contains 1015 EEG records of varying duration. The any-overlap performance [12] of the overall system shown in Figure 2 is 40.29% sensitivity with 5.77 FAs per 24 hours. For comparison, the previous state-of-the-art model developed on this database performed at 30.71% sensitivity with 6.77 FAs per 24 hours [3]. The individual performances of the deep learning phases are as follows: Phase 1’s (P1) performance is 39.46% sensitivity and 11.62 FAs per 24 hours, and Phase 2 detects seizures with 41.16% sensitivity and 11.69 FAs per 24 hours. We trained an LSTM model with the delayed features and the window-based normalization technique for developing the online system. Using the offline decoder and postprocessor, the model performed at 36.23% sensitivity with 9.52 FAs per 24 hours. The trained model was then evaluated with the online modules. The current performance of the overall online system is 45.80% sensitivity with 28.14 FAs per 24 hours. Table 2 summarizes the performances of these systems. The performance of the online system deviates from the offline P1 model because the online postprocessor fails to combine the events as the seizure probability fluctuates during an event. The modules in the online system add a total of 11.1 seconds of delay for processing each second of the data, as shown in Figure 3. In practice, we also count the time for loading the model and starting the visualizer block. When we consider these facts, the system consumes 15 seconds to display the first hypothesis. The system detects seizure onsets with an average latency of 15 seconds. Implementing an automatic seizure detection model in real time is not trivial. We used a variety of techniques such as the file locking mechanism, multithreading, circular buffers, real-time event decoding, and signal-decision plotting to realize the system. A video demonstrating the system is available at: https://www.isip.piconepress.com/projects/nsf_pfi_tt/resources/videos/realtime_eeg_analysis/v2.5.1/video_2.5.1.mp4. The final conference submission will include a more detailed analysis of the online performance of each module.
ACKNOWLEDGMENTS: Research reported in this publication was most recently supported by the National Science Foundation Partnership for Innovation award number IIP-1827565 and the Pennsylvania Commonwealth Universal Research Enhancement Program (PA CURE). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.
REFERENCES
[1] A. Craik, Y. He, and J. L. Contreras-Vidal, “Deep learning for electroencephalogram (EEG) classification tasks: a review,” J. Neural Eng., vol. 16, no. 3, p. 031001, 2019. https://doi.org/10.1088/1741-2552/ab0ab5.
[2] A. C. Bridi, T. Q. Louro, and R. C. L. Da Silva, “Clinical Alarms in intensive care: implications of alarm fatigue for the safety of patients,” Rev. Lat. Am. Enfermagem, vol. 22, no. 6, p. 1034, 2014. https://doi.org/10.1590/0104-1169.3488.2513.
[3] M. Golmohammadi, V. Shah, I. Obeid, and J. Picone, “Deep Learning Approaches for Automatic Seizure Detection from Scalp Electroencephalograms,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York, New York, USA: Springer, 2020, pp. 233–274. https://doi.org/10.1007/978-3-030-36844-9_8.
[4] “CFM Olympic Brainz Monitor.” [Online]. Available: https://newborncare.natus.com/products-services/newborn-care-products/newborn-brain-injury/cfm-olympic-brainz-monitor. [Accessed: 17-Jul-2020].
[5] M. L. Scheuer, S. B. Wilson, A. Antony, G. Ghearing, A. Urban, and A. I. Bagic, “Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset,” J. Clin. Neurophysiol., 2020. https://doi.org/10.1097/WNP.0000000000000709.
[6] A. Harati, M. Golmohammadi, S. Lopez, I. Obeid, and J. Picone, “Improved EEG Event Classification Using Differential Energy,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium, 2015, pp. 1–4. https://doi.org/10.1109/SPMB.2015.7405421.
[7] V. Shah, C. Campbell, I. Obeid, and J. Picone, “Improved Spatio-Temporal Modeling in Automated Seizure Detection using Channel-Dependent Posteriors,” Neurocomputing, 2021.
[8] W. Tatum, A. Husain, S. Benbadis, and P. Kaplan, Handbook of EEG Interpretation. New York City, New York, USA: Demos Medical Publishing, 2007.
[9] D. P. Bovet and M. Cesati, Understanding the Linux Kernel, 3rd ed. O’Reilly Media, Inc., 2005. https://www.oreilly.com/library/view/understanding-the-linux/0596005652/.
[10] V. Shah et al., “The Temple University Hospital Seizure Detection Corpus,” Front. Neuroinform., vol. 12, pp. 1–6, 2018. https://doi.org/10.3389/fninf.2018.00083.
[11] F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011. https://dl.acm.org/doi/10.5555/1953048.2078195.
[12] J. Gotman, D. Flanagan, J. Zhang, and B. Rosenblatt, “Automatic seizure detection in the newborn: Methods and initial evaluation,” Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 356–362, 1997. https://doi.org/10.1016/S0013-4694(97)00003-9.