Background: Team leadership during medical emergencies such as cardiac arrest resuscitation is cognitively demanding, especially for trainees, yet these cognitive processes remain poorly characterized due to measurement challenges. Using virtual reality simulation, this study aimed to elucidate and compare communication and cognitive processes (decision-making, cognitive load, perceived pitfalls, and strategies) between expert and novice code team leaders, in order to inform strategies for accelerating proficiency development.

Methods: A simulation-based mixed-methods approach was used within a single large academic medical center, involving twelve standardized virtual reality cardiac arrest simulations. These 10- to 15-minute simulation sessions were performed by seven experts and five novices. Following the simulations, a cognitive task analysis was conducted using a cued-recall protocol to identify the challenges, decision-making processes, and cognitive load experienced across the seven stages of each simulation.

Results: The analysis revealed 250 unique cognitive processes. In terms of reasoning patterns, experts used inductive reasoning, while novices tended to use deductive reasoning, considering treatments before assessments. Experts also considered potential reversible causes of cardiac arrest earlier. Regarding team communication, experts reported more critical communications, with no shared subthemes between groups. Experts identified more teamwork pitfalls and suggested more strategies than novices. For cognitive load, experts reported a lower median cognitive load (53) than novices (80) across all stages, with the exception of the initial presentation phase.

Conclusions: The identified patterns of expert performance (superior teamwork skills, inductive clinical reasoning, and distributed cognitive strategies) can inform training programs aimed at accelerating expertise development.
                    
                            
                            Examples of Unsuccessful Use of Code Comprehension Strategies: A Resource for Developing Code Comprehension Pedagogy
                        
                    
    
Background: Code comprehension research has identified gaps between the strategies experts and novices use in comprehending code. In computer science (CS) education, code comprehension has recently received increased attention, and research has identified correlations between code comprehension and code writing. While there is a long history of identifying expert code-comprehension strategies, there has been less work to understand and support the incremental development of code-comprehension expertise.

Purpose: The goal of this paper is to identify potential code-comprehension strategies that educators could teach students.

Methods: In this paper, I analyze and present examples from a novice programmer engaged in a code-comprehension task.

Findings: I identify five code-comprehension strategies that overlap with previously identified expert code-comprehension strategies. While an expert would use these strategies to produce correct inferences about code, I primarily examine a novice's unsuccessful attempts to comprehend code using these strategies.

Implications: My case study provides an existence proof that these five strategies can be used by a novice, which is essential for identifying potential strategies to teach novices. My primary empirical contribution is identifying potential building blocks for developing code-comprehension expertise. My primary theoretical contribution is proposing to build code-comprehension pedagogy on specific expert strategies that I show are usable by a novice. More broadly, I hope to encourage CS education researchers to focus on understanding the complex processes of learning that occur in between the end points of novice and expert.
        
    
- Award ID(s): 2144249
- PAR ID: 10428889
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9781450399760
- Format(s): Medium: X
- Location: Chicago, IL, USA
- Sponsoring Org: National Science Foundation
More Like this
- 
Background: Previous work has shown that students can understand more complicated pieces of code with common software development tools (code execution, debuggers) than they can without them. Objectives: Given that tools can enable novice programmers to understand more complex code, we believe that students should be explicitly taught to do so, to facilitate their plan acquisition and development as independent programmers. To that end, this paper seeks to understand: (1) the relative utility of these tools, (2) the thought process students use to choose a tool, and (3) the degree to which students can choose an appropriate tool to understand a given piece of code. Method: We used a mixed-methods approach. To explore the relative effectiveness of the tools, we ran a randomized controlled trial (N = 421) observing student performance with each tool across a range of code snippets. To explore tool selection, we conducted a series of think-aloud interviews (N = 18) in which students were presented with a range of code snippets to understand and could choose which tool to use. Findings: Overall, novices were more often successful at comprehending code when given access to code execution, perhaps because it made testing a large set of inputs easier than the debugger did. As code complexity increased (as indicated by cyclomatic complexity), students became more successful with the debugger. We found that novices preferred code execution for simpler or familiar code, to quickly verify their understanding, and used the debugger for more complex or unfamiliar code or when they were confused about a small subset of the code. High-performing novices were adept at switching between tools, alternating between a detail-oriented and a broader perspective of the code when necessary. Novices who were unsuccessful tended to be overconfident in their incorrect understanding or unwilling to double-check their answers using a debugger. Implications: We can likely teach novices to independently understand unfamiliar code by utilizing code execution and debuggers. Instructors should teach students to recognize when code is complex (e.g., many nested loops) and to carefully step through such loops using debuggers. We should additionally teach students to double-check their understanding of the code and to self-assess whether the code is familiar. Students can also be encouraged to strategically switch between execution and debuggers to manage cognitive load, thus maximizing their problem-solving capabilities.
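The abstract does not reproduce the snippets or tool setups from the study, but the kind of task it describes can be sketched. Below is a hypothetical nested-loop function (the sort of code the study found students handled better with a debugger) together with a minimal "code execution" probe, where a reader runs the code on several inputs to check a hypothesis about its behavior; the function name and inputs are illustrative only.

```python
# Hypothetical comprehension task: a nested-loop snippet of the kind the
# study associates with higher cyclomatic complexity.
def mystery(grid):
    best = None
    for row in grid:               # outer loop over rows
        total = 0
        for value in row:          # inner loop accumulates a row sum
            if value > 0:          # only positive entries count
                total += value
        if best is None or total > best:
            best = total
    return best

# "Code execution" strategy: run the function on small inputs to verify a
# hypothesis (here: that it returns the maximum positive row sum).
print(mystery([[1, -2, 3], [4, 5, -6]]))  # expect 9
print(mystery([[-1], [-2]]))              # expect 0
```

A debugger-based strategy would instead step through the inner loop line by line, watching total and best change, which the study suggests pays off as the code grows more complex.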
- 
Program comprehension is a vital skill in software development. This work investigates program comprehension by examining the eye movements of novice programmers as they gain programming experience over the duration of a Java course, comparing their eye-movement behavior to that of expert programmers. Eye-movement studies of natural text show that word frequency and length influence fixation duration and act as indicators of reading skill. The study uses an existing longitudinal eye-tracking dataset with 20 novice and experienced readers of source code, and investigates the acquisition of token-frequency and token-length effects in source-code reading as an indication of program-reading skill. The results show evidence of the frequency and length effects in reading source code and of the acquisition of these effects by novices. These results are then leveraged in a machine learning model demonstrating how eye movements can be used to estimate programming proficiency and to classify readers as novices or experts with 72% accuracy.
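The abstract does not specify the model or features behind the 72% figure, so the following is only a minimal sketch of the general approach: fitting a simple classifier to hypothetical per-reader eye-movement features such as mean fixation duration and the strength of the frequency and length effects.

```python
# Sketch: classifying novice vs. expert readers from eye-movement features.
# All feature values below are fabricated for illustration; the abstract
# reports only that such a model reached 72% accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Per-reader features (hypothetical):
# [mean fixation duration (ms), frequency-effect slope, length-effect slope]
X = np.array([
    [260.0, -0.45, 0.60],   # novice-like: long fixations, weak effects
    [255.0, -0.50, 0.55],
    [240.0, -0.60, 0.70],
    [200.0, -1.20, 1.30],   # expert-like: short fixations, strong effects
    [195.0, -1.10, 1.25],
    [205.0, -1.30, 1.40],
])
y = np.array([0, 0, 0, 1, 1, 1])   # 0 = novice, 1 = expert

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=3))  # per-fold accuracy
```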
- 
Since intermediate CS students can use a variety of control structures, why do their choices often not match experts'? Students may not realize which choices experts prefer, may find non-expert choices easier to read, or may simply forget to write with expert structure. To disentangle these explanations, we surveyed 328 2nd- and 3rd-semester undergraduates, with tasks including writing short functions, selecting which structure was most readable or best styled, and answering comprehension questions. Questions focused on seven control-structure topics that were important to instructors (e.g., factoring out repeated code between an if-block and its else). Students frequently wrote with non-expert structure, and, for five topics, at least a third of students (48%-71%) thought a non-expert structure was more readable than the expert one. However, students often made one choice when writing code but preferred a different choice when reading it. Additionally, for more complex topics, students often failed to notice (or understand) differences in execution caused by changes in structure. Together, these results suggest that instruction and practice for choosing control structures should be context-specific, and that assessment focused only on code writing may miss underlying misunderstandings.
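The survey items themselves are not given in the abstract, but the one topic it names (factoring out repeated code between an if-block and its else) can be illustrated with a hypothetical pair of functions showing the non-expert and expert structures.

```python
# Non-expert structure: the print call is duplicated in both branches.
def report(score):
    if score >= 90:
        grade = "A"
        print("Grade:", grade)
    else:
        grade = "below A"
        print("Grade:", grade)

# Expert structure: the repeated call is factored out after the if/else.
def report_factored(score):
    if score >= 90:
        grade = "A"
    else:
        grade = "below A"
    print("Grade:", grade)
```

Both versions behave identically; the study's question is which one students write, and which they judge more readable.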
- 
The rising frequency of natural disasters demands efficient and accurate structural damage assessments to ensure public safety and expedite recovery. Human error, inconsistent standards, and safety risks limit traditional visual inspections by engineers. Although UAVs and AI have advanced post-disaster assessments, they still lack the expert knowledge and decision-making judgment of human inspectors. This study explores how expertise shapes human–building interaction during disaster inspections by using eye tracking technology to capture the gaze patterns of expert and novice inspectors. A controlled, screen-based inspection method was employed to safely gather data, which was then used to train a machine learning model for saliency map prediction. The results highlight significant differences in visual attention between experts and novices, providing valuable insights for future inspection strategies and training novice inspectors. By integrating human expertise with automated systems, this research aims to improve the accuracy and reliability of post-disaster structural assessments, fostering more effective human–machine collaboration in disaster response efforts.
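The abstract does not describe the saliency model itself, but a common first step in gaze-based saliency prediction is turning recorded fixations into a ground-truth saliency map by smoothing fixation counts with a Gaussian kernel. The sketch below assumes scipy is available; the image size, fixation coordinates, and blur width are all hypothetical.

```python
# Sketch: building a ground-truth saliency map from gaze fixations.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 480, 640                                   # image size in pixels
fixations = [(120, 300), (130, 310), (400, 500)]  # (row, col) gaze points

heatmap = np.zeros((H, W))
for r, c in fixations:
    heatmap[r, c] += 1.0                          # accumulate fixation counts

saliency = gaussian_filter(heatmap, sigma=25)     # smooth into a heatmap
saliency /= saliency.max()                        # normalize to [0, 1]
```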