<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>A Visual Unplugged Activity to Introduce PDC</title></titleStmt>
			<publicationStmt>
				<publisher>IEEE</publisher>
				<date>06/03/2025</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10630248</idno>
					<idno type="doi">10.1109/IPDPSW66978.2025.00102</idno>
					
					<author>Mary Smith</author><author>Srishti Srivastava</author><author>David P Bunde</author><author>April Crockett</author><author>Michael Gerten</author><author>Peter Maher</author><author>Jaime Spacco</author><author>Xiaoyuan Suo</author><author>Jiayin Wang</author><author>Michelle Zhu</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[We introduce an unplugged activity designed for CS1 students to explore fundamental parallel computing concepts. The activity requires only gridded paper and basic coloring tools, such as pens, markers, crayons, or colored pencils. It was piloted in CS1 courses across six universities, where faculty successfully incorporated the activity into various CS1 curricula taught in different programming languages. Learning outcomes were assessed through surveys and examination of student work product. Student engagement was measured using a survey that evaluated participants’ perceptions of engagement (enjoyment, participation, and focus), understanding (comprehension of the material and computing concepts), and instructor effectiveness (preparedness, enthusiasm, and availability). Qualitative student feedback was favorable, and survey results suggest the activity effectively introduced parallel and distributed computing concepts.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>The importance of teaching parallel computing to undergraduate students has been well established in existing literature <ref type="bibr">[7]</ref>. The ACM/IEEE-CS 2013 curriculum report <ref type="bibr">[17]</ref> and the NSF/IEEE-TCPP curriculum initiative <ref type="bibr">[26]</ref> both underscore the significance of integrating parallel and distributed computing (PDC) into undergraduate Computer Science education. The curriculum report advocates for PDC as a core curricular component, while the TCPP initiative actively contributes by offering guidance on essential PDC topics and organizing training workshops for educators to effectively teach PDC concepts in introductory CS courses.</p><p>One approach used for introducing advanced concepts in early courses is the use of CS Unplugged activities, which teach CS concepts without using a computer, often by having the students play the role of the computational agent. In this paper, we present an unplugged version of an existing CS1 programming assignment, flag coloring. In the existing assignment <ref type="bibr">[9]</ref>, introductory students practice loops by drawing flags using a library that allows them to set pixel values.</p><p>This work was partially supported by the National Science Foundation through awards OAC-2321020, OAC-2321017, and OAC-2321015.</p><p>In our unplugged version, students play the role of the processor by coloring cells of a paper grid to produce the flag. They first do this individually to simulate sequential processing. Then they repeat the activity collaboratively, with multiple students working together to color a single flag to simulate parallel processing. 
This exercise effectively demonstrates several core principles of parallel computing, but does so in a very accessible manner since students are coloring rather than dealing with the complexities of actual parallel code.</p><p>The contributions of this paper are the following:</p><p>&#8226; a description of the novel unplugged flag coloring activity, complete with advice on running it, and &#8226; an evaluation of the activity based on its implementation in CS1 courses at six different institutions in the United States: Hawaii Pacific University (HPU), University of Southern Indiana (USI), Knox College (Knox), Tennessee Tech University (TNTech), Webster University (Webster), and Montclair State University (Montclair).</p></div>
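The flag coloring programming assignment referenced above can be sketched in a few lines. This is our own illustrative code, not the library used in <ref type="bibr">[9]</ref>: we represent the image as a 2D list of color names and fill each stripe with nested loops, the kind of loop practice the assignment provides.

```python
# Illustrative sketch only: a "pixel grid" modeled as a 2D list of color
# names, filled stripe by stripe with nested loops. The function name and
# grid representation are our own assumptions, not the assignment's library.
def color_flag(width, height, stripes):
    """Return a height x width grid colored in equal horizontal stripes."""
    rows_per_stripe = height // len(stripes)
    grid = [[None] * width for _ in range(height)]
    for r in range(height):
        # Determine which stripe row r falls in (clamped for uneven division).
        stripe = min(r // rows_per_stripe, len(stripes) - 1)
        for c in range(width):
            grid[r][c] = stripes[stripe]
    return grid

# Flag of Mauritius: four equal horizontal stripes.
mauritius = color_flag(8, 8, ["red", "blue", "yellow", "green"])
```

In the unplugged version, each cell of the paper grid plays the role of one entry of this grid, and each student plays the role of the processor executing the loops.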
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. RELATED WORK</head><p>Prior attempts have been made to educate inexperienced students about PDC. Mullen et al. <ref type="bibr">[24]</ref> offered a Massive Open Online Course (MOOC) focused on teaching high-performance computing (HPC) to professionals. While MOOCs provide accessibility, learning a complex field like PDC through online resources can be challenging. Universities have also explored incorporating PDC education into their curricula. Lin <ref type="bibr">[18]</ref> presented a study evaluating the effectiveness of teaching PDC as an elective computer science (CS) course, using surveys and tests to assess student learning. Building on this work, several studies <ref type="bibr">[19]</ref> <ref type="bibr">[8]</ref> <ref type="bibr">[37]</ref> <ref type="bibr">[30]</ref> documented the introduction of PDC into CS courses at the junior, senior, and graduate levels. Further, the ACM has recommended including parallel computing as a knowledge area in the undergraduate curriculum. Significant research has explored integrating PDC concepts across the entire CS undergraduate curriculum through a modular approach <ref type="bibr">[5]</ref> <ref type="bibr">[15]</ref> <ref type="bibr">[27]</ref> <ref type="bibr">[6]</ref> <ref type="bibr">[13]</ref> <ref type="bibr">[10]</ref>. Other research explores innovative approaches to PDC education, such as incorporating "parallel thinking" concepts into undergraduate curricula and developing and sharing effective teaching materials for PDC courses <ref type="bibr">[20]</ref> <ref type="bibr">[29]</ref> <ref type="bibr">[28]</ref> <ref type="bibr">[35]</ref>.</p><p>Several studies <ref type="bibr">[14]</ref> <ref type="bibr">[23]</ref> have observed that students encounter significant challenges in grasping both the theoretical concepts and the practical aspects of parallel programming. These challenges stem from the inherent complexity of PDC topics. 
These findings suggest that more effective pedagogical approaches beyond traditional lectures and programming-based instruction are necessary to enhance student learning in PDC.</p><p>Unplugged activities have proven effective in teaching core computer science concepts to younger learners <ref type="bibr">[1]</ref>. Recognizing this potential, researchers have explored the use of unplugged activities to introduce PDC concepts to larger student populations, including non-CS majors <ref type="bibr">[25]</ref>. Game-based learning has also emerged as a valuable pedagogical tool. Kitchen et al. <ref type="bibr">[16]</ref> and Bogaerts <ref type="bibr">[2]</ref> successfully employed game-based scenarios where students assume the roles of processors and computational cores, simulating parallel computing processes. Furthermore, Maxim <ref type="bibr">[22]</ref> demonstrated the effectiveness of an unplugged activity in a data structures course (CS2), where students actively participated as processes. These innovative approaches, utilizing unplugged activities and game-based learning, offer promising avenues for enhancing student engagement and understanding of complex PDC concepts. Unplugged parallel computing activities have also been cataloged before; Matthews compiled a repository of computing activities on this topic <ref type="bibr">[21]</ref>. Ghafoor et al. <ref type="bibr">[11]</ref> used an unplugged activity to introduce parallel computing concepts in CS1/2 courses, resulting in a significant improvement in student learning. Recently, Srivastava et al. <ref type="bibr">[33]</ref> and Smith and Srivastava <ref type="bibr">[32]</ref> successfully implemented active learning unplugged modules in their respective CS1 and CS2 courses. Their work demonstrated the effectiveness of these modules in improving student engagement and fostering a deeper understanding of PDC concepts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. ACTIVITY DESCRIPTION</head><p>We now describe the activity itself, starting with the core activity and then discussing variations that some of us used.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Core Activity</head><p>The core activity is based around four scenarios where the students color the flag of Mauritius, which has four equally-sized stripes colored red, blue, yellow, and green. Other flags can also be used, but we selected this one since it provides a natural subdivision of the task into equal-sized parts for two and four people.</p><p>The students are split into teams of size 5 (with extra students joining to create teams of size 6) or teams of size 2-3 that will merge for the later scenarios. The course staff distributes gridded paper and drawing implements (markers, crayons, and/or bingo daubers) to the students. Each team gets one drawing implement of each color (red, blue, yellow, and green). The instructor then introduces the activity: the students will be pretending to be a computer coloring the flag of Mauritius by filling in "pixels" of color to complete the image. Specifically, they are told that they will be taking the role of processors and introduced to the idea of the computer having multiple processing units (cores) that can each do work simultaneously.</p><p>Each scenario is explained to the entire class and the students are given time to organize their teams (mainly assigning the roles specific to the scenario). The instructor answers questions before starting all the teams coloring simultaneously. After each scenario, the instructor collects the completion time from each group, posting it publicly.</p><p>The scenarios are depicted in Figure <ref type="figure">1</ref>, each subfigure of which is shown to the students as part of the description of that scenario. In the first scenario, one student colors the entire flag while a second student times them using their cellphone. If desired, this scenario can be repeated a second time since the first run is likely slowed down by the students being unfamiliar with the task. 
(We discuss the merits of repeating the first scenario in Section III-C.) In the second scenario, two students color the flag, with one student coloring the red and blue stripes while the other colors the yellow and green ones. A third student times them. In the third scenario, four students color the flag, each of them doing one stripe, while a fifth times them. Finally, in the fourth scenario, four students again color the flag, but now each of them is responsible for a vertical slice of the flag which includes part of each stripe. Since each team only has one marker of each color, this requires handing off the markers.</p><p>After all the scenarios are complete, the instructor leads a discussion about what the class observed during the activity, encouraging them toward the lessons discussed in the next section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Prerequisites</head><p>This activity has very limited prerequisites. At Knox College, it was used shortly after introducing the students to the flag coloring programming activity. At this point in the term (week 3 of 9), the students were just learning about loops and had called methods, but had not even learned about conditionals. Even without specific discussion of parallel computing, the students have heard the terms "dual-core" and "quad-core", so telling them that this allows the computer to perform multiple operations simultaneously is not a big stretch. We avoided discussion of how this is managed (processes, threads, etc.) and let students observe the potential (and relevant issues) through the activity, but more vocabulary could be introduced if desired.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Lessons to Highlight</head><p>There are a number of lessons that students articulated in the discussion after the core activity. The instructor should solicit their observations, but then lead them to any of these ideas that the students miss.</p><p>Since the instructor collects the completion time of each group for each scenario and puts them on the board, students were quick to point out that the times decreased as more processors were added (at least for the first 3 scenarios). Trying to quantify this naturally leads into the concept of speedup and its calculation. The question of what the speedup "should" be leads into the introduction of linear speedup.</p><p>If the first scenario was repeated a second time, the students are also quick to observe that its completion times are significantly better than in the first trial. This is attributable mainly to their getting used to the task and tools during that first run. The instructor can then make an analogy to system warmup, which causes subsequent runs of a program to be faster than the first because of factors such as caching, the system exiting power-saving modes, and just-in-time compilation. Even if the first scenario was not repeated, the students can still be led toward this discussion by observing that the first scenario was particularly slow.</p><p>If different student groups are given different drawing instruments, they are bound to notice that some are better suited to the task. In our experience, daubers were the fastest, followed by thick markers, and then thin markers. Once the students get past the sense of unfairness, these differences also reflect an important issue in performance evaluation: technology differences matter. For example, it is not possible to compare running times on different hardware to evaluate algorithmic or system software differences. 
Comparisons must either be made between systems that are identical in all respects except the one being compared, or they must treat each configuration as a "whole system".</p><p>Comparing the third and fourth scenarios shows that the number of processors is not the only factor affecting performance. When asked to explain the difference between the results for these scenarios, the students were readily able to identify the conflict over drawing implements as the main issue; everyone needed the same color at the beginning and only one person at a time could use it. This is an example of contention, another important PDC concept. It is also possible to introduce the idea of dependencies here; certain cells are dependent on previous ones finishing because the drawing implement will be in use by someone else. It would also be possible to discuss how having extra resources would reduce the contention; the students may have even found this themselves if one group had extra drawing implements.</p><p>The fourth scenario sometimes also exposed the principle of pipelining; an effective coordination strategy is to pass the drawing implements around so that each processor gets the right one at any given moment, mimicking the movement of data through an arithmetic pipeline where the data is passed between stages as it is needed. From pipelining, it is a small step to seeing that the pipeline takes time to fill (the processors are idle until they get the first implement).</p></div>
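The speedup discussion above can be made concrete with a small calculation. The sketch below uses hypothetical completion times (the numbers are illustrative, not measured data from our classes); speedup is the sequential time divided by the parallel time, and efficiency is speedup divided by the number of processors, so linear speedup corresponds to an efficiency of 1.0.

```python
# Speedup and efficiency from recorded completion times.
# The times below are made-up illustrative values, not classroom data.
def speedup(t_sequential, t_parallel):
    """Speedup = sequential completion time / parallel completion time."""
    return t_sequential / t_parallel

# Hypothetical results: processors -> completion time in seconds
# (scenarios 1-3 of the activity).
times = {1: 120.0, 2: 70.0, 4: 40.0}

for p, t in times.items():
    s = speedup(times[1], t)
    efficiency = s / p  # fraction of ideal (linear) speedup achieved
    print(f"{p} processor(s): speedup {s:.2f}, efficiency {efficiency:.2f}")
```

With these illustrative numbers, four processors achieve a speedup of 3.0 rather than the linear ideal of 4.0, which mirrors the classroom observation that coordination overhead keeps the teams below linear speedup.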
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. Variations</head><p>Several variations of this activity are possible, and we implemented two specific ones. At Webster University, students studied the impact of more complex flag designs by coloring the French flag (equal vertical stripes of blue, white, and red) and the Canadian flag (a white background with red side stripes and a red maple leaf in the center). To assist with the latter, students were given gridded paper with the maple leaf outlined (see Figure <ref type="figure">2</ref>). Each flag was colored in two scenarios: one with a single student and one with three students dividing the task.</p><p>The speedup varied between the two flags. The simpler French flag saw greater efficiency gains, while the intricate maple leaf in the Canadian flag slowed progress. This allowed for a discussion of load balancing and its effect on speedup.</p><p>The instructor at Webster also used two multimedia resources as part of the discussion. The first of these was a set of custom-created animations <ref type="bibr">[34]</ref> to visualize schedules with different numbers of processors. These visualizations reinforced key concepts by showing the efficiency gains and potential bottlenecks when multiple processors work together.</p><p>The second multimedia resource was a video from NVIDIA meant to compare CPU and GPU execution <ref type="bibr">[31]</ref>. It uses a coloring application as well, but the coloring is done by computer-controlled paintball guns. For CPU drawing, a single barrel is repeatedly aimed and fired to produce one dot at a time. The GPU example uses one barrel per pixel so that the entire image (the Mona Lisa) is drawn in a single shot. This is an extreme example of data parallelism and aligns well with our activity. 
Since the video uses the term "GPU", it also provides an opening to talk about GPUs and how they are used for non-graphical data parallel applications.</p><p>At Knox College, the unplugged activity was preceded by students working on the flag coloring programming assignment. They had begun it several days before during a lab period and were approximately halfway through the time between the assignment's release and its deadline; they could all be assumed to be familiar with the premise of assigning pixel values. This, plus having slightly longer class periods (70 minutes), allowed a follow-up to the core activity during the same class meeting.</p><p>This follow-up activity introduced the idea of dependencies. The idea of coloring the pixels in parallel is in tension with an important technique for more complicated flags: coloring different elements of the flag in layers. For example, the flag of Great Britain (Figure <ref type="figure">3</ref>) is most easily created by coloring the entire flag blue, then adding the crossing diagonal white lines, and then finally coloring the red vertical and horizontal lines. This approach avoids having to make complicated intersection tests between the flag's different features. (The idea is the same as the Painter's algorithm in 3D graphics, which renders complex scenes by drawing polygons in order of their distance from the camera.) Unfortunately, this approach also limits parallelism by introducing dependencies: the background must be colored before the diagonals, which must be colored before the rectilinear lines. Since the students had been working on more complicated flags as part of the flag coloring programming assignment, they readily identified this issue and the difference between parallelizing the flag coloring for Mauritius and Great Britain.</p><p>To formalize this idea, the students were given the definition of a dependency graph (vertices are tasks and directed edges denote dependencies) and asked to draw one for coloring the flag of Jordan, shown in Figure <ref type="figure">4</ref>.</p></div>
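A dependency graph of this kind can also be expressed in code. The sketch below is our own illustration: the task names are hypothetical labels matching the intended solution for the flag of Jordan (stripes first, then the red triangle, then the star), and a topological sort yields one valid sequential coloring order that respects the dependencies.

```python
from collections import deque

# Dependency graph as an adjacency list: task -> tasks that must finish first.
# Task names are our own illustrative labels for the flag of Jordan.
deps = {
    "black stripe": [],
    "white stripe": [],
    "green stripe": [],
    "red triangle": ["black stripe", "white stripe", "green stripe"],
    "white star":   ["red triangle"],
}

def topological_order(deps):
    """Return one valid coloring order using Kahn's algorithm."""
    indegree = {t: len(pre) for t, pre in deps.items()}
    dependents = {t: [] for t in deps}
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(task)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

order = topological_order(deps)
```

The three stripes have no incoming edges and may be colored in parallel; the triangle and star form the sequential tail of the graph, which is exactly the parallelism limit the layering technique introduces.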
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. PRACTICAL ADVICE FOR THE ACTIVITY</head><p>While running the activity a number of times in diverse settings, we have developed some suggestions for how to make it run most smoothly. First of all, it is important for the instructor to complete a "dry run" of the activity with other faculty or with students who are not in the class. Some of the instructions to give students are not easy to convey. This also checks that the drawing implements are appropriate (Are the markers dead? Will they bleed through the paper?). If teaching assistants or other course staff will be running the activity or assisting during the activity, they should be included so they understand student questions.</p><p>We strongly suggest projecting slides with each scenario during the activity to show the task decomposition. Number the cells to efficiently convey the order in which they should be filled, which is otherwise a tricky concept. Our images are shown in Figure <ref type="figure">1</ref>.</p><p>We also suggest showing the students examples of properly filled cells before the activity. There was a wide variety in how well students colored the grid cells; some completely covered the paper and others added a minimal amount of color to each cell. The class as a whole moved in the latter direction during the course of the activity to minimize the tedium of coloring and to reduce the time as they got competitive. We suggest taking a middle road on this, using a back-and-forth scribble that touches all edges of the cell without trying to cover it entirely. This is faster than completely filling a cell while still making it possible to achieve uniformity of time per cell. A nice way to generate sample colored cells is to preserve the results from the instructor's dry run.</p><p>We also feel that it is advantageous to provide students with a variety of drawing implements rather than giving them all equivalent supplies. 
(Originally, we made this decision by default due to a lack of sufficient supplies of a single type.) Having diverse implements does lead to some complaints in the room since it offends students' sense of fairness, but it does show the effect of different hardware. We also found that the students preferred markers to crayons; the institution that used crayons got many complaints about them in the open-ended parts of the activity's survey.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. ASSESSMENT</head><p>We used various approaches to evaluate this activity, with some differences among the six participating institutions. At some of the institutions, a pre-test survey was administered before conducting the activity, followed by a post-test survey. The pre- and post-survey questions were designed to assess student comprehension of key parallel and distributed computing concepts. All six institutions utilized an engagement survey based on the ASPECT (Assessing Student Perspective of Engagement in Class Tool) survey <ref type="bibr">[36]</ref>, which measures student engagement in active-learning exercises, including perceived effort, instructor contribution, and the activity's value. Our survey examined three key aspects: the student experience (their engagement, enjoyment, participation, and focus), their understanding (encompassing comprehension of the material and computing concepts), and instructor effectiveness (preparedness, enthusiasm, and availability). Additionally, at one institution, students' understanding of dependencies was assessed by collecting the dependency graphs they created while coloring the flag of Jordan.</p><p>The following sections discuss each of these measures and their results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Student engagement survey</head><p>Student engagement was measured with a survey administered following the activity. This survey utilized a Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). The survey questions are presented in Figure <ref type="figure">5</ref>. The student engagement survey was administered at all six of our institutions.</p><p>The bar chart in Figure <ref type="figure">6</ref> presents the median scores for each question across the various institutions. A further breakdown of the questions follows.</p><p>As shown in Table <ref type="table">I</ref>, the questions address students' engagement in the activity, such as enjoyment, participation, and focus. Students from USI and Webster reported the highest engagement levels (mostly 5.0). Knox consistently had lower engagement scores (&#8804;4.0). Montclair and TNTech had mixed responses, with Montclair scoring lower in stimulating interest in parallel computing.</p><p>As shown in Table <ref type="table">II</ref>, the questions cover the students' perceived learning of concepts through discussion, group work, and activities. Webster and USI again show the highest scores.</p><p>As shown in Table <ref type="table">III</ref>, the questions assess students' perceptions of the instructor's preparedness, enthusiasm, and availability. Instructor ratings were consistently high (mostly 5.0) at all universities except Knox (4.0). 
The "NA" (Not Applicable) entries in Table <ref type="table">III</ref> indicate questions that Webster University did not include in its survey.</p><p>Question | HPU | Knox | Montclair | TNTech | USI | Webster
The instructor seemed prepared for the activity | 5.0 | 4.0 | 5.0 | 5.0 | 5.0 | 5.0
The instructor put effort into my learning | 5.0 | 4.0 | 5.0 | 5.0 | 5.0 | NA
The instructor's enthusiasm made me more interested in the activity | 5.0 | 4.0 | 5.0 | 5.0 | 5.0 | NA
The instructor and/or TAs were available to answer questions | 5.0 | 4.0 | 5.0 | 5.0 | 5.0 | NA</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>TABLE III MEDIAN SCORES FOR INSTRUCTOR-RELATED QUESTIONS</head><p>The student engagement survey also included two open-ended questions, asking students to share the most interesting thing they learned from the activity and to suggest improvements to the activity for future classes.</p><p>1) Summary of student comments on the most interesting thing they learned from the activity: Student feedback highlighted several key takeaways from the Flag Maker activity related to parallel computing concepts. Many students commented that they better understood how parallel computing operates, particularly that adding more processors does not always result in increased efficiency. Several responses emphasized the concept of diminishing returns, noting that excessive parallelization can lead to resource contention and even slowdowns. The students also appreciated the hands-on nature of the activity, stating that it helped them visualize and better grasp parallel computing principles in a fun and engaging manner. Others mentioned learning about workload distribution, task synchronization, and coordination challenges among multiple processors. Some students reflected on the complexity of parallel processing, recognizing that effective parallelism requires careful planning and appropriate task allocation. A few students reported that they were already familiar with parallel computing concepts, while others expressed interest in applying their new knowledge to programming. In addition, some responses focused on the collaborative aspect of the activity, drawing parallels between teamwork and multiprocessor computing.</p><p>2) Student feedback on improving the Flag Maker activity highlighted several recurring themes: Many students requested better quality crayons or alternative coloring tools, such as markers, to avoid breakage and improve usability. 
Some students suggested modifying the activity structure, including making the tasks more engaging, incorporating more problem-solving elements, or integrating coding exercises to better connect with computing concepts. Others recommended making the activity shorter to avoid redundancy. Several responses emphasized the need for clearer instructions and explanations, particularly on how the activity relates to computing topics like pipelining and parallel processing. Some students requested that key vocabulary be introduced during the activity. There were also calls for larger paper sizes, improved classroom setup to enhance collaboration, and better organization of group work to ensure smoother participation. Some students suggested making the activity more interactive, possibly incorporating a competitive element such as leaderboards or timed challenges. Finally, some students stated that the activity worked well and did not require significant changes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Pre/Post Test Analysis of Student Learning</head><p>Before starting the Flag Maker activity, students at many of the universities were given a pre-test quiz consisting of five multiple-choice and true/false questions to assess their basic understanding of task decomposition, speedup, contention, scalability, and pipelining. After completing the activity, the same questions were administered in a post-test quiz. These quizzes were designed to evaluate the learning outcomes of the activity. Figure <ref type="figure">7</ref> presents the list of multiple-choice and true/false questions on the test given to the students before and after the activity.</p><p>1) Summary of Pre- and Post-Quiz Results Across USI, TNTech, and HPU: Figure <ref type="figure">8</ref> summarizes the pre- and post-quiz results from three universities, USI, TNTech, and HPU, assessing students' understanding of key parallel and distributed computing concepts. The analysis highlights knowledge retention, learning gains, and areas where students struggled the most. Scalability and speedup demonstrated strong retention across institutions, reflecting a solid foundational understanding among students. Conversely, contention and pipelining revealed lower initial comprehension, significant incorrect retention, and knowledge loss after the activity. These findings highlight the need for targeted instructional interventions to improve students' conceptual grasp of these parallel and distributed computing concepts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Analysis of Dependency Graphs</head><p>As noted above, students at Knox were not given the pre/post test for student learning. Instead, their learning about dependencies during the follow-up activity was assessed by collecting the dependency graphs they drew for parallel coloring of the flag of Jordan (see Figure <ref type="figure">4</ref>). They were given the last few minutes of the class period to complete their drawing. At the end of the period, following the IRB-approved procedure, they were asked to submit their work but told that submission was voluntary and had no effect on their grade either way.</p><p>With this procedure, we collected 29 drawings from a class of 65 (45% response rate) split into three sections. We use the total enrollment of 65 as the denominator because we do not know how many students attended that day. We also believe the response rate was artificially suppressed by the first of the three sections, which had less time for the drawing activity because other parts of the activity ran long; this section submitted only 4 drawings.</p><p>All student submissions were examined to evaluate the level of student understanding demonstrated. Our intended solution for the problem is shown in Figure <ref type="figure">9</ref>; the stripes form the first layer and must be drawn first, followed by the red triangle, and then the white dot (a star in the actual flag). 
We do not consider this a difficult problem, and it is similar to the dependency graph for coloring the flag of Great Britain, which was shown as an example, but completing it does demonstrate an understanding of when tasks are dependent.</p><p>When evaluating student submissions, we counted a graph as correct if it omitted the box for drawing the white stripe; in the programming version of flag coloring that the students had been doing, the background is initially white, so a white stripe can be achieved by not drawing anything. Some students were definitely thinking along these lines because they started with a task to draw the white stripe and crossed it out.</p><p>Another variation we saw from some students (5 of 29, 17%) was splitting the red triangle into two parts. This is again consistent with how they were creating this kind of triangle in the programming assignment (split horizontally into two right triangles). The split actually complicates the dependency graph because the top triangle should be independent of the green stripe and the bottom triangle should be independent of the black stripe. None of the students reflected this in their graphs, but we still count them as "mostly correct" since the true correct answer with a split triangle is significantly more complicated than without it.</p><p>Of the submissions, 10 (34%) were perfectly correct. Seven (24%) more were mostly correct; these include the 5 mentioned above who split the triangle, one who used one task for all the stripes, and another who suggested the dependencies spatially but omitted the arrows.</p><p>The most common error for the remaining students was to give a linear chain of tasks. This suggests that they either thought about the graph in terms of sequential code or misunderstood the meaning of a dependency. There were a couple of incomplete submissions, though they all seemed to be working toward a linear solution as well. 
There were also a few students (4 of 29, 14%) who did not demonstrate any learning; they drew the flag or started giving code to draw it.</p><p>The students who were at least mostly correct made up 59% of the respondents. Because this level was achieved with a single example, we suggest that a small amount of additional time (and examples) would suffice to teach the concept.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. DISCUSSION AND FUTURE WORK</head><p>Overall, the flag coloring unplugged activity provides an engaging introduction to parallel and distributed computing. The students appreciate its active nature and we believe that students seeing examples of parallel concepts in practice makes it easier for instructors to teach those concepts. Specifically, the students are exposed to speedup, system warmup, how hardware differences can make results incomparable, and the challenges presented by interprocessor communication and resource management.</p><p>While we are satisfied with the core activity, we plan to enhance the supporting components to improve student learning further. More instructors intend to incorporate the video shown at Webster, which demonstrates data parallelism through coloring. This video may become a pre-activity assignment to introduce key concepts or a post-activity reinforcement that connects data parallelism to GPU processing. Additionally, we aim to expand the discussion of dependencies, as implemented at Knox, to provide a deeper understanding while maintaining the overall brevity of the activity. Furthermore, with continued</p></div></body>
		</text>
</TEI>
