
Title: Exploring relationships between electrodermal activity, skin temperature, and performance during engineering exams
Students' academic learning, performance, and motivation are ongoing topics in engineering education. Studies that have attempted to understand the mechanisms of motivation in authentic classroom settings are few and limited by the methods used (e.g., self-reports, observations). This Work-in-Progress study explores the utility of electrodermal activity (EDA) and temperature sensors in accurately informing scholars about student performance during an exam in real time. Correlations among these factors were analyzed. Initial results suggest that peripheral skin temperature has a weak, positive, but significant correlation with exam question difficulty (r = 0.08; p < 0.001). Electrodermal activity and skin temperature showed a weak, positive, but significant correlation (r = 0.13; p < 0.05). Electrodermal activity also showed a weak, positive, but significant correlation with exam question difficulty (r = 0.16; p < 0.01). Skin temperature correlations with the difficulty index did not change across semesters (r = 0.18; p < 0.001). We also developed a multiple regression model and found moderately significant relationships between EDA, difficulty index, and skin temperature (r = 0.45; p < 0.05). The findings suggest that performance is tied to physiological responses among students during exam taking, indicating a possible connection between emotions and cognition via physiology.
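This page does not include the underlying analysis code; the following is a minimal sketch, assuming a per-question data table with hypothetical columns eda, skin_temp, and difficulty_index, of how pairwise Pearson correlations and a multiple regression like those reported above could be computed in Python. The data, column names, and units are illustrative assumptions, not the authors' dataset or pipeline.

# Hypothetical sketch: pairwise Pearson correlations and a multiple regression
# relating EDA, skin temperature, and item difficulty. Synthetic data only.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # e.g., student-by-question observations (assumed)
df = pd.DataFrame({
    "difficulty_index": rng.uniform(0.2, 0.95, n),  # proportion answering the item correctly
    "skin_temp": rng.normal(33.0, 1.0, n),          # peripheral skin temperature (deg C)
    "eda": rng.gamma(2.0, 0.5, n),                  # electrodermal activity (microsiemens)
})

# Pairwise Pearson correlations with p-values
for x, y in [("skin_temp", "difficulty_index"),
             ("eda", "skin_temp"),
             ("eda", "difficulty_index")]:
    r, p = stats.pearsonr(df[x], df[y])
    print(f"{x} vs {y}: r = {r:.2f}, p = {p:.3g}")

# Multiple regression: EDA as a function of difficulty index and skin temperature
X = sm.add_constant(df[["difficulty_index", "skin_temp"]])
model = sm.OLS(df["eda"], X).fit()
print(model.summary())
print(f"multiple R = {np.sqrt(model.rsquared):.2f}")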
Authors:
Award ID(s):
1661117
Publication Date:
NSF-PAR ID:
10298660
Journal Name:
Proceedings Frontiers in Education Conference
ISSN:
0190-5848
Sponsoring Org:
National Science Foundation
More Like this
  1. The current study examined the neural correlates of spatial rotation in eight engineering undergraduates. Mastering engineering graphics requires students to mentally visualize in 3D and mentally rotate parts when developing 2D drawings. Students' spatial rotation skills play a significant role in learning and mastering engineering graphics. Traditionally, the assessment of students' spatial skills involves no measurements of neural activity during student performance of spatial rotation tasks. We used electroencephalography (EEG) to record neural activity while students performed the Revised Purdue Spatial Visualization Test: Visualization of Rotations (Revised PSVT:R). The two main objectives were to 1) determine whether high versus low performers on the Revised PSVT:R show differences in EEG oscillations and 2) identify EEG oscillatory frequency bands sensitive to item difficulty on the Revised PSVT:R. Overall performance on the Revised PSVT:R determined whether participants were considered high or low performers: students scoring 90% or higher were considered high performers (5 students), whereas students scoring under 90% were considered low performers (3 students). Time-frequency analysis of the EEG data quantified power in several oscillatory frequency bands (alpha, beta, theta, gamma, delta) for comparison between low and high performers, as well as between difficulty levels of the spatial rotation problems. Although we did not find any significant effects of performance type (high, low) on EEG power, we observed a trend of reduced absolute delta and gamma power for hard problems relative to easier problems. Decreases in delta power have been reported elsewhere for difficult relative to easy arithmetic calculations, and attributed to greater external attention (e.g., attention to the stimuli/numbers) and, consequently, reduced internal attention (e.g., mentally performing the calculation). In the current task, a total of three spatial objects are presented. An example rotation stimulus is presented, showing a spatial object before and after rotation. A target stimulus, or spatial object before rotation, is then displayed. Students must choose one of five stimuli (multiple-choice options) that indicates the correct representation of the object after rotation. Reduced delta power in the current task implies that students showed greater attention to the example and target stimuli for the hard problem, relative to the moderate and easy problems. Therefore, preliminary findings suggest that students are less efficient at encoding the target stimuli (external attention) prior to mental rotation (internal attention) when task difficulty increases. Our findings indicate that delta power may be used to identify spatial rotation items that are especially challenging for students. We may then determine the efficacy of spatial rotation interventions among engineering education students, using delta power as an index of increases in internal attention (e.g., increased delta power). Further, in future work, we will also use eye-tracking to assess whether our intervention decreases eye fixation (e.g., time spent viewing) toward the target stimulus on the Revised PSVT:R. By simultaneously using EEG and eye-tracking, we may identify changes in internal attention and encoding of the target stimuli that are predictive of improvements in spatial rotation skills among engineering education students. (A minimal sketch of the band-power computation described here appears after this list.)
  2. In teaching mechanics, we use multiple representations of vectors to develop concepts and analysis techniques. These representations include pictorials, diagrams, symbols, numbers and narrative language. Through years of study as students, researchers, and teachers, we develop a fluency rooted in a deep conceptual understanding of what each representation communicates. Many novice learners, however, struggle to gain such understanding and rely on superficial mimicry of the problem solving procedures we demonstrate in examples. The term representational competence refers to the ability to interpret, switch between, and use multiple representations of a concept as appropriate for learning, communication and analysis. In engineering statics, an understanding of what each vector representation communicates and how to use different representations in problem solving is important to the development of both conceptual and procedural knowledge. Science education literature identifies representational competence as a marker of true conceptual understanding. This paper presents development work for a new assessment instrument designed to measure representational competence with vectors in an engineering mechanics context. We developed the assessment over two successive terms in statics courses at a community college, a medium-sized regional university, and a large state university. We started with twelve multiple-choice questions that survey the vector representations commonly employed in statics. Each question requires the student to interpret and/or use two or more different representations of vectors and requires no calculation beyond single digit integer arithmetic. Distractor answer choices include common student mistakes and misconceptions drawn from the literature and from our teaching experience. We piloted these twelve questions as a timed section of the first exam in fall 2018 statics courses at both Whatcom Community College (WCC) and Western Washington University. Analysis of students' unprompted use of vector representations on the open-ended problem-solving section of the same exam provides evidence of the assessment's validity as a measurement instrument for representational competence. We found a positive correlation between students' accurate and effective use of representations and their score on the multiple choice test. We gathered additional validity evidence by reviewing student responses on an exam wrapper reflection. We used item difficulty and item discrimination scores (point-biserial correlation) to eliminate two questions and revised the remaining questions to improve clarity and discriminatory power. We administered the revised version in two contexts: (1) again as part of the first exam in the winter 2019 Statics course at WCC, and (2) as an extra credit opportunity for statics students at Utah State University. This paper includes sample questions from the assessment to illustrate the approach. The full assessment is available to interested instructors and researchers through an online tool. (A minimal sketch of the item difficulty and point-biserial discrimination statistics appears after this list.)
  5. In engineering, students’ completion of prerequisites indicates an understanding of fundamental knowledge. Recent studies have shown a significant relationship between student performance and prior knowledge. Weak knowledge retention from prerequisite coursework can present challenges in progressive learning. This study investigates the relationship between prior knowledge and students’ performance over a few courses of Statics. Statistics has been considered as the subject of interest since it is the introductory engineering course upon which many subsequent engineering courses rely, including many engineering analysis and design courses. The prior knowledge was determined based on the quantitative and qualitative preparedness. A quiz set wasmore »designed to assess quantitative preparedness. The qualitative preparedness was assessed using a survey asking students’ subjective opinions about their preparedness at the beginning of the semester. Student performance was later quantified through final course grades. Each set of data were assigned three categories for grouping purposes to reflect preparedness: 1) high preparedness: 85% or higher score, 2) medium preparedness: between 60% and 85%, and 3) weak preparedness: 60% or lower. Pearson correlation coefficient and T-test was conducted on 129 students for linear regression and differences in means. The analysis revealed a non-significant correlation between the qualitative preparedness and final scores (p-value = 0.29). The data revealed that students underestimated their understanding of the prerequisites for the class, since the quantitative preparedness scores were relatively higher than the qualitative preparedness scores. This can be partially understood by the time gap between when prerequisites were taken and when the course under investigation was taken. Students may have felt less confident at first but were able to pick up the required knowledge quickly. A moderately significant correlation between students’ quantitative preparedness and course performance was observed (p -value < 0.05). Students with high preparedness showed > 80% final scores, with a few exceptions; students with weak preparedness also showed relatively high final scores. However, most of the less prepared students made significant efforts to overcome their weaknesses through continuous communication and follow-up with the instructor. Despite these efforts, these students could not obtain higher than 90% as final scores, which indicates that level of preparedness reflects academic excellence. Overall, this study highlights the role of prior knowledge in achieving academic excellence for engineering. The study is useful to Civil Engineering instructors to understand the role of students’ previous knowledge in their understanding of difficult engineering concepts.« less