Summary: Previous research has linked rapid eye movement sleep to emotional processing, particularly stress. Laboratory studies indicate that rapid eye movement sleep deprivation and fragmentation heighten emotional reactivity and the stress response. This relationship extends to natural settings, where poor-quality sleep among college students correlates with increased academic stress and lower academic performance. However, there is little research into how specific sleep stages, such as rapid eye movement sleep, affect the development of real-life stress. This study investigated whether habitual rapid eye movement sleep in college students can predict the future development of real-life stress symptoms associated with final exams. Fifty-two participants (mean age = 19 years, 62% female) monitored their sleep for a week during the academic semester using a mobile electroencephalogram device, then completed self-evaluations measuring test anxiety and other relevant factors. They completed the same evaluations again just prior to final exams. We found that rapid eye movement sleep was the strongest predictor of changes in participants' test anxiety. However, contrary to our predictions, habitual rapid eye movement sleep was associated with an increase, rather than a decrease, in anxiety. We discuss these results in terms of the rapid eye movement recalibration hypothesis, which proposes that rapid eye movement sleep modulates activity in stress-encoding areas of the brain, leading to both decreased sensitivity and increased selectivity of stress responses.
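A minimal sketch of the kind of prediction analysis such a study implies: regressing the change in test anxiety on habitual REM sleep. The variable names, covariates, and data below are illustrative assumptions, not the study's actual measures or model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 52  # sample size reported in the abstract

# Synthetic stand-ins for the hypothesized measures (hypothetical names).
df = pd.DataFrame({
    "rem_minutes": rng.normal(100, 20, n),     # average nightly REM duration
    "total_sleep": rng.normal(420, 45, n),     # total sleep time as a covariate
    "anxiety_baseline": rng.normal(40, 8, n),  # test-anxiety score mid-semester
})
# A positive REM coefficient here mirrors the reported direction of the effect.
df["anxiety_exam"] = (df["anxiety_baseline"]
                      + 0.15 * (df["rem_minutes"] - 100)
                      + rng.normal(0, 4, n))
df["anxiety_change"] = df["anxiety_exam"] - df["anxiety_baseline"]

# Ordinary least squares: change in anxiety predicted by REM, controlling
# for total sleep time.
X = sm.add_constant(df[["rem_minutes", "total_sleep"]])
model = sm.OLS(df["anxiety_change"], X).fit()
print(model.summary())
```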
Can we predict stressful technical interview settings through eye-tracking?
Recently, eye-tracking analysis for estimating cognitive load and stress during whiteboard problem-solving in technical interviews has been finding its way into the software engineering community. However, there is no empirical study of how much the characteristics of the interview setting affect eye-movement measurements. Without knowing this, the results of research that analyzes eye-movement measurements for stress detection will not be reliable. In this paper, we analyzed the eye movements of 11 participants in two interview settings, one on a whiteboard and the other on paper, to determine whether the characteristics of the interview setting affect the analysis of participants' stress. To this end, we applied 7 machine learning classification algorithms to three different labeling strategies of the data, suggesting to researchers in the domain a useful practice of checking the reliability of eye measurements before reporting any results.
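As a rough illustration of the practice described above, the sketch below benchmarks seven off-the-shelf classifiers across alternative labeling strategies. The feature set, label definitions, and classifier choices are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))  # e.g., fixation/saccade features per window

# Three hypothetical labeling strategies for "stressed" vs. "not stressed".
labelings = {
    "self_report": rng.integers(0, 2, 200),
    "setting":     rng.integers(0, 2, 200),  # whiteboard vs. paper
    "physio":      rng.integers(0, 2, 200),
}

classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm":    SVC(),
    "tree":   DecisionTreeClassifier(),
    "forest": RandomForestClassifier(),
    "knn":    KNeighborsClassifier(),
    "nb":     GaussianNB(),
    "mlp":    MLPClassifier(max_iter=2000),
}

# If accuracy shifts markedly between labelings/settings, the eye-movement
# measurements are confounded by the setting and should be reported with care.
for lname, y in labelings.items():
    for cname, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{lname:12s} {cname:8s} acc={acc:.2f}")
```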
- Award ID(s):
- 1755762
- PAR ID:
- 10085744
- Date Published:
- Journal Name:
- Proceedings of the Workshop on Eye Movements in Programming (EMIP'18)
- Page Range / eLocation ID:
- 1 to 5
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Software engineering candidates commonly participate in whiteboard technical interviews as part of a hiring assessment. During these sessions, candidates write code while thinking aloud as they work toward a solution, under the watchful eye of an interviewer. While technical interviews should allow for an unbiased and inclusive assessment of problem-solving ability, surprisingly, they may instead be a procedure for identifying candidates who best handle and mitigate stress caused solely by being examined by an interviewer (performance anxiety). To understand whether coding interviews---as administered today---can induce stress that significantly hinders performance, we conducted a randomized controlled trial with 48 Computer Science students, comparing them in private and public whiteboard settings. We found that performance is reduced by more than half simply by being watched by an interviewer. We also observed that stress and cognitive load were significantly higher in a traditional technical interview than in our private interview. Consequently, interviewers may be filtering out qualified candidates by confounding assessment of problem-solving ability with unnecessary stress. We propose interview modifications to make problem-solving assessment more equitable and inclusive, such as private focus sessions and retrospective think-aloud, allowing companies to hire from a larger and more diverse pool of talent.
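A minimal sketch of the between-group comparison such a trial implies, with synthetic scores standing in for the real data; only the shape of the analysis is illustrated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical per-participant task scores (24 per arm of the trial).
public = rng.normal(40, 15, 24)   # watched by an interviewer
private = rng.normal(85, 15, 24)  # private whiteboard setting

# Mann-Whitney U makes no normality assumption about small interview samples.
u, p = stats.mannwhitneyu(private, public, alternative="greater")
print(f"U={u:.0f}, p={p:.4f}, "
      f"median ratio={np.median(public) / np.median(private):.2f}")
```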
-
Background: Upper limb proprioceptive impairments are common after stroke and affect daily function. Recent work has shown that stroke survivors have difficulty using visual information to improve proprioception. It is unclear how eye movements are impacted to guide action of the arm after stroke. Here, we aimed to understand how upper limb proprioceptive impairments impact eye movements in individuals with stroke. Methods: Control (N = 20) and stroke participants (N = 20) performed a proprioceptive matching task with upper limb and eye movements. A KINARM exoskeleton with eye tracking was used to assess limb and eye kinematics. The upper limb was passively moved by the robot, and participants matched the location with either an arm or an eye movement. Accuracy was measured as the difference between the passive robot movement location and the active limb match (Hand End-Point Error) or the active eye movement match (Eye End-Point Error). Results: We found that individuals with stroke had significantly larger Hand (2.1×) and Eye (1.5×) End-Point Errors compared to controls. Further, we found that proprioceptive errors of the hand and eye were highly correlated in stroke participants (r = .67, P = .001), a relationship not observed for controls. Conclusions: Eye movement accuracy declined as a function of proprioceptive impairment of the more-affected limb, which was used as a proprioceptive reference. The inability to use proprioceptive information from the arm to coordinate eye movements suggests that disordered proprioception impacts the integration of sensory information across modalities. These results have important implications for how vision is used to actively guide limb movement during rehabilitation.
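A hedged sketch of how the two end-point error measures and their correlation could be computed; the array shapes and noise model are assumptions standing in for KINARM kinematic output, not the study's processing code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_participants, n_trials = 20, 30
# Passive robot-defined target locations per trial (cm, x/y).
target = rng.uniform(-10, 10, (n_participants, n_trials, 2))

# Simulated matching responses with shared per-participant noise so the
# hand and eye errors correlate, as reported for the stroke group.
impairment = rng.uniform(0.5, 3.0, (n_participants, 1, 1))
hand_end = target + impairment * rng.normal(0, 1.0, target.shape)
eye_end = target + impairment * rng.normal(0, 0.7, target.shape)

# End-point error = Euclidean distance from target, averaged over trials.
hand_err = np.linalg.norm(hand_end - target, axis=2).mean(axis=1)
eye_err = np.linalg.norm(eye_end - target, axis=2).mean(axis=1)

r, p = stats.pearsonr(hand_err, eye_err)
print(f"Hand vs. eye end-point error: r={r:.2f}, p={p:.3f}")
```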
-
In the realm of virtual reality (VR) research, the synergy of methodological advancements, technical innovation, and novel applications is paramount. Our work encapsulates these facets in the context of spatial ability assessments conducted within a VR environment. This paper presents a comprehensive and integrated framework of VR, eye-tracking, and electroencephalography (EEG), which combines measuring participants' behavioral performance with simultaneously collecting time-stamped eye-tracking and EEG data, enabling an understanding of how spatial ability is impacted under certain conditions and whether such conditions demand increased attention and mental allocation. This framework encompasses the measurement of participants' gaze patterns (e.g., fixations and saccades), EEG data (e.g., Alpha, Beta, Gamma, and Theta wave patterns), and psychometric and behavioral test performance. On the technical front, we utilized the Unity 3D game engine as the core for running our spatial ability tasks, simulating altered conditions of space exploration. We simulated two types of space exploration conditions: (1) a microgravity condition in which participants' idiotropic (body) axis is statically and dynamically misaligned with their visual axis; and (2) a Martian terrain condition that offers a visual frame of reference (FOR) but with limited and unfamiliar landmark objects. We specifically targeted assessing human spatial ability and spatial perception. To assess spatial ability, we digitized the Purdue Spatial Visualization Test: Rotations (PSVT:R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test, and integrated them into the VR settings to evaluate participants' spatial visualization, spatial relations, and spatial orientation ability, respectively. For spatial perception, we applied digitized versions of size and distance perception tests to measure participants' subjective perception of size and distance. A suite of C# scripts orchestrated the VR experience, enabling real-time data collection and synchronization. This technical innovation includes the integration of data streams from diverse sources, such as VIVE controllers, eye-tracking devices, and EEG hardware, to ensure a cohesive and comprehensive dataset. A pivotal challenge in our research was synchronizing data from EEG, eye tracking, and VR tasks to facilitate comprehensive analysis. To address this challenge, we employed the Unity interface of the OpenSync library, a tool designed to unify disparate data sources in the fields of psychology and neuroscience. This approach ensures that all collected measures share a common time reference, enabling meaningful analysis of participant performance, gaze behavior, and EEG activity. The Unity-based system seamlessly incorporates task parameters, participant data, and VIVE controller inputs, providing a versatile platform for conducting assessments in diverse domains. Finally, we collected synchronized measurements of participants' scores on the behavioral tests of spatial ability and spatial perception, their gaze data, and their EEG data. In this paper, we present the whole process of combining the eye-tracking and EEG workflows into the VR settings and collecting the relevant measurements. We believe that our work not only advances the state of the art in spatial ability assessments but also underscores the potential of virtual reality as a versatile tool in cognitive research, therapy, and rehabilitation.
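The synchronization step can be illustrated with a nearest-timestamp merge; this is not the OpenSync API, just a sketch of aligning two sampling rates on a shared clock, with assumed stream names and rates.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
# Two streams recorded against the same clock but at different rates.
eeg = pd.DataFrame({
    "t": np.arange(0, 10, 1 / 256),           # 256 Hz EEG samples
    "alpha_power": rng.normal(1.0, 0.2, 2560),
})
gaze = pd.DataFrame({
    "t": np.arange(0, 10, 1 / 120),           # 120 Hz eye tracker
    "fixation": rng.integers(0, 2, 1200),
})

# Nearest-neighbour join within a tolerance pairs every gaze sample with
# the closest EEG sample on the shared timeline.
merged = pd.merge_asof(gaze.sort_values("t"), eeg.sort_values("t"),
                       on="t", direction="nearest", tolerance=0.01)
print(merged.head())
```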
-
Abstract: This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording, including (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5/6) free viewing of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye-movement biometrics, along with other applications of machine learning to eye-movement signal analysis. Classification labels produced by the instrument's real-time parser are provided for a subset of GazeBase, along with pupil area.
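A hypothetical sketch of simple velocity-threshold (I-VT) event labeling on a 1,000 Hz recording like those in GazeBase; the column names, threshold, and synthetic signal are assumptions, not the dataset's documented schema.

```python
import numpy as np
import pandas as pd

fs = 1000.0  # GazeBase sampling rate, Hz
rng = np.random.default_rng(5)
# Stand-in for one recording: gaze angles in degrees over 2 seconds.
df = pd.DataFrame({
    "x_deg": np.cumsum(rng.normal(0, 0.02, 2000)),
    "y_deg": np.cumsum(rng.normal(0, 0.02, 2000)),
})

# Angular velocity from sample-to-sample displacement.
vel = np.hypot(np.diff(df["x_deg"]), np.diff(df["y_deg"])) * fs

# I-VT: samples faster than a threshold (30 deg/s assumed) are saccades.
df["event"] = "fixation"
df.loc[1:, "event"] = np.where(vel > 30.0, "saccade", "fixation")

print(df["event"].value_counts())
```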