Title: CogBeacon: A Multi-Modal Dataset and Data-Collection Platform for Modeling Cognitive Fatigue
In this work, we present CogBeacon, a multi-modal dataset designed to capture the effects of cognitive fatigue on human performance. The dataset consists of 76 sessions collected from 19 male and female users performing different versions of a cognitive task inspired by the principles of the Wisconsin Card Sorting Test (WCST), a popular cognitive test in experimental and clinical psychology designed to assess cognitive flexibility, reasoning, and specific aspects of cognitive functioning. During each session, we record and fully annotate the user's EEG activity, facial keypoints, and real-time self-reports on cognitive fatigue, as well as detailed information on the performance metrics achieved during the cognitive task (success rate, response time, number of errors, etc.). Along with the dataset, we provide free access to the CogBeacon data-collection software to give the community a standardized mechanism for collecting and annotating physiological and behavioral data for cognitive fatigue analysis. Our goal is to provide other researchers with the tools to expand or modify the functionalities of the CogBeacon data-collection framework in a hardware-independent way. As a proof of concept, we show some preliminary machine-learning experiments on cognitive fatigue detection that use the EEG information and the subjective user reports as ground truth. Our experiments highlight the meaningfulness of the current dataset and encourage our efforts towards expanding the CogBeacon platform. To our knowledge, this is the first multi-modal dataset specifically designed to assess cognitive fatigue and the only free software available that allows experiment reproducibility for multi-modal cognitive fatigue analysis.
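As an illustration of the kind of preliminary experiment described above, the following is a minimal sketch of a fatigue classifier trained on windowed EEG features, with the real-time self-reports serving as ground-truth labels. The file name, feature layout, and column names are hypothetical, not the actual CogBeacon export format.

```python
# Hypothetical sketch: classify cognitive fatigue from windowed EEG features,
# using participants' real-time self-reports as ground-truth labels.
# The CSV layout and column names are assumptions, not the CogBeacon format.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score

df = pd.read_csv("cogbeacon_eeg_features.csv")  # hypothetical export
X = df.filter(like="band_power_").to_numpy()    # e.g., per-channel band powers
y = df["fatigue_report"].to_numpy()             # self-reported fatigue (0/1)
groups = df["session_id"].to_numpy()            # one group per recording session

# Leave whole sessions out of each test fold so the classifier
# cannot memorize a participant's recording.
cv = GroupKFold(n_splits=5)
scores = []
for train_idx, test_idx in cv.split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"mean accuracy: {np.mean(scores):.3f}")
```

Grouping the folds by session keeps windows from the same recording out of both the training and test splits, which avoids inflating the accuracy estimate.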
Award ID(s):
1719031
PAR ID:
10467947
Author(s) / Creator(s):
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Technologies
Volume:
7
Issue:
2
ISSN:
2227-7080
Page Range / eLocation ID:
46
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent research in empirical software engineering applies techniques from neurocognitive science and is breaking new ground in how researchers model and analyze the cognitive processes of developers as they interact with software artifacts. However, given the novelty of this line of research, only one tool exists to help researchers represent and analyze this kind of multi-modal biometric data. While this tool does help with visualizing temporal eye-tracking and physiological data, it does not allow physiological data to be mapped to source-code elements, instead projecting information over images of code. One drawback is that researchers are still unable to meaningfully combine and map physiological and eye-tracking data to source-code artifacts. The use of images also rules out support for long or multiple code files, which prevents researchers from analyzing data from experiments conducted in realistic settings. To address these drawbacks, we propose VITALSE, a tool for the interactive visualization of combined multi-modal biometric data for software engineering tasks. VITALSE provides interactive and customizable temporal heatmaps created from synchronized eye-tracking and biometric data. The tool supports analysis across multiple files, user-defined annotations for points of interest over source-code elements, and high-level, customizable metric summaries for the provided dataset. VITALSE, a video demonstration, and sample data demonstrating its capabilities can be found at http://www.vitalse.app.
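The core idea behind mapping gaze data to source-code elements can be pictured as accumulating fixation dwell time per source line. The sketch below assumes a fixed line height in pixels and hypothetical fixation records; it illustrates the idea rather than VITALSE's actual implementation.

```python
# Hypothetical sketch of the mapping idea: project synchronized fixations onto
# source-code lines and accumulate dwell time into a per-line heatmap.
# Field names and the mapping function are assumptions, not VITALSE's API.
from collections import defaultdict

def line_heatmap(fixations, line_height_px, first_line=1):
    """fixations: iterable of dicts with 'y' (pixels) and 'duration_ms'."""
    heat = defaultdict(float)
    for fx in fixations:
        line = first_line + int(fx["y"] // line_height_px)
        heat[line] += fx["duration_ms"]
    return dict(heat)

fixations = [
    {"y": 12.0, "duration_ms": 240.0},   # lands on line 1
    {"y": 35.0, "duration_ms": 410.0},   # lands on line 3
    {"y": 14.5, "duration_ms": 180.0},   # line 1 again
]
print(line_heatmap(fixations, line_height_px=16))  # {1: 420.0, 3: 410.0}
```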
  2. Studying group dynamics requires fine-grained spatial and temporal understanding of human behavior. Social psychologists studying human interaction patterns in face-to-face group meetings often find themselves struggling with huge volumes of data that require many hours of tedious manual coding. There are only a few publicly available multi-modal datasets of face-to-face group meetings that enable the development of automated methods to study verbal and non-verbal human behavior. In this paper, we present a new, publicly available multi-modal dataset for group dynamics study that differs from previous datasets in its use of ceiling-mounted, unobtrusive depth sensors. These can be used for fine-grained analysis of head and body pose and gestures, without any concerns about participants' privacy or inhibited behavior. The dataset is complemented by synchronized and time-stamped meeting transcripts that allow analysis of spoken content. The dataset comprises 22 group meetings in which participants perform a standard collaborative group task designed to measure leadership and productivity. Participants' post-task questionnaires, including demographic information, are also provided as part of the dataset. We show the utility of the dataset in analyzing perceived leadership, contribution, and performance, by presenting results of multi-modal analysis using our sensor-fusion algorithms designed to automatically understand audio-visual interactions. 
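Analyses of such a dataset typically begin by aligning the time-stamped transcript with the depth-sensor frames. Here is a minimal sketch, assuming simple (timestamp, speaker, text) utterance tuples and a sorted list of frame timestamps; neither reflects the dataset's actual file format.

```python
# Hypothetical sketch: align time-stamped transcript utterances with the
# nearest ceiling-mounted depth-sensor frame. Record formats are assumptions.
import bisect

def align_utterances(utterances, frame_times):
    """utterances: list of (start_s, speaker, text); frame_times: sorted
    frame timestamps in seconds. Returns (frame_index, speaker, text)."""
    aligned = []
    for start, speaker, text in utterances:
        i = bisect.bisect_left(frame_times, start)
        # pick whichever neighboring frame is closer in time
        if i > 0 and (i == len(frame_times)
                      or start - frame_times[i - 1] < frame_times[i] - start):
            i -= 1
        aligned.append((i, speaker, text))
    return aligned

frames = [0.0, 0.033, 0.066, 0.100]
utts = [(0.05, "P1", "I think we should start."), (0.09, "P2", "Agreed.")]
print(align_utterances(utts, frames))
```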
  3. In the realm of virtual reality (VR) research, the synergy of methodological advancements, technical innovation, and novel applications is paramount. Our work encapsulates these facets in the context of spatial ability assessments conducted within a VR environment. This paper presents a comprehensive, integrated framework of VR, eye tracking, and electroencephalography (EEG) that combines measurement of participants' behavioral performance with simultaneous collection of time-stamped eye-tracking and EEG data, enabling us to understand how spatial ability is affected under certain conditions and whether those conditions demand increased attention and mental resources. The framework encompasses the measurement of participants' gaze patterns (e.g., fixations and saccades), EEG data (e.g., Alpha, Beta, Gamma, and Theta wave patterns), and psychometric and behavioral test performance. On the technical front, we used the Unity 3D game engine as the core for running our spatial ability tasks, simulating altered conditions of space exploration. We simulated two types of space exploration conditions: (1) a microgravity condition in which participants' idiotropic (body) axis is statically and dynamically misaligned with their visual axis; and (2) conditions of Martian terrain that offer a visual frame of reference (FOR) but with limited and unfamiliar landmark objects. We specifically targeted the assessment of human spatial ability and spatial perception. To assess spatial ability, we digitized the Purdue Spatial Visualization Test: Rotations (PSVT:R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test and integrated them into the VR settings to evaluate participants' spatial visualization, spatial relations, and spatial orientation abilities, respectively. For spatial perception, we applied digitized versions of size and distance perception tests to measure participants' subjective perception of size and distance. A suite of C# scripts orchestrated the VR experience, enabling real-time data collection and synchronization. This technical innovation includes the integration of data streams from diverse sources, such as VIVE controllers, eye-tracking devices, and EEG hardware, to ensure a cohesive and comprehensive dataset. A pivotal challenge in our research was synchronizing data from EEG, eye tracking, and the VR tasks to facilitate comprehensive analysis. To address this challenge, we employed the Unity interface of the OpenSync library, a tool designed to unify disparate data sources in psychology and neuroscience. This approach ensures that all collected measures share a common time reference, enabling meaningful analysis of participant performance, gaze behavior, and EEG activity. The Unity-based system seamlessly incorporates task parameters, participant data, and VIVE controller inputs, providing a versatile platform for conducting assessments in diverse domains. In the end, we collected synchronized measurements of participants' scores on the behavioral tests of spatial ability and spatial perception, their gaze data, and their EEG data. In this paper, we present the whole process of combining the eye-tracking and EEG workflows into the VR settings and collecting the relevant measurements. We believe that our work not only advances the state of the art in spatial ability assessments but also underscores the potential of virtual reality as a versatile tool in cognitive research, therapy, and rehabilitation.
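The common-time-reference step can be illustrated by resampling independently time-stamped streams onto one shared clock. The sketch below is not the OpenSync API; the sampling rates, offsets, and signals are stand-ins chosen for illustration.

```python
# Hypothetical sketch of the synchronization step: resample independently
# time-stamped EEG and gaze streams onto one shared clock so every sample can
# be analyzed against the same task timeline. This is not the OpenSync API;
# the rates, offsets, and signals are assumptions for illustration.
import numpy as np

def to_common_clock(t_src, x_src, t_common):
    """Linearly interpolate a stream (t_src, x_src) onto shared timestamps."""
    return np.interp(t_common, t_src, x_src)

# Streams with different rates and start offsets (seconds, arbitrary units).
t_eeg = np.arange(0.0, 10.0, 1.0 / 256.0)       # 256 Hz EEG
eeg = np.sin(2 * np.pi * 10 * t_eeg)            # stand-in alpha-band signal
t_gaze = np.arange(0.05, 10.0, 1.0 / 120.0)     # 120 Hz eye tracker, offset
gaze_x = np.cos(0.5 * t_gaze)                   # stand-in horizontal gaze

t_common = np.arange(0.1, 9.9, 1.0 / 60.0)      # shared 60 Hz timeline
eeg_sync = to_common_clock(t_eeg, eeg, t_common)
gaze_sync = to_common_clock(t_gaze, gaze_x, t_common)
print(eeg_sync.shape == gaze_sync.shape)        # True: aligned sample-for-sample
```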
  4. We present a standalone Matlab software platform, complete with visualization, for the reconstruction of neural activity in the brain from MEG or EEG data. The underlying inversion combines hierarchical Bayesian models and Krylov subspace iterative least-squares solvers. The Bayesian framework of the inversion algorithm makes it possible to account for anatomical information and a possible a priori belief about the focality of the reconstruction. The computational efficiency makes the software suitable for reconstructing lengthy time series on standard computing equipment. The algorithm requires minimal user-provided input parameters, although the user can express the desired focality and accuracy of the solution. The code is designed to favor Matlab's automatic parallelization, according to the resources of the host computer. We demonstrate the flexibility of the platform by reconstructing activity patterns with supports of different sizes from MEG and EEG data. Moreover, we show that the software accurately reconstructs activity patches located either in subcortical brain structures or on the cortex. The inverse solver and visualization modules can be used either individually or in combination. We also provide a version of the inverse solver that can be used within the Brainstorm toolbox. All the software, including the Brainstorm plugin, is available online on GitHub, with accompanying documentation and test data.
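Although the platform itself is Matlab, the underlying idea can be sketched in a few lines of Python: solve the ill-posed inverse problem b = Ax with a Krylov iterative least-squares method, after weighting the unknowns by a diagonal prior that encodes an a priori belief about focality. The lead-field matrix, prior values, and dimensions below are stand-ins, not the platform's actual model.

```python
# Hypothetical sketch (in Python, though the platform is Matlab) of the idea:
# solve the ill-posed MEG/EEG inverse problem b = A x with a Krylov
# least-squares solver and a diagonal prior encoding a focality belief.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 1000
A = rng.standard_normal((n_sensors, n_sources))  # stand-in lead-field matrix

x_true = np.zeros(n_sources)
x_true[100:105] = 5.0                            # focal activity patch
b = A @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Prior weighting: larger prior variance where activity is believed likely
# (e.g., from anatomical information).
theta = np.full(n_sources, 0.1)
theta[90:120] = 1.0
W = np.sqrt(theta)

# Change of variables x = W * z, then a damped Krylov least-squares solve.
z = lsqr(A * W[np.newaxis, :], b, damp=1.0, iter_lim=200)[0]
x_hat = W * z
print("index of recovered peak:", int(np.argmax(np.abs(x_hat))))
```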
  5. Cognitive fatigue (CF) is the decline in cognitive abilities due to prolonged exposure to mentally demanding tasks. In this paper, we use gait cycle analysis, a biometric method based on human locomotion, to identify cognitive fatigue in individuals. The proposed system takes two asynchronous videos of an individual's gait and classifies whether the person is cognitively fatigued. We leverage the pose-estimation library OpenPose to extract body keypoints from the frames of the videos. To capture the spatial and temporal information of the gait cycle, the system uses a CNN-based model to extract embedded features, which are then used to classify the individual's level of cognitive fatigue. To train and test the model, we built a gait dataset from 21 participants by collecting walking data before and after inducing cognitive fatigue with clinically used games. The proposed model classifies cognitive fatigue from an individual's gait data with an accuracy of 81%.
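To make the pipeline concrete, here is a minimal sketch of a CNN of the kind described above, operating on windows of OpenPose body keypoints. The 25-keypoint (x, y) layout, layer sizes, and window length are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch: a small 1D CNN that classifies cognitive fatigue from a
# sequence of OpenPose body keypoints. Layer sizes, the 25-keypoint (x, y)
# layout, and the window length are assumptions, not the paper's model.
import torch
import torch.nn as nn

class GaitFatigueCNN(nn.Module):
    def __init__(self, n_channels=50, n_classes=2):  # 25 keypoints * (x, y)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # pool over the gait cycle
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                            # x: (batch, 50, frames)
        return self.classifier(self.features(x).squeeze(-1))

model = GaitFatigueCNN()
clip = torch.randn(8, 50, 120)                       # 8 clips of 120 frames
logits = model(clip)                                 # fatigued vs. not fatigued
print(logits.shape)                                  # torch.Size([8, 2])
```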