Title: Wayfinding Information Cognitive Load Classification based on Functional Near-Infrared Spectroscopy (fNIRS)
Amid the rapid development of building information technologies, wayfinding information has become more accessible to building users and first responders. As a result, a realistic risk of cognitive overload from processing wayfinding information has begun to emerge. As cognition-driven adaptive wayfinding information systems gain traction as a way to overcome the cognitive overload caused by overwhelming information, a practical and non-invasive method to monitor and classify cognitive load during the processing of wayfinding information is needed. This paper tests a Functional Near-Infrared Spectroscopy (fNIRS)-based method to identify cognitive load related to wayfinding information processing. It provides a holistic fNIRS signal analytical pipeline that extracts hemodynamic response features in the prefrontal cortex (PFC) for cognitive load classification. A human-subject experiment (N = 15) based on the Sternberg working memory test was performed to model the relationship between fNIRS features and cognitive load. Personalized models were also evaluated to capture individual differences and to identify the features that contribute most for each person. The results show that the fNIRS-based model can classify cognitive load changes driven by different levels of task difficulty with satisfactory performance (average accuracy of 70.02 ± 4.41 percent). The findings also demonstrate that personalized models, rather than universal models, are needed for classifying cognitive load from neuroimaging data. fNIRS shows comparative advantages over other neuroimaging methods for cognitive load classification, given its robustness to motion artifacts and its satisfactory predictive performance.
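A minimal sketch of what such a pipeline could look like is given below, assuming prefrontal HbO time series sampled at roughly 10 Hz: each Sternberg trial window is band-pass filtered, summarized by simple hemodynamic features (mean, peak, and slope per channel), and scored with a per-subject classifier. The filter band, feature set, sampling rate, and RBF-kernel SVM are illustrative assumptions, not the exact pipeline reported in the paper.

```python
# Illustrative sketch only: a per-subject cognitive-load classifier built from
# fNIRS HbO features. Filter band, features, and the SVM are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def bandpass(hbo, fs, low=0.01, high=0.2, order=3):
    """Keep the slow hemodynamic band; suppress drift and fast physiological noise."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, hbo, axis=-1)

def trial_features(hbo_trial):
    """hbo_trial: (n_channels, n_samples) HbO window for one Sternberg trial."""
    t = np.arange(hbo_trial.shape[-1])
    slopes = np.polyfit(t, hbo_trial.T, 1)[0]           # linear trend per channel
    return np.concatenate([hbo_trial.mean(axis=-1),     # mean activation
                           hbo_trial.max(axis=-1),      # peak activation
                           slopes])

def personalized_accuracy(trials, labels, fs=10.0):
    """Cross-validated accuracy of one subject's low- vs. high-load classifier."""
    X = np.vstack([trial_features(bandpass(tr, fs)) for tr in trials])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```

A universal model could be approximated by pooling trials across subjects before cross-validation, which is the comparison that motivates the paper's argument for personalization.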
Award ID(s):
1937878
NSF-PAR ID:
10234015
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of computing in civil engineering
ISSN:
1943-5487
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Firefighters are often exposed to extensive wayfinding information in various formats owing to the increasing complexity of the built environment. Because of individual differences in processing assorted types of information, a personalized cognition-driven intelligent system is necessary to reduce cognitive load and improve performance in wayfinding tasks. However, the mixed, multi-dimensional information encountered during wayfinding tasks poses severe challenges for intelligent systems in detecting and nowcasting the attention of users. In this research, a virtual wayfinding experiment is designed to simulate the human response when subjects are memorizing or recalling different wayfinding information. Convolutional neural networks (CNNs) are designed for automated attention detection based on the power spectral density of electroencephalography (EEG) data collected during the experiment. The performance of the personalized model and the generalized model is compared, and the results show that a personalized CNN is a powerful classifier for detecting user attention with high accuracy and efficiency. The study will thus serve as a foundation for the future development of personalized cognition-driven intelligent systems.
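    A hypothetical sketch of this kind of classifier is shown below: Welch power spectral densities are computed per EEG channel and passed to a small 1D CNN, trained separately per subject for the personalized case. The channel count, sampling rate, and layer sizes are illustrative assumptions, not the architecture reported in the study.

```python
# Illustrative sketch: EEG power spectral density features -> small CNN
# attention classifier. Channel count, fs, and layer sizes are assumptions.
import torch
import torch.nn as nn
from scipy.signal import welch

def psd_features(eeg, fs=256, nperseg=256):
    """eeg: (n_channels, n_samples) array -> (n_channels, n_freq_bins) PSD tensor."""
    _, pxx = welch(eeg, fs=fs, nperseg=nperseg, axis=-1)
    return torch.tensor(pxx, dtype=torch.float32)

class AttentionCNN(nn.Module):
    def __init__(self, n_channels, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                  # x: (batch, n_channels, n_freq_bins)
        return self.net(x)

# Personalized use: fit one model per subject on that subject's trials only.
eeg_trial = torch.randn(14, 2560).numpy()              # 10 s of 14-channel EEG at 256 Hz
model = AttentionCNN(n_channels=14)
logits = model(psd_features(eeg_trial).unsqueeze(0))   # (1, 2) attention logits
```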
  2. Abstract
    Background: Identification of reliable, affordable, and easy-to-use strategies for the detection of dementia is sorely needed. Digital technologies, such as individual voice recordings, offer an attractive modality for assessing cognition, but methods that can automatically analyze such data are not readily available.
    Methods and findings: We used 1264 voice recordings of neuropsychological examinations administered to participants from the Framingham Heart Study (FHS), a community-based longitudinal observational study. The recordings were 73 min in duration, on average, and contained at least two speakers (participant and examiner). Of the total voice recordings, 483 were of participants with normal cognition (NC), 451 were of participants with mild cognitive impairment (MCI), and 330 were of participants with dementia (DE). We developed two deep learning models (a two-level long short-term memory (LSTM) network and a convolutional neural network (CNN)) that used the audio recordings to classify whether a recording came from a participant with NC or with DE, and to differentiate recordings of participants with DE from those without DE (i.e., NDE (NC + MCI)). Based on 5-fold cross-validation, the LSTM model achieved a mean (± std) area under the receiver operating characteristic curve (AUC) of 0.740 ± 0.017, mean balanced accuracy of 0.647 ± 0.027, and mean weighted F1 score of 0.596 ± 0.047 in classifying cases with DE from those with NC. The CNN model achieved a mean AUC of 0.805 ± 0.027, mean balanced accuracy of 0.743 ± 0.015, and mean weighted F1 score of 0.742 ± 0.033 in classifying cases with DE from those with NC. For the task of classifying participants with DE from NDE, the LSTM model achieved a mean AUC of 0.734 ± 0.014, mean balanced accuracy of 0.675 ± 0.013, and mean weighted F1 score of 0.671 ± 0.015, while the CNN model achieved a mean AUC of 0.746 ± 0.021, mean balanced accuracy of 0.652 ± 0.020, and mean weighted F1 score of 0.635 ± 0.031.
    Conclusion: This proof-of-concept study demonstrates that automated, deep-learning-driven processing of audio recordings of neuropsychological testing performed on individuals recruited within a community cohort setting can facilitate dementia screening.
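    A minimal sketch of the LSTM pathway, assuming each recording is segmented into sequences of frame-level acoustic features, is given below; the two stacked LSTM layers, 40-dimensional features, and segment length are illustrative placeholders rather than the study's actual preprocessing of the FHS recordings.

```python
# Illustrative sketch: acoustic frame sequences -> stacked LSTM -> NC vs. DE logits.
# Feature type, dimensionality, and layer sizes are assumptions.
import torch
import torch.nn as nn

class DementiaLSTM(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):             # frames: (batch, time, n_features)
        _, (h, _) = self.lstm(frames)      # h: (num_layers, batch, hidden)
        return self.head(h[-1])            # classify from the last layer's final state

# Example: a long recording chunked into 8 segments of 500 feature frames each.
model = DementiaLSTM()
logits = model(torch.randn(8, 500, 40))    # (8, 2) segment-level NC/DE scores
```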
  3. Abstract

    Machine learning has been increasingly applied to neuroimaging data to predict age, deriving a personalized biomarker with potential clinical applications. The scientific and clinical value of these models depends on their applicability to independently acquired scans from diverse sources. Accordingly, we evaluated the generalizability of two brain age models that were trained across the lifespan by applying them to three distinct early‐life samples with participants aged 8–22 years. These models were chosen based on the size and diversity of their training data, but they also differed greatly in their processing methods and predictive algorithms. Specifically, one brain age model was built by applying gradient tree boosting (GTB) to extracted features of cortical thickness, surface area, and brain volume. The other model applied a 2D convolutional neural network (DBN) to minimally preprocessed slices of T1‐weighted scans. Additional model variants were created to understand how generalizability changed when each model was trained with data that became more similar to the test samples in terms of age and acquisition protocols. Our results illustrated numerous trade‐offs. The GTB predictions were relatively more accurate overall and yielded more reliable predictions when applied to lower quality scans. In contrast, the DBN displayed the most utility in detecting associations between brain age gaps and cognitive functioning. Broadly speaking, the largest limitations affecting generalizability were acquisition protocol differences and biased brain age estimates. If such confounds could eventually be removed without post‐hoc corrections, brain age predictions may have greater utility as personalized biomarkers of healthy aging.

     
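    The GTB-style model can be sketched as gradient tree boosting regressed on a table of morphometric features, with the brain age gap computed as predicted minus chronological age; the synthetic data, feature layout, and hyperparameters below are assumptions for illustration only.

```python
# Illustrative sketch: gradient tree boosting on morphometric features to predict
# age; the synthetic data and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_regions = 500, 68
X = rng.normal(size=(n_subjects, n_regions * 3))   # thickness, surface area, volume per region
age = rng.uniform(8, 80, size=n_subjects)          # lifespan-style training ages

X_tr, X_te, y_tr, y_te = train_test_split(X, age, random_state=0)
gtb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gtb.fit(X_tr, y_tr)

# Brain age gap: the personalized biomarker whose associations with cognition
# and with acquisition protocol are examined in the study.
brain_age_gap = gtb.predict(X_te) - y_te
```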
  4. Recognizing the affective state of children with autism spectrum disorder (ASD) in real-world settings poses challenges due to varying head poses, illumination levels, occlusion, and a lack of datasets annotated with emotions in in-the-wild scenarios. Understanding the emotional state of children with ASD is crucial for providing personalized interventions and support. Existing methods often rely on controlled lab environments, limiting their applicability to real-world scenarios. Hence, a framework that enables the recognition of affective states in children with ASD in uncontrolled settings is needed. This paper presents such a framework based on heart rate (HR) information: an algorithm is developed that classifies a participant’s emotion as positive, negative, or neutral by analyzing the heart rate signal acquired from a smartwatch. The heart rate data are obtained in real time using a smartwatch application while the child learns to code a robot and interacts with an avatar that assists the child in developing communication skills and programming the robot. We also present a semi-automated annotation technique for the heart rate data based on facial expression recognition. The HR signal is analyzed to extract features that capture the emotional state of the child, and the performance of a raw-HR-signal-based emotion classification algorithm is compared with a classification approach based on features extracted from HR signals using the discrete wavelet transform (DWT). The experimental results demonstrate that the proposed method achieves performance comparable to state-of-the-art HR-based emotion recognition techniques, despite being conducted in an uncontrolled setting rather than a controlled lab environment. By enabling emotion recognition in uncontrolled settings, this framework contributes to real-world affect analysis and has the potential to improve the monitoring and understanding of the emotional well-being of children with ASD in their daily lives.

     
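    A hypothetical sketch of the DWT feature pathway appears below: each heart rate segment is decomposed with a discrete wavelet transform, sub-band energies serve as features, and a standard classifier predicts the affect label. The wavelet, decomposition level, random forest classifier, and synthetic data are assumptions, not the paper's exact settings.

```python
# Illustrative sketch: DWT sub-band energies of smartwatch HR segments as
# features for affect classification. Wavelet, level, and classifier are assumptions.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def dwt_features(hr_segment, wavelet="db4", level=4):
    """Energy of each wavelet sub-band of one heart-rate segment."""
    coeffs = pywt.wavedec(hr_segment, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Placeholder data; real labels would come from the semi-automated,
# facial-expression-based annotation described above.
rng = np.random.default_rng(0)
hr_segments = rng.normal(75, 5, size=(60, 256))    # 60 segments of HR samples
emotion_labels = rng.integers(0, 3, size=60)       # 0=negative, 1=neutral, 2=positive

X = np.vstack([dwt_features(seg) for seg in hr_segments])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, emotion_labels)
```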
  5. Cognitive processes do not occur by pure insertion and instead depend on the full complement of co-occurring mental processes, including perceptual and motor functions. As such, there is limited ecological validity to human neuroimaging experiments that use highly controlled tasks to isolate mental processes of interest. However, a growing literature shows how dynamic, interactive tasks have allowed researchers to study cognition as it more naturally occurs. Collective analysis across such neuroimaging experiments may answer broader questions regarding how naturalistic cognition is biologically distributed throughout the brain. We applied an unbiased, data-driven, meta-analytic approach that uses k-means clustering to identify core brain networks engaged across the naturalistic functional neuroimaging literature. Functional decoding then allowed us to delineate how information is distributed between these networks throughout the execution of dynamic cognition in realistic settings. This analysis revealed six recurrent patterns of brain activation, representing sensory, domain-specific, and attentional neural networks that support the cognitive demands of naturalistic paradigms. Although gaps in the literature remain, these results suggest that naturalistic fMRI paradigms recruit a common set of networks that allow both separate processing of different streams of information and integration of relevant information to enable flexible cognition and complex behavior.
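    The clustering step can be sketched as k-means over a studies-by-features activation matrix, with the number of clusters chosen by a silhouette criterion; the synthetic matrix and the selection rule below are assumptions, and the functional decoding stage is not reproduced.

```python
# Illustrative sketch: k-means over study-level activation patterns to recover
# recurrent networks. The input matrix and model selection are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
study_maps = rng.normal(size=(200, 5000))   # 200 naturalistic studies x activation features

# Choose k by silhouette; the reported meta-analysis settled on six recurrent patterns.
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(study_maps)
    scores[k] = silhouette_score(study_maps, labels)
k_best = max(scores, key=scores.get)

networks = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit(study_maps)
print(k_best, np.bincount(networks.labels_))
```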