Wearable devices for continuous health monitoring in humans are constantly evolving, yet the signal quality may be improved by optimizing electrode placement. While the commonly used locations to measure electrodermal activity (EDA) are at the fingers or the wrist, alternative locations, such as the torso, need to be considered when applying an integrated multimodal approach of concurrently recording multiple bio-signals, such as the monitoring of visceral pain symptoms like those related to irritable bowel syndrome (IBS). This study aims to quantitatively determine the EDA signal quality at four torso locations (mid-chest, upper abdomen, lower back, and mid-back) in comparison to EDA signals recorded from the fingers. Concurrent EDA signals from five body locations were collected from twenty healthy participants as they completed a Stroop task and a cold pressor task that elicited salient autonomic responses. Mean skin conductance (meanSCL), non-specific skin conductance responses (NS.SCRs), and sympathetic response (TVSymp) were derived from the torso EDA signals and compared with signals from the fingers. Notably, TVSymp recorded from the mid-chest location showed significant changes between the baseline and Stroop phases, consistent with the TVSymp recorded from the fingers. A high correlation (0.77–0.83) was also identified between TVSymp recorded from the fingers and three torso locations: mid-chest, upper abdomen, and lower back. While the fingertips remain the optimal site for EDA measurement, the mid-chest exhibited the strongest potential as an alternative recording site, with the upper abdomen and lower back also demonstrating promising results. These findings suggest that torso-based EDA measurements have the potential to provide reliable measurement of sympathetic neural activities and may be incorporated into a wearable belt system for multimodal monitoring.
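The first two metrics above can be sketched in a few lines of code. This is a minimal illustration only: the trace values, the assumption of a fixed-rate skin-conductance series, and the 0.05 µS response-amplitude threshold are invented for the example, not parameters taken from the study.

```python
# Hypothetical sketch of two EDA metrics: mean skin conductance level (meanSCL)
# and a count of non-specific skin conductance responses (NS.SCRs).
# The 0.05 uS amplitude threshold and the toy trace are illustrative assumptions.

def mean_scl(scl):
    """Mean skin conductance level over a window, in microsiemens."""
    return sum(scl) / len(scl)

def count_ns_scrs(scl, threshold=0.05):
    """Count NS.SCRs: local peaks whose rise from the preceding trough
    exceeds an amplitude threshold (in microsiemens)."""
    count = 0
    trough = scl[0]
    rising = False
    for prev, cur in zip(scl, scl[1:]):
        if cur > prev:
            if not rising:
                trough = prev  # remember the trough where the rise began
                rising = True
        elif cur < prev and rising:
            if prev - trough >= threshold:  # peak amplitude vs. last trough
                count += 1
            rising = False
    return count

# Toy skin-conductance trace (uS), evenly sampled.
trace = [2.0, 2.0, 2.1, 2.3, 2.2, 2.1, 2.1, 2.4, 2.6, 2.5]
print(round(mean_scl(trace), 2))
print(count_ns_scrs(trace))
```

In practice, published EDA pipelines typically low-pass filter the signal and use validated peak-detection criteria before counting responses; this sketch only conveys the shape of the computation.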
MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks
This paper presents MOCAS, a multimodal dataset dedicated to human cognitive workload (CWL) assessment. In contrast to existing datasets based on virtual game stimuli, the data in MOCAS were collected from realistic closed-circuit television (CCTV) monitoring tasks, increasing its applicability to real-world scenarios. To build MOCAS, two off-the-shelf wearable sensors and one webcam were utilized to collect physiological signals and behavioral features from 21 human subjects. After each task, participants reported their CWL by completing the NASA-Task Load Index (NASA-TLX) and Instantaneous Self-Assessment (ISA). Personal background (e.g., personality and prior experience) was surveyed using demographic and Big Five Factor personality questionnaires, and two domains of subjective emotion information (i.e., arousal and valence) were obtained from the Self-Assessment Manikin (SAM), which could serve as potential indicators for improving CWL recognition performance. Technical validation was conducted to demonstrate that target CWL levels were elicited during simultaneous CCTV monitoring tasks; its results support the high quality of the collected multimodal signals.
- Award ID(s): 1846221
- PAR ID: 10595337
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Transactions on Affective Computing
- Volume: 16
- Issue: 1
- ISSN: 2371-9850
- Page Range / eLocation ID: 116 to 132
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- A multimodal dataset is presented for cognitive fatigue assessment, comprising minimally invasive physiological sensory data, Electrocardiography (ECG) and Electrodermal Activity (EDA), together with self-reported cognitive fatigue scores collected during HRI. Data were collected from 16 non-STEM participants, up to three visits each, during which the subjects interacted with a robot to prepare a meal and get ready for work. For some of the visits, a well-established cognitive test was used to induce cognitive fatigue. The developed cognitive fatigue assessment framework filtered noise from the raw signals, extracted relevant features, and applied machine learning regression algorithms, such as Support Vector Regression (SVR), Gradient Boosting Machine (GBM), and Random Forest Regressor (RFR), to estimate the Cognitive Fatigue (CF) level.
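The final regression step of such a framework can be illustrated with a minimal stand-in. Ordinary least squares is used below in place of the SVR/GBM/RFR models named in the abstract, and both the feature values (heart rate and NS.SCR counts) and the fatigue scores are invented for the example.

```python
# Sketch of the regression step, assuming features have already been extracted
# from ECG/EDA. Ordinary least squares via the normal equations stands in for
# SVR/GBM/RFR; all numbers are illustrative assumptions.

def fit_ols(X, y):
    """Fit weights (plus intercept) by solving (A^T A) w = A^T y."""
    A = [row + [1.0] for row in X]  # append bias column
    n = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    b = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

def predict(w, features):
    """Dot product of learned weights and features, plus intercept."""
    return sum(wi * xi for wi, xi in zip(w, features)) + w[-1]

# Invented per-session features: [mean heart rate (bpm), NS.SCR count],
# paired with self-reported cognitive fatigue scores (0-10 scale).
X = [[62, 1], [70, 3], [78, 5], [85, 7]]
y = [2.0, 4.0, 6.0, 8.0]
w = fit_ols(X, y)
print(round(predict(w, [74, 4]), 1))
```

A production pipeline would of course use a tested library implementation (e.g., the regressors named in the abstract) with cross-validation rather than a hand-rolled solver.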
- High-stress environments, such as a NASA Control Room, require optimal task performance, as a single mistake may cause monetary loss or the loss of human life. Robots can partner with humans in a collaborative or supervisory paradigm. Such teaming paradigms require the robot to appropriately interact with the human without decreasing either's task performance. Workload is directly correlated with task performance; thus, a robot may use a human's workload state to modify its interactions with the human. A diagnostic workload assessment algorithm that accurately estimates workload using results from two evaluations, one peer-based and one supervisory-based, is presented.
- The interaction and collaboration between humans and multiple robots represent a novel field of research known as human multirobot systems. Adequately designed systems within this field allow teams composed of both humans and robots to work together effectively on tasks, such as monitoring, exploration, and search and rescue operations. This article presents a deep reinforcement learning-based affective workload allocation controller specifically for multihuman multirobot teams. The proposed controller can dynamically reallocate workloads based on the performance of the operators during collaborative missions with multirobot systems. The operators' performances are evaluated through the scores of a self-reported questionnaire (i.e., subjective measurement) and the results of a deep learning-based cognitive workload prediction algorithm that uses physiological and behavioral data (i.e., objective measurement). To evaluate the effectiveness of the proposed controller, we conduct an exploratory user experiment with various allocation strategies. The user experiment uses a multihuman multirobot CCTV monitoring task as an example and carries out comprehensive real-world experiments with 32 human subjects for both quantitative measurement and qualitative analysis. Our results demonstrate the performance and effectiveness of the proposed controller and highlight the importance of incorporating both subjective and objective measurements of the operators' cognitive workload, as well as seeking consent for workload transitions, to enhance the performance of multihuman multirobot teams.
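The allocation idea can be conveyed with a toy, rule-based sketch standing in for the deep reinforcement learning controller described above: blend subjective and objective workload into one score per operator, then propose moving a task from the most loaded operator to the least loaded one, gated on consent. The blending weight, the 0-1 score scales, and the consent handling are all illustrative assumptions.

```python
# Rule-based stand-in for an affective workload allocation controller.
# Scores, weights, and the consent step are illustrative assumptions.

def combined_workload(subjective, objective, w_subj=0.5):
    """Blend a 0-1 self-reported score with a 0-1 predicted score."""
    return w_subj * subjective + (1 - w_subj) * objective

def propose_transfer(operators, consents):
    """operators: name -> (subjective, objective, assigned_feeds).
    Returns (src, dst, feed) if a transfer is proposed and consented,
    otherwise None."""
    scores = {name: combined_workload(s, o)
              for name, (s, o, _) in operators.items()}
    src = max(scores, key=scores.get)   # most loaded operator
    dst = min(scores, key=scores.get)   # least loaded operator
    feeds = operators[src][2]
    if src == dst or not feeds or not consents.get(src, False):
        return None  # nothing to move, or the operator declined
    return (src, dst, feeds[-1])

ops = {
    "alice": (0.9, 0.8, ["cam1", "cam2", "cam3"]),
    "bob":   (0.2, 0.3, ["cam4"]),
}
print(propose_transfer(ops, {"alice": True, "bob": True}))
```

The actual controller learns its reallocation policy rather than applying a fixed rule, but the inputs (subjective plus objective workload) and the consent gate mirror the design the abstract describes.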
- Human state recognition is a critical topic with pervasive and important applications in human–machine systems. Multimodal fusion, which entails integrating metrics from various data sources, has proven to be a potent method for boosting recognition performance. Although recent multimodal-based models have shown promising results, they often fall short in fully leveraging sophisticated fusion strategies essential for modeling adequate cross-modal dependencies in the fusion representation. Instead, they rely on costly and inconsistent feature crafting and alignment. To address this limitation, we propose an end-to-end multimodal transformer framework for multimodal human state recognition called Husformer. Specifically, we propose using cross-modal transformers, which inspire one modality to reinforce itself through directly attending to latent relevance revealed in other modalities, to fuse different modalities while ensuring sufficient awareness of the cross-modal interactions introduced. Subsequently, we utilize a self-attention transformer to further prioritize contextual information in the fusion representation. Extensive experiments on two human emotion corpora (DEAP and WESAD) and two cognitive load datasets (MOCAS and CogLoad) demonstrate that, in the recognition of the human state, our Husformer outperforms both state-of-the-art multimodal baselines and the use of a single modality by a large margin, especially when dealing with raw multimodal features. We also conducted an ablation study to show the benefits of each component in Husformer. Experimental details and source code are available at https://github.com/SMARTlab-Purdue/Husformer.
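The cross-modal attention at the heart of this kind of fusion can be sketched as single-head scaled dot-product attention in which tokens from one modality act as queries over tokens from another. The dimensions and values below are toy assumptions; the real Husformer stacks learned, multi-head versions of this mechanism.

```python
# Minimal single-head cross-modal attention in pure Python: tokens of one
# modality (queries) attend to tokens of another (keys/values), so the query
# stream is reinforced by relevant context from the other modality.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """queries: tokens of modality A; keys/values: tokens of modality B.
    Each token is a list of floats; queries and keys share a dimension d."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much each B-token matters to q
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: 2 ECG-derived tokens attend to 3 EDA-derived tokens.
ecg   = [[1.0, 0.0], [0.0, 1.0]]
eda_k = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
eda_v = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
fused = cross_attention(ecg, eda_k, eda_v)
print([[round(x, 2) for x in tok] for tok in fused])
```

Each fused ECG token ends up weighted toward the EDA value whose key best matches it, which is the "one modality reinforcing itself from another" behavior the abstract describes; a framework implementation would add learned projections, multiple heads, and residual connections.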
 An official website of the United States government