Although pain is widely recognized to be a multidimensional experience, it is typically measured with a unidimensional, patient self-reported visual analog scale (VAS). However, self-reported pain is subjective, difficult to interpret, and sometimes impossible to obtain. Machine learning models have been developed to automatically recognize pain at both the frame level and the sequence (or video) level. Many methods use or learn facial action units (AUs), defined by the Facial Action Coding System (FACS) to describe facial expressions in terms of muscle movements. In this paper, we analyze the relationship between sequence-level multidimensional pain measurements and frame-level AUs, along with an AU-derived pain-related measure, the Prkachin and Solomon Pain Intensity (PSPI). We study methods that learn sequence-level metrics from frame-level metrics. Specifically, we explore an extended multitask learning model to predict VAS from human-labeled AUs, with the help of other sequence-level pain measurements during training. This model consists of two parts: a multitask learning neural network that predicts multidimensional pain scores, and an ensemble learning model that linearly combines the multidimensional pain scores to best approximate VAS. Starting from human-labeled AUs, the model achieves a mean absolute error (MAE) on VAS of 1.73, outperforming the provided human sequence-level estimates, which have an MAE of 1.76. Combining our machine learning model with the human estimates gives the best performance, an MAE on VAS of 1.48.
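To make the two-part design concrete, the following is a minimal PyTorch sketch of a multitask regressor with a shared trunk and one head per sequence-level pain measure, followed by a least-squares linear ensemble that combines the predicted scores to approximate VAS. The input features, layer sizes, and task count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultitaskPainNet(nn.Module):
    """Shared trunk with one regression head per sequence-level pain measure."""

    def __init__(self, in_dim: int, hidden: int = 64, n_tasks: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One head per pain measure (e.g., VAS plus other sequence-level scores).
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.trunk(x)
        return torch.cat([head(h) for head in self.heads], dim=1)  # (batch, n_tasks)


def fit_linear_ensemble(scores: torch.Tensor, vas: torch.Tensor) -> torch.Tensor:
    """Least-squares weights that linearly combine the predicted pain scores
    (plus a bias term) to best approximate VAS."""
    A = torch.cat([scores, torch.ones(scores.shape[0], 1)], dim=1)
    return torch.linalg.lstsq(A, vas.unsqueeze(1)).solution  # (n_tasks + 1, 1)
```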
Pain Action Unit Detection in Critically Ill Patients
Existing pain assessment methods in the intensive care unit rely on patient self-report or visual observation by nurses. Patient self-report is subjective and can suffer from poor recall. For non-verbal patients, behavioral pain assessment methods provide limited granularity, are subjective, and place an additional burden on already overworked staff. Previous studies have shown the feasibility of autonomous pain expression assessment by detecting Facial Action Units (AUs); however, previous approaches for detecting facial pain AUs have been limited to controlled environments. In this study, for the first time, we collected and annotated a pain-related AU dataset, Pain-ICU, containing 55,085 images from critically ill adult patients. We evaluated the performance of OpenFace, an open-source facial behavior analysis tool, and of an AU R-CNN model trained on our Pain-ICU dataset. Variables such as assisted breathing devices, environmental lighting, and patient orientation with respect to the camera make AU detection harder than in controlled settings. Although OpenFace has shown state-of-the-art results on general-purpose AU detection tasks, it could not accurately detect AUs in our Pain-ICU dataset (F1-score of 0.42). To address this problem, we trained the AU R-CNN model on our Pain-ICU dataset, resulting in a satisfactory average F1-score of 0.77. In this study, we show the feasibility of detecting facial pain AUs in uncontrolled ICU settings.
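As a concrete illustration of the evaluation above, here is a minimal sketch of macro-averaged F1 scoring over a set of pain-related AUs, given binary per-frame ground truth and detector outputs. The AU list and data layout are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np
from sklearn.metrics import f1_score

# Assumed pain-related AU set (illustrative; the PSPI formulation uses
# brow lowering, orbital tightening, levator contraction, and eye closure).
PAIN_AUS = ["AU4", "AU6", "AU7", "AU9", "AU10", "AU43"]


def per_au_f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """y_true, y_pred: (n_frames, n_aus) binary AU-occurrence matrices.
    Prints per-AU F1 and returns the macro average."""
    scores = [f1_score(y_true[:, i], y_pred[:, i]) for i in range(len(PAIN_AUS))]
    for name, s in zip(PAIN_AUS, scores):
        print(f"{name}: F1 = {s:.2f}")
    return float(np.mean(scores))
```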
- Award ID(s): 1750192
- PAR ID: 10314817
- Date Published:
- Journal Name: IEEE Computer Society Computers, Software, and Applications Conference (COMPSAC)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Sleep has been shown to be an indispensable component of patients' recovery process. Nonetheless, the sleep quality of patients in the Intensive Care Unit (ICU) is often low due to factors such as noise, pain, and frequent nursing care activities. Frequent sleep disruptions by medical staff and/or visitors at certain times may disrupt a patient's sleep-wake cycle and can also affect the severity of pain. Examining the association between sleep quality and frequent visitation has been difficult due to the lack of automated methods for visitation detection. In this study, we recruited 38 patients and automatically assessed visitation frequency from captured video frames. We used the DensePose R-CNN (ResNet-101) model to count the number of people in the room in each video frame. We examined when patients were interrupted the most, and we examined the association between frequent disruptions and patient outcomes, including pain and length of stay.
- Patients staying in the Intensive Care Unit (ICU) have severely disrupted circadian rhythms. Because of patients' critical medical condition, ICU physicians and nurses must provide round-the-clock clinical care, further disrupting patients' circadian rhythms. Mistimed family visits during rest time can also disrupt patients' circadian rhythms. Currently, such effects are reported only on the basis of hospital visitation policies, rather than the actual number of visitors and care providers in the room. To quantify visitation disruptions, we used a Mask R-CNN model, a deep learning framework for object instance segmentation, to detect and quantify the number of individuals in the ICU room. This study represents the first effort to automatically quantify visitations in an ICU room, which could have implications for policy adjustment as well as circadian rhythm research. Our model achieved a precision of 0.97, a recall of 0.67, and an F1-score of 0.79 for detecting disruptions in ICU rooms.
- Currently, many critical care indices, such as the physical function or facial pain expressions of nonverbal patients, are repetitively assessed and recorded by overburdened nurses. In addition, much essential information on patients and their environment is not captured at all, or is captured only in a non-granular manner, e.g., sleep disturbance factors such as bright light, loud background noise, or excessive visitation. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring of critically ill patients and their environment in the Intensive Care Unit (ICU). As an exemplar prevalent condition, we also characterized delirious and non-delirious patients and their environments. We used wearable sensors, light and sound sensors, and a high-resolution camera to collect data on patients and their environment, and we analyzed the collected data using deep learning and statistical analysis. Our system performed face detection, face recognition, facial action unit detection, head pose detection, facial expression recognition, posture recognition, actigraphy analysis, sound pressure and light level detection, and visitation frequency detection. We were able to detect patients' faces (mean average precision (mAP) = 0.94), recognize patients' faces (mAP = 0.80), and recognize their postures (F1 = 0.94). We also found that all facial expressions, 11 activity features, visitation frequency during the day, visitation frequency during the night, light levels, and sound pressure levels during the night were significantly different between delirious and non-delirious patients (p-value < 0.05). In summary, we showed that granular and autonomous monitoring of critically ill patients and their environment is feasible and can be used to characterize critical care conditions and related environmental factors.
- Stephanidis, Constantine; Chen, Jessie Y.; Fragomeni, Gino (Eds.) Post-traumatic stress disorder (PTSD) is a mental health condition affecting people who have experienced a traumatic event. In addition to the clinical diagnostic criteria for PTSD, behavioral changes in voice, language, facial expression, and head movement may occur. In this paper, we demonstrate how a machine learning model trained on a general population with self-reported PTSD scores can be used to provide behavioral metrics that could enhance the accuracy of clinical diagnosis. Both datasets were collected from a clinical interview conducted by a virtual agent (SimSensei) [10]. The clinical data were recorded from PTSD patients, victims of sexual assault, undergoing VR exposure therapy. A recurrent neural network was trained on verbal, visual, and vocal features to recognize PTSD according to self-reported PCL-C scores [4]. We then performed decision fusion over the three modalities to recognize PTSD in patients with a clinical diagnosis, achieving an F1-score of 0.85. Our analysis demonstrates that machine-based PTSD assessment trained on self-reported PTSD scores can generalize across different groups and be deployed to assist the diagnosis of PTSD.
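The decision-fusion step described in the last abstract can be illustrated with a minimal late-fusion sketch that averages per-modality PTSD probabilities and thresholds the result. Equal weighting is an assumption for illustration, not necessarily the paper's exact fusion rule.

```python
import numpy as np


def fuse_decisions(probs_by_modality: dict, threshold: float = 0.5) -> np.ndarray:
    """probs_by_modality: modality name -> (n_subjects,) PTSD probabilities.
    Averages the modalities and thresholds the fused score."""
    stacked = np.stack(list(probs_by_modality.values()))  # (n_modalities, n_subjects)
    return (stacked.mean(axis=0) >= threshold).astype(int)


# Example: three modality classifiers scoring two subjects.
labels = fuse_decisions({
    "verbal": np.array([0.7, 0.2]),
    "visual": np.array([0.6, 0.4]),
    "vocal":  np.array([0.8, 0.3]),
})  # -> array([1, 0])
```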