Title: Exploring Multidimensional Measurements for Pain Evaluation using Facial Action Units
Although pain is widely recognized to be a multidimensional experience, it is typically measured with a unidimensional, patient self-reported visual analog scale (VAS). However, self-reported pain is subjective, difficult to interpret, and sometimes impossible to obtain. Machine learning models have been developed to automatically recognize pain at both the frame level and the sequence (or video) level. Many methods use or learn facial action units (AUs), defined by the Facial Action Coding System (FACS) for describing facial expressions in terms of muscle movement. In this paper, we analyze the relationship between sequence-level multidimensional pain measurements and frame-level AUs, as well as an AU-derived pain-related measure, the Prkachin and Solomon Pain Intensity (PSPI). We study methods that learn sequence-level metrics from frame-level metrics. Specifically, we explore an extended multitask learning model to predict VAS from human-labeled AUs with the help of other sequence-level pain measurements during training. This model consists of two parts: a multitask learning neural network that predicts multidimensional pain scores, and an ensemble learning model that linearly combines those scores to best approximate VAS. Starting from human-labeled AUs, the model achieves a mean absolute error (MAE) on VAS of 1.73. It outperforms the provided human sequence-level estimates, which have an MAE of 1.76. Combining our machine learning model with the human estimates gives the best performance, an MAE on VAS of 1.48.
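For reference, the PSPI score mentioned above is computed per frame from FACS AU intensities, and the two-stage design (a multitask regression network followed by a linear ensemble) can be sketched as below. This is a minimal illustration in Python/PyTorch: the layer sizes, the number of auxiliary scores, and the pooling of frame-level AUs into a sequence-level feature vector are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def pspi(au4, au6, au7, au9, au10, au43):
    """Frame-level Prkachin and Solomon Pain Intensity:
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

class MultitaskPainNet(nn.Module):
    """Stage 1: predict several sequence-level pain scores from AU features
    pooled over a video (hypothetical architecture)."""
    def __init__(self, in_dim, n_scores=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # one regression head per sequence-level pain measurement
        self.heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(n_scores)])

    def forward(self, x):
        h = self.backbone(x)
        return torch.cat([head(h) for head in self.heads], dim=-1)

class LinearEnsemble(nn.Module):
    """Stage 2: linearly combine the multidimensional scores to approximate VAS."""
    def __init__(self, n_scores=4):
        super().__init__()
        self.combine = nn.Linear(n_scores, 1)

    def forward(self, scores):
        return self.combine(scores).squeeze(-1)
```

At test time, the ensemble output would be compared against the self-reported VAS with mean absolute error, matching the evaluation reported above.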
Award ID(s):
1817226
PAR ID:
10187072
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)
Page Range / eLocation ID:
559-565
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Speech emotion recognition (SER) is a challenging task due to the limited availability of real-world labeled datasets. Since it is easier to find unlabeled data, the use of self-supervised learning (SSL) has become an attractive alternative. This study proposes new pre-text tasks for SSL to improve SER. While our target application is SER, the proposed pre-text tasks include audio-visual formulations, leveraging the relationship between acoustic and facial features. Our proposed approach introduces three new unimodal and multimodal pre-text tasks that are carefully designed to learn better representations for predicting emotional cues from speech. Task 1 predicts energy variations (high or low) from a speech sequence. Task 2 uses speech features to predict facial activation (high or low) based on facial landmark movements. Task 3 performs a multi-class emotion recognition task on emotional labels obtained from combinations of action units (AUs) detected across a video sequence. We pre-train a network with 60.92 hours of unlabeled data, fine-tuning the model for the downstream SER task. The results on the CREMA-D dataset show that the model pre-trained on the proposed domain-specific pre-text tasks significantly improves the precision (up to 5.1%), recall (up to 4.5%), and F1-scores (up to 4.9%) of our SER system. 
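As a rough illustration of pretext Task 1 described above (predicting high versus low energy variation from a speech sequence), the sketch below derives a pseudo-label from short-time RMS energy and attaches a binary classification head to a shared speech encoder. The frame parameters, threshold, and feature dimension are illustrative assumptions, not values from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def energy_variation_label(waveform, frame_len=400, hop=160, threshold=0.05):
    """Pseudo-label for pretext Task 1: 1 if the utterance shows high energy
    variation across short-time frames, 0 otherwise (threshold is illustrative)."""
    frames = [waveform[i:i + frame_len]
              for i in range(0, len(waveform) - frame_len, hop)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    return int(rms.std() > threshold)

class PretextHead(nn.Module):
    """Binary classification head placed on a shared speech encoder during
    self-supervised pre-training; the encoder is later fine-tuned for SER."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2)

    def forward(self, pooled_features):
        return self.fc(pooled_features)
```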
  2. Existing pain assessment methods in the intensive care unit rely on patient self-report or visual observation by nurses. Patient self-report is subjective and can suffer from poor recall. In the case of non-verbal patients, behavioral pain assessment methods provide limited granularity, are subjective, and put additional burden on already overworked staff. Previous studies have shown the feasibility of autonomous pain expression assessment by detecting Facial Action Units (AUs). However, previous approaches for detecting facial pain AUs have been limited to controlled environments. In this study, for the first time, we collected and annotated a pain-related AU dataset, Pain-ICU, containing 55,085 images from critically ill adult patients. We evaluated the performance of OpenFace, an open-source facial behavior analysis tool, and a trained AU R-CNN model on our Pain-ICU dataset. Variables such as assisted breathing devices, environmental lighting, and patient orientation with respect to the camera make AU detection harder than in controlled settings. Although OpenFace has shown state-of-the-art results in general-purpose AU detection tasks, it could not accurately detect AUs in our Pain-ICU dataset (F1-score of 0.42). To address this problem, we trained the AU R-CNN model on our Pain-ICU dataset, resulting in a satisfactory average F1-score of 0.77. In this study, we show the feasibility of detecting facial pain AUs in uncontrolled ICU settings.
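A minimal sketch of the per-AU evaluation implied above, computing an F1-score for each annotated AU and averaging across AUs to compare detectors on a dataset like Pain-ICU (the matrix layout and variable names are assumptions for illustration):

```python
import numpy as np
from sklearn.metrics import f1_score

def average_au_f1(y_true, y_pred, au_names=None):
    """y_true, y_pred: (n_images, n_aus) binary matrices of annotated vs.
    detected AU occurrences. Returns the mean F1 across AUs."""
    scores = []
    for j in range(y_true.shape[1]):
        s = f1_score(y_true[:, j], y_pred[:, j], zero_division=0)
        if au_names is not None:
            print(f"{au_names[j]}: F1 = {s:.2f}")
        scores.append(s)
    return float(np.mean(scores))
```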
  3. Pain is a personal, subjective experience, and the current gold standard for evaluating it is the Visual Analog Scale (VAS), which is self-reported at the video level. One problem with current automated pain detection systems is that the learned model does not generalize well to unseen subjects. In this work, we propose to improve pain detection in facial videos using individual models and uncertainty estimation. For a new test video, we jointly consider which individual models generalize well overall and which are better matched (and thus likely more accurate) for this particular test video, in order to choose the optimal combination of individual models and obtain the best performance on new test videos. We show on the UNBC-McMaster Shoulder Pain Dataset that our method significantly improves the previous state-of-the-art performance.
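One way to read the combination rule described above: weight each individual (per-subject) model by both its general reliability and its estimated fit to the new test video, then take a weighted average of the predictions. The weighting below is a hypothetical sketch under that reading, not the paper's exact formulation.

```python
import numpy as np

def combine_individual_models(predictions, general_scores, similarity_scores):
    """predictions[i]: pain estimate from individual model i for the test video.
    general_scores[i]: how well model i generalizes across held-out subjects.
    similarity_scores[i]: how well model i matches the test video, e.g. the
    inverse of its predictive uncertainty. Returns a weighted combination."""
    w = np.asarray(general_scores, dtype=float) * np.asarray(similarity_scores, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, predictions))
```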
  4. Le, Khanh N.Q. (Ed.)
    In current clinical settings, pain is typically measured by a patient's self-reported information. This subjective pain assessment results in suboptimal treatment plans, over-prescription of opioids, and drug-seeking behavior among patients. In the present study, we explored machine learning models for automatic, objective pain intensity estimation using inputs from physiological sensors. This study uses the BioVid Heat Pain Dataset. We extracted features from Electrodermal Activity (EDA), Electrocardiogram (ECG), and Electromyogram (EMG) signals collected from study participants subjected to heat pain. We built different machine learning models, including Linear Regression, Support Vector Regression (SVR), Neural Networks, and Extreme Gradient Boosting, for continuous-value pain intensity estimation. We then identified the physiological sensor, feature set, and machine learning model that give the best predictive performance. We found that EDA is the most information-rich sensor for continuous pain intensity prediction. A set of only 3 features from EDA signals using an SVR model gave an average performance of 0.93 mean absolute error (MAE) and 1.16 root mean square error (RMSE) for the subject-independent model, and 0.92 MAE and 1.13 RMSE for the subject-dependent model. The MAE achieved with this signal-feature-model combination is less than 1 unit on the 0-to-4 continuous pain scale, which is smaller than the MAE achieved by the methods reported in the literature. These results demonstrate that it is possible to estimate the pain intensity of a patient using a computationally inexpensive machine learning model with 3 statistical features from the EDA signal, which can be collected from a wrist biosensor. This method paves the way toward a wearable pain measurement device.
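A minimal sketch of the best configuration described above: a support vector regressor fit on three statistical EDA features and scored with MAE and RMSE. The particular features, kernel, and hyperparameters below are assumptions for illustration, not the study's reported settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

def fit_and_score(X_train, y_train, X_test, y_test):
    """X_*: (n_samples, 3) matrices of EDA statistics (e.g. mean, std, range
    per window); y_*: continuous pain intensity on the 0-to-4 scale."""
    model = SVR(kernel="rbf", C=1.0)  # illustrative hyperparameters
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    return mae, rmse
```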
  5. Background: Early-life pain is associated with adverse neurodevelopmental consequences, and current pain assessment practices are discontinuous, inconsistent, and highly dependent on nurses' availability. Furthermore, facial expressions in commonly used pain assessment tools are not associated with brain-based evidence of pain. Purpose: To develop and validate a machine learning (ML) model to classify pain. Methods: In this retrospective validation study, using a human-centered design for embedded machine learning solutions approach and the Neonatal Facial Coding System (NFCS), 6 experienced neonatal intensive care unit (NICU) nurses labeled data from randomly assigned iCOPEvid (infant Classification Of Pain Expression video) sequences of 49 neonates undergoing heel lance. NFCS is the only observational pain assessment tool associated with brain-based evidence of pain. A standard 70% training and 30% testing split of the data was used to train and test several ML models. NICU nurses' interrater reliability was evaluated, and NICU nurses' area under the receiver operating characteristic curve (AUC) was compared with the ML models' AUC. Results: Nurses' weighted mean interrater reliability was 68% (63%-79%) for NFCS tasks, 77.7% (74%-83%) for pain intensity, 48.6% (15%-59%) for frame-level pain classification, and 78.4% (64%-100%) for video-level pain classification, with an AUC of 0.68. The best-performing ML model had 97.7% precision, 98% accuracy, 98.5% recall, and an AUC of 0.98. Implications for Practice and Research: The pain classification ML model's AUC far exceeded that of NICU nurses for identifying neonatal pain. These findings will inform the development of a continuous, unbiased, brain-based, nurse-in-the-loop Pain Recognition Automated Monitoring System (PRAMS) for neonates and infants.
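A sketch of the evaluation protocol described above: a 70/30 train/test split, a classifier trained on nurse-labeled frames, and metrics (including AUC) that can be compared against the nurses' AUC. The classifier choice and feature representation are assumptions for illustration; the study compared several ML models.

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             roc_auc_score)

def evaluate_pain_classifier(X, y):
    """X: frame-level facial features; y: binary pain labels from nurse annotation."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    pred = clf.predict(X_te)
    return {
        "auc": roc_auc_score(y_te, prob),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "accuracy": accuracy_score(y_te, pred),
    }
```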