BACKGROUND: Facial expressions are critical for conveying emotions and facilitating social interaction. Yet little is known about how accurately sighted individuals recognize emotions facially expressed by people with visual impairments in online communication settings.
OBJECTIVE: This study aimed to investigate sighted individuals' ability to understand facial expressions of six basic emotions in people with visual impairments during Zoom calls. It also aimed to examine whether education on facial expressions specific to people with visual impairments would improve emotion recognition accuracy.
METHODS: Sighted participants viewed video clips of individuals with visual impairments displaying facial expressions and identified the emotions displayed. Next, they received an educational session on facial expressions specific to people with visual impairments, addressing unique characteristics and potential misinterpretations. After the education, participants viewed another set of video clips and again identified the emotions displayed.
RESULTS: Before education, participants frequently misidentified emotions. After education, their accuracy in recognizing emotions improved significantly.
CONCLUSIONS: This study provides evidence that education on the facial expressions of people with visual impairments can significantly enhance sighted individuals' ability to accurately recognize emotions in online settings. This improved accuracy has the potential to foster more inclusive and effective online interactions between people with and without visual disabilities.
Modeling and Synthesizing Idiopathic Facial Paralysis
Over 22 million people worldwide are affected by Parkinson's disease, stroke, and Bell's palsy (BP), conditions that can cause facial paralysis (FP). People with FP have trouble having their expressions understood: both laypersons and clinicians often misinterpret them, which can result in poor social interactions and poor care delivery. One way to address this problem is through better education and training, for which computational tools may prove invaluable. Thus, in this paper, we explore how to build systems that can recognize and synthesize asymmetrical facial expressions. We introduce a novel computational model of asymmetric facial expressions for BP, which we can synthesize on either virtual or robotic patient simulators. We explore this within the context of clinical education, and we built a patient simulator with synthesized FP to help clinicians perceive facial paralysis in patients. We conducted both computational and human-focused evaluations of the model, including feedback from clinical experts. Our results suggest that our BP model is realistic and comparable to the expressions of people with BP. Thus, this work has the potential to provide a practical training tool for clinical learners to better understand the expressions of people with BP. Our work can also help researchers in the facial recognition community explore new methods for asymmetric facial expression analysis and synthesis.
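The paper's actual model is not given in this abstract, but the core idea of an asymmetric expression can be illustrated with a minimal sketch: represent an expression as FACS Action Unit (AU) intensities per face side, and model paralysis as attenuating the affected side. All names, parameters, and values below are hypothetical, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class AsymmetricExpression:
    left: dict[str, float]   # AU name -> intensity in [0, 1], left side
    right: dict[str, float]  # AU name -> intensity in [0, 1], right side

def apply_palsy(expr: AsymmetricExpression, side: str, severity: float) -> AsymmetricExpression:
    """Attenuate AU intensities on the paralyzed side.

    severity in [0, 1]: 0 means no paralysis, 1 means complete paralysis.
    """
    scale = 1.0 - severity
    if side == "left":
        return AsymmetricExpression(
            left={au: i * scale for au, i in expr.left.items()},
            right=dict(expr.right),
        )
    return AsymmetricExpression(
        left=dict(expr.left),
        right={au: i * scale for au, i in expr.right.items()},
    )

# A symmetric smile (AU12, lip corner puller) before and after left-side palsy:
smile = AsymmetricExpression(left={"AU12": 0.8}, right={"AU12": 0.8})
palsied = apply_palsy(smile, side="left", severity=0.75)
print(palsied.left["AU12"], palsied.right["AU12"])
```

A per-side intensity map like this could then drive either a virtual face rig or a robotic simulator, which is why the asymmetry is kept explicit rather than baked into a single expression vector.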
- Award ID(s): 1820085
- PAR ID: 10145848
- Journal Name: 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
- Page Range / eLocation ID: 1 to 8
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Clinical educators have used robotic and virtual patient simulator (RPS) systems for decades to help clinical learners (CLs) gain key skills and avoid future patient harm. These systems can simulate human physiological traits; however, they have static faces and lack a realistic depiction of facial cues, which limits CL engagement and immersion. In this article, we provide a detailed review of existing systems in use and describe how new technologies from the human–robot interaction and intelligent virtual agents communities could push forward the state of the art. We also discuss our own work in this area, including new approaches for facial recognition and synthesis on RPS systems, such as the ability to realistically display patient facial cues like pain and stroke. Finally, we discuss future research directions for the field.
-
Although pain is widely recognized to be a multidimensional experience, it is typically measured by a unidimensional, patient self-reported visual analog scale (VAS). However, self-reported pain is subjective, difficult to interpret, and sometimes impossible to obtain. Machine learning models have been developed to automatically recognize pain at both the frame level and the sequence (or video) level. Many methods use or learn facial action units (AUs), defined by the Facial Action Coding System (FACS) for describing facial expressions in terms of muscle movement. In this paper, we analyze the relationship between sequence-level multidimensional pain measurements and frame-level AUs, as well as an AU-derived pain-related measure, the Prkachin and Solomon Pain Intensity (PSPI). We study methods that learn sequence-level metrics from frame-level metrics. Specifically, we explore an extended multitask learning model to predict VAS from human-labeled AUs with the help of other sequence-level pain measurements during training. This model consists of two parts: a multitask learning neural network that predicts multidimensional pain scores, and an ensemble learning model that linearly combines the multidimensional pain scores to best approximate VAS. Starting from human-labeled AUs, the model achieves a mean absolute error (MAE) on VAS of 1.73, outperforming the provided human sequence-level estimates, which have an MAE of 1.76. Combining our machine learning model with the human estimates gives the best performance: an MAE on VAS of 1.48.
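The second stage of the model described above, linearly combining multidimensional sequence-level pain scores to approximate VAS, can be sketched as an ordinary least-squares fit. The data below is synthetic and the feature names are illustrative; the paper's actual features, training procedure, and weights differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Four hypothetical sequence-level pain measures per video (0-10 scales):
scores = rng.uniform(0, 10, size=(n, 4))

# Synthetic ground truth: VAS as a noisy linear combination of the measures.
true_w = np.array([0.5, 0.2, 0.2, 0.1])
vas = scores @ true_w + rng.normal(0, 0.3, size=n)

# Fit the linear ensemble weights by least squares:
w, *_ = np.linalg.lstsq(scores, vas, rcond=None)
pred = scores @ w
mae = np.abs(pred - vas).mean()
print("learned weights:", np.round(w, 2))
print("MAE:", round(mae, 3))
```

With enough sequences, the recovered weights sit close to the generating ones, which is the appeal of keeping the final combination linear: it stays interpretable as a weighting over the individual pain measures.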
-
Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes
The large amount of time clinicians spend sifting through patient notes and documenting in electronic health records (EHRs) is a leading cause of clinician burnout. By proactively and dynamically retrieving relevant notes during the documentation process, we can reduce the effort required to find relevant patient history. In this work, we conceptualize the use of EHR audit logs as a source of machine learning supervision for note relevance in a specific clinical context, at a particular point in time. Our evaluation focuses on dynamic retrieval in the emergency department, a high-acuity setting with unique patterns of information retrieval and note writing. We show that our methods can achieve an AUC of 0.963 for predicting which notes will be read in an individual note-writing session. We additionally conduct a user study with several clinicians and find that our framework can help clinicians retrieve relevant information more efficiently. Demonstrating that our framework and methods perform well in this demanding setting is a promising proof of concept that they will translate to other clinical settings and data modalities (e.g., labs, medications, imaging).
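The supervision signal described above can be sketched concretely: audit logs record which candidate notes a clinician actually opened during a writing session, and those reads become binary relevance labels for scoring a retrieval model with AUC. The toy scorer and session data below are illustrative, not the paper's method.

```python
def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the fraction of
    (read, unread) note pairs the scorer orders correctly, ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Candidate notes in one writing session:
# (hypothetical recency-based relevance score, was_read_per_audit_log)
session = [
    (0.9, 1),  # recent note, opened by the clinician
    (0.8, 1),
    (0.7, 0),
    (0.4, 1),  # older note that was still opened
    (0.3, 0),
    (0.1, 0),
]
scores = [s for s, _ in session]
labels = [y for _, y in session]
print(round(auc(labels, scores), 3))  # 8 of 9 pairs ordered correctly -> 0.889
```

Because the labels come for free from routine system logging, this kind of supervision scales to many sessions without manual relevance annotation, which is the central point of using audit logs.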
-
Facial expressions of emotions by people with visual impairment and blindness via video conferencing
Many people, including those with visual impairment and blindness, take advantage of video conferencing tools to meet others. Video conferencing tools enable them to share facial expressions, which are considered one of the most important aspects of human communication. This study aims to advance knowledge of how those with visual impairment and blindness share their facial expressions of emotions virtually. This study invited a convenience sample of 28 adults with visual impairment and blindness to Zoom video conferencing sessions. The participants were instructed to pose facial expressions of basic human emotions (anger, fear, disgust, happiness, surprise, neutrality, calmness, and sadness), which were video recorded. The facial expressions were analyzed using the Facial Action Coding System (FACS), which encodes the movement of specific facial muscles as Action Units (AUs). This study found that a particular set of AUs was significantly engaged in expressing each emotion, except for sadness. Individual differences were also found in AUs, influenced by the participants' visual acuity levels and emotional characteristics such as valence and arousal levels. The research findings are anticipated to serve as a foundation of knowledge, contributing to the development of emotion-sensing technologies for those with visual impairment and blindness.
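The kind of analysis described above, finding which AUs are engaged for each posed emotion, can be sketched as aggregating per-clip FACS codings and thresholding mean intensity. The codings and the threshold below are invented for demonstration; the study's data and its significance testing are more involved than a simple mean cutoff.

```python
from collections import defaultdict

# Each clip: (posed emotion, FACS coding as AU name -> intensity in [0, 1]).
clips = [
    ("happiness", {"AU6": 0.7, "AU12": 0.9, "AU4": 0.0}),
    ("happiness", {"AU6": 0.6, "AU12": 0.8, "AU4": 0.1}),
    ("anger",     {"AU4": 0.8, "AU7": 0.6, "AU12": 0.0}),
    ("anger",     {"AU4": 0.9, "AU7": 0.5, "AU12": 0.1}),
]

def engaged_aus(clips, threshold=0.5):
    """Return, per emotion, the AUs whose mean intensity meets the threshold."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for emotion, aus in clips:
        counts[emotion] += 1
        for au, intensity in aus.items():
            sums[emotion][au] += intensity
    return {
        emotion: sorted(au for au, s in au_sums.items()
                        if s / counts[emotion] >= threshold)
        for emotion, au_sums in sums.items()
    }

print(engaged_aus(clips))
```

On this toy data, happiness comes out as AU6 plus AU12 (the classic Duchenne smile combination) and anger as AU4 plus AU7, mirroring the kind of per-emotion AU set the study reports.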