Recognition of Tactile Facial Action Units by Individuals Who Are Blind and Sighted: A Comparative Study
Given that most cues exchanged during a social interaction are nonverbal (e.g., facial expressions, hand gestures, body language), individuals who are blind are at a social disadvantage compared to their sighted peers. Very little work has explored sensory augmentation in the context of social assistive aids for individuals who are blind. The purpose of this study is to explore the following questions related to visual-to-vibrotactile mapping of facial action units (the building blocks of facial expressions): (1) How well can individuals who are blind recognize tactile facial action units compared to those who are sighted? (2) How well can individuals who are blind recognize emotions from tactile facial action units compared to those who are sighted? These questions are explored in a preliminary pilot test using absolute identification tasks in which participants learn and recognize vibrotactile stimuli presented through the Haptic Chair, a custom vibrotactile display embedded in the back of a chair. Study results show that individuals who are blind recognize tactile facial action units as well as those who are sighted do. These results hint at the potential for tactile facial action units to augment and expand access to social interactions for individuals who are blind.
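The abstract does not specify how action units are laid out on the Haptic Chair's display. The snippet below is a minimal, hypothetical sketch of a visual-to-vibrotactile mapping, assuming a 4x4 motor grid on the chair back and an invented AU-to-motor assignment; it is not the study's actual design.

```python
# Hypothetical sketch only: the Haptic Chair's real motor layout and
# AU-to-motor mapping are not given in the abstract. Here, facial action
# units (AUs) activate small sets of motors on an assumed 4x4 grid
# embedded in the back of a chair, indexed as (row, col).
from typing import Dict, List, Tuple

Motor = Tuple[int, int]  # (row, col) on the assumed 4x4 back-mounted grid

AU_TO_MOTORS: Dict[str, List[Motor]] = {
    "AU1":  [(0, 1), (0, 2)],  # inner brow raiser -> upper center
    "AU4":  [(0, 0), (0, 3)],  # brow lowerer -> upper corners
    "AU12": [(3, 0), (3, 3)],  # lip corner puller (smile) -> lower corners
    "AU26": [(3, 1), (3, 2)],  # jaw drop -> lower center
}

def vibration_commands(au: str, duration_ms: int = 500) -> List[dict]:
    """Build simultaneous vibration commands for one action unit."""
    return [{"motor": m, "duration_ms": duration_ms} for m in AU_TO_MOTORS[au]]

if __name__ == "__main__":
    for au in AU_TO_MOTORS:  # render each AU as a tactile pattern
        print(au, vibration_commands(au))
```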
- Award ID(s): 1828010
- PAR ID: 10108655
- Date Published:
- Journal Name: Multimodal Technologies and Interaction
- Volume: 3
- Issue: 2
- ISSN: 2414-4088
- Page Range / eLocation ID: 32
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Facial expressions of emotions by people with visual impairment and blindness via video conferencing
Many people, including those with visual impairment and blindness, use video conferencing tools to meet others. These tools let them share facial expressions, which are considered one of the most important aspects of human communication. This study aims to advance knowledge of how people with visual impairment and blindness share facial expressions of emotion virtually. A convenience sample of 28 adults with visual impairment and blindness joined Zoom video conferencing sessions and were instructed to pose facial expressions of basic human emotions (anger, fear, disgust, happiness, surprise, neutrality, calmness, and sadness), which were video recorded. The facial expressions were analyzed with the Facial Action Coding System (FACS), which encodes the movements of specific facial muscles as Action Units (AUs). The study found a particular set of AUs significantly engaged in expressing each emotion except sadness. Individual differences in AUs were also found, influenced by the participants' visual acuity levels and by emotional characteristics such as valence and arousal levels. These findings are anticipated to serve as a foundation of knowledge for developing emotion-sensing technologies for people with visual impairment and blindness.
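The study's analysis code is not part of this record. As a hedged sketch of the kind of test that could identify AUs "significantly engaged" for a posed emotion, the snippet below runs paired t-tests of per-participant AU intensities for an emotion against the neutral expression; the data are synthetic, and the AU list, intensity scale, and alpha are assumptions.

```python
# Hedged sketch, not the study's actual pipeline: paired t-tests asking
# whether each AU's FACS intensity under a posed emotion exceeds its
# intensity under the neutral expression. Data are synthetic; the AU
# list, 0-5 intensity scale, and alpha are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
au_labels = ["AU1", "AU4", "AU6", "AU12", "AU25"]
n_participants = 28  # matches the convenience sample size above

emotion_scores = rng.uniform(0, 5, size=(n_participants, len(au_labels)))
neutral_scores = rng.uniform(0, 1, size=(n_participants, len(au_labels)))

ALPHA = 0.05
for j, au in enumerate(au_labels):
    t, p = ttest_rel(emotion_scores[:, j], neutral_scores[:, j])
    verdict = "significantly engaged" if (p < ALPHA and t > 0) else "n.s."
    print(f"{au}: t={t:+.2f}, p={p:.4f} -> {verdict}")
```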
-
Scientific disciplines spanning biology, biochemistry, and biophysics involve the study of proteins and their functions. Visualization of protein structures represents a barrier to education and research in these disciplines for students who are blind or visually impaired. Here, we present a software plugin for readily producing variable-height tactile graphics of proteins using the free biomolecular visualization software Visual Molecular Dynamics (VMD) and protein structure data that is publicly available through the Protein Data Bank. Our method also supports interactive tactile visualization of proteins with VMD on electronic refreshable tactile display devices. Employing our method in an academic laboratory has enabled an undergraduate student who is blind to carry out research alongside her sighted peers. By making the study of protein structures accessible to students who are blind or visually impaired, we aim to promote diversity and inclusion in STEM education and research.
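The plugin itself runs inside VMD; purely as a standalone illustration of the underlying idea (projecting a protein's atoms onto a 2D grid whose cell heights drive a tactile print), here is a minimal Python sketch. The file name and grid resolution are placeholders, and the fixed-column parsing follows the standard PDB format rather than the authors' code.

```python
# Standalone sketch of the core idea behind variable-height tactile
# graphics: bin a protein's atom positions into an x/y grid and keep the
# maximum z per cell as an embossing height. This is not the authors'
# VMD plugin; the file name and resolution below are placeholders.
import numpy as np

def pdb_atom_coords(path: str) -> np.ndarray:
    """Parse x, y, z from ATOM/HETATM records (standard PDB columns)."""
    coords = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                coords.append((float(line[30:38]),
                               float(line[38:46]),
                               float(line[46:54])))
    return np.array(coords)

def height_map(coords: np.ndarray, cells: int = 64) -> np.ndarray:
    """Project atoms onto an x/y grid; each cell keeps its highest z."""
    xy = coords[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    idx = ((xy - lo) / (hi - lo + 1e-9) * (cells - 1)).astype(int)
    grid = np.zeros((cells, cells))
    for (i, j), z in zip(idx, coords[:, 2]):
        grid[i, j] = max(grid[i, j], z)
    return grid

# Usage with a placeholder structure file:
# grid = height_map(pdb_atom_coords("protein.pdb"))
```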
-
People convey their spontaneous and voluntary emotions via facial expressions, which play a critical role in social interactions. However, less is known about the mechanisms of spontaneous emotion expression, especially in adults with visual impairment and blindness. Nineteen adults with visual impairment and blindness participated in interviews in which their spontaneous facial expressions were observed and analyzed via the Facial Action Coding System (FACS). We found a set of Action Units primarily engaged in expressing the spontaneous emotions, and these were likely affected by participants' differing characteristics. The results of this study could serve as evidence that adults with visual impairment and blindness show individual differences in spontaneous facial expressions of emotions.
-
For the significant global population of individuals who are blind or visually impaired, spatial awareness during navigation remains a challenge. Tactile Electronic Travel Aids have been designed to assist with the provision of spatiotemporal information, but an intuitive method for mapping this information to patterns on a vibrotactile display remains to be determined. This paper explores the encoding of distance from a navigator to an object using two strategies: absolute and relative. A wearable prototype, the HapBack, is presented with two straps of vertically aligned vibrotactile motors mapped to five distances, with each distance mapped to a row on the display. Absolute patterns emit a single vibration at the row corresponding to a distance, while relative patterns emit a sequence of vibrations starting from the bottom row and ending at the row mapped to that distance. These two encoding strategies are comparatively evaluated for identification accuracy and perceived intuitiveness of mapping among ten adult participants who are blind or visually impaired. No significant difference was found between the two encodings on these metrics, with each showing promising results for application during navigation tasks.
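As a minimal sketch of the two encodings just described, the snippet below maps a distance to one of five rows and emits either a single vibration (absolute) or a bottom-up sweep (relative). The 5 m maximum range and linear distance-to-row mapping are assumptions, not the HapBack's actual parameters.

```python
# Minimal sketch of the two HapBack distance encodings described above,
# for five vertically aligned rows (row 0 = bottom). The 5 m maximum
# range and linear distance-to-row mapping are illustrative assumptions.
from typing import List

N_ROWS = 5  # five mapped distances, one display row each

def row_for_distance(distance_m: float, max_m: float = 5.0) -> int:
    """Map a distance in [0, max_m] to a row index (nearest = bottom)."""
    clamped = min(max(distance_m, 0.0), max_m)
    return min(int(clamped / max_m * N_ROWS), N_ROWS - 1)

def absolute_pattern(distance_m: float) -> List[int]:
    """Absolute encoding: a single vibration at the mapped row."""
    return [row_for_distance(distance_m)]

def relative_pattern(distance_m: float) -> List[int]:
    """Relative encoding: sweep from the bottom row up to the mapped row."""
    return list(range(row_for_distance(distance_m) + 1))

print(absolute_pattern(2.2))  # -> [2]
print(relative_pattern(2.2))  # -> [0, 1, 2]
```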
-
Nonverbal communication, such as body language, facial expressions, and hand gestures, is crucial to human communication, as it conveys more information about emotions and attitudes than spoken words. However, individuals who are blind or have low vision (BLV) may not have access to this method of communication, leading to asymmetry in conversations. Developing systems to recognize nonverbal communication cues (NVCs) for the BLV community would enhance communication and understanding for both parties. This paper focuses on developing a multimodal computer vision system to recognize and detect NVCs. To accomplish our objective, we are collecting a dataset focused on nonverbal communication cues. Here, we propose a baseline model for recognizing NVCs and present initial results on the Aff-Wild2 dataset. Our baseline model achieved an accuracy of 68% and an F1 score of 64% on the Aff-Wild2 validation set, making it comparable with previous state-of-the-art results. Furthermore, we discuss the various challenges associated with NVC recognition as well as the limitations of our current work.
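For readers unfamiliar with the reported figures, the snippet below shows how accuracy and a macro-averaged F1 score of this kind are typically computed with scikit-learn; the labels are synthetic stand-ins, not Aff-Wild2 data.

```python
# Illustrative only: computing accuracy and macro-averaged F1 of the kind
# reported above. The labels below are synthetic stand-ins, not Aff-Wild2
# validation data, and the class scheme is an assumption.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 1, 0, 2, 1, 0]  # ground-truth NVC classes (synthetic)
y_pred = [0, 1, 2, 0, 0, 2, 1, 1]  # model predictions (synthetic)

print("accuracy:", accuracy_score(y_true, y_pred))
# Macro averaging weights every cue class equally, which matters when
# some nonverbal cues are much rarer than others in the data.
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```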