Facial expression monitoring is crucial in fields including mental health care, driver assistance systems, and advertising. However, existing systems typically rely either on cameras that capture the entire face or on contact-based bio-signal sensors, neither of which is comfortable or portable. In this demonstration, we present a wireless glasses system for non-contact facial expression monitoring. The system consists of an IR camera and an embedded processing unit mounted on a 3D-printed glasses frame, together with a novel data processing pipeline that runs across the glasses platform and a computer. Our system performs accurate, real-time facial expression detection and runs for up to 9 hours before recharging. We will show the fully functioning wearable system in this demonstration.
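As a rough illustration of the split glasses-computer pipeline described above, the sketch below shows a host-side receiver for IR frames streamed from the glasses. The port, frame geometry, and length-prefixed wire format are illustrative assumptions, not details taken from the paper.

```python
# Illustrative host-side receiver: the glasses send each IR frame as a 4-byte
# big-endian length prefix followed by raw pixel bytes. Port, frame size, and
# wire format are assumptions, not details taken from the paper.
import socket
import struct

import numpy as np

HOST, PORT = "0.0.0.0", 5005   # assumed
FRAME_W, FRAME_H = 160, 120    # assumed IR frame geometry

def receive_frames(conn):
    """Yield IR frames from a connected glasses client."""
    while True:
        header = conn.recv(4)
        if len(header) < 4:
            return
        (nbytes,) = struct.unpack("!I", header)
        buf = b""
        while len(buf) < nbytes:
            chunk = conn.recv(nbytes - len(buf))
            if not chunk:
                return
            buf += chunk
        yield np.frombuffer(buf, dtype=np.uint8).reshape(FRAME_H, FRAME_W)

with socket.create_server((HOST, PORT)) as server:
    conn, _addr = server.accept()
    for frame in receive_frames(conn):
        pass  # hand `frame` to the computer-side expression detector
```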
                            SPIDERS: Low-Cost Wireless Glasses for Continuous In-Situ Bio-Signal Acquisition and Emotion Recognition
We present a System for Processing In-situ Bio-signal Data for Emotion Recognition and Sensing (SPIDERS), a low-cost, wireless, glasses-based platform for continuous in-situ monitoring of a user's facial expressions (apparent emotions) and real emotions. We present algorithms that provide four core functions (eye shape and eyebrow movements, pupillometry, zygomaticus muscle movements, and head movements) using bio-signals acquired from three non-contact sensors (an IR camera, a proximity sensor, and an IMU). SPIDERS distinguishes between different classes of apparent and real emotion states based on these four bio-signals. We prototype advanced functionalities, including facial expression detection and real emotion classification: a landmark- and optical-flow-based facial expression detector that leverages changes in a user's eyebrows and eye shapes to achieve up to 83.87% accuracy, and a pupillometry-based real emotion classifier with higher accuracy than other low-cost wearable platforms that rely on skin-contact sensors. SPIDERS costs less than $20 to assemble and runs continuously for up to 9 hours before recharging. We demonstrate that SPIDERS is a truly wireless and portable platform with the potential to impact a wide range of applications where knowledge of the user's emotional state is critical.
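The landmark- and optical-flow-based detector named above suggests a pipeline like the following minimal sketch: track eyebrow and eye-contour landmark points across consecutive IR frames with Lucas-Kanade optical flow, then feed the displacement vector to a classifier. The landmark source and the SVM are stand-ins; the paper does not specify these exact components.

```python
# Illustrative sketch: Lucas-Kanade optical flow on eyebrow/eye landmark
# points between consecutive frames; the displacement vector is the feature.
import cv2
import numpy as np
from sklearn.svm import SVC  # stand-in classifier, not specified by the paper

def eyebrow_eye_flow(prev_gray, gray, landmarks):
    """Displacement of eyebrow/eye landmark points between two gray frames.

    landmarks: (N, 2) array of point coordinates in prev_gray (the landmark
    detector itself is assumed to exist elsewhere).
    """
    pts = landmarks.astype(np.float32).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    flow = (new_pts - pts).reshape(-1, 2)
    flow[status.ravel() == 0] = 0.0   # zero out points the tracker lost
    return flow.ravel()               # 2N-dimensional feature vector

# Hypothetical training/inference flow:
# X = np.stack([eyebrow_eye_flow(f0, f1, lm) for f0, f1, lm in labeled_pairs])
# clf = SVC().fit(X, y)
# label = clf.predict(eyebrow_eye_flow(prev, cur, lm).reshape(1, -1))
```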
- PAR ID: 10168834
- Date Published:
- Journal Name: International Conference on Internet-of-Things Design and Implementation
- Page Range / eLocation ID: 27 to 39
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Li-Jessen, Nicole Yee-Key (Ed.) The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant to the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment for neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the raw Earable EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity classification. A total of N = 10 healthy volunteers participated in the study. Each participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set. Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and its performance was evaluated and compared directly against feature-based classification. Model prediction accuracy was used to quantitatively assess the Earable device's ability to classify activities. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable distinguished talking, chewing, and swallowing tasks from other tasks with observed F1 scores above 0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, analysis with summary features outperformed the CNN for activity classification (a sketch of this summary-feature approach appears after this list). We believe Earable may be used to measure cranial muscle activity relevant to neuromuscular disorder assessment. Classification performance on mock-PerfO activities with summary features suggests a strategy for detecting disease-specific signals relative to controls, as well as for monitoring intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
- In this paper, the authors explore different approaches to animating 3D facial emotions, some using manual keyframe animation and some using machine learning. To compare approaches, the authors conducted an experiment consisting of side-by-side comparisons of animation clips generated by skeleton, blendshape, audio-driven, and vision-based-capture facial animation techniques. Ninety-five participants viewed twenty face animation clips of characters expressing five distinct emotions (anger, sadness, happiness, fear, neutral) created using the four facial animation techniques. After viewing each clip, participants were asked to identify the emotions the characters appeared to be conveying and to rate their naturalness. Findings showed that naturalness ratings of the happy emotion produced by the four methods tended to be consistent, whereas naturalness ratings of the fear emotion created with skeletal animation were significantly higher than with the other methods. Recognition of sad and neutral emotions was very low for all methods compared to the other emotions. Overall, the skeleton approach received significantly higher naturalness ratings and a higher recognition rate than the other methods.
- Electrooculography (EOG) is a method to record the electrical potential between the cornea and the retina of the human eye. Despite many applications of EOG in both research and medical diagnosis over many decades, state-of-the-art EOG sensors are still bulky, stiff, and uncomfortable to wear. Since EOG has to be measured around the eye, a prominent area of the face with delicate skin, mechanically and optically imperceptible EOG sensors are highly desirable. Here, we report an imperceptible EOG sensor system based on noninvasive graphene electronic tattoos (GET), which are ultrathin, ultrasoft, transparent, and breathable. The GET EOG sensors can be easily laminated around the eyes without any adhesives, and they impose no constraint on blinking or facial expressions. High-precision EOG with an angular resolution of 4° of eye movement can be recorded by the GET EOG, and eye movement can be accurately interpreted. Imperceptible GET EOG sensors have been successfully applied to a human-robot interface (HRI). To demonstrate the functionality of GET EOG sensors for HRI, we connected them to a wireless transmitter attached to the collar so that eyeball movements could wirelessly control a quadcopter in real time (a control-mapping sketch appears after this list).
- We introduce a novel framework for emotional state detection from facial expressions, targeted at learning environments. Our framework is based on a convolutional deep neural network that classifies people's emotions captured through a webcam. For the classification outcome we adopt Russell's model of core affect, in which any particular emotion can be placed in one of four quadrants: pleasant-active, pleasant-inactive, unpleasant-active, and unpleasant-inactive. We gathered data from various datasets, normalized it, and used it to train the deep learning model. We use the fully-connected layers of the VGG_S network, which was trained on manually labeled human facial expressions. We tested our application by splitting the data 80:20 and retraining the model; the overall test accuracy across all detected emotions was 66%. We have a working application that reports the user's emotional state at about five frames per second on a standard laptop computer with a webcam (an inference-loop sketch appears after this list). The emotional state detector will be integrated into an affective pedagogical agent system, where it will serve as feedback to an intelligent animated educational tutor.
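For the Earable study above, a minimal sketch of the summary-feature approach: compute simple per-channel statistics over windows of EEG/EMG/EOG samples and evaluate a classifier on a held-out split. The four statistics and the random forest are illustrative assumptions; the study extracted 161 summary features and compared several models.

```python
# Illustrative sketch (not the study's actual 161-feature pipeline): summary
# statistics per channel over a bio-signal window, then a held-out evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def summary_features(window):
    """window: (n_channels, n_samples) array of EEG/EMG/EOG samples."""
    return np.concatenate([
        window.mean(axis=1),                           # per-channel mean
        window.std(axis=1),                            # variability
        np.sqrt((window ** 2).mean(axis=1)),           # RMS amplitude
        np.abs(np.diff(window, axis=1)).mean(axis=1),  # mean line length
    ])

def evaluate(windows, labels):
    """Macro F1 of a random forest on a held-out 20% split."""
    X = np.stack([summary_features(w) for w in windows])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")
```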
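For the graphene-tattoo EOG work above, a heavily simplified sketch of how thresholded EOG deflections might map to quadcopter commands. The threshold, channel convention, and `send_command()` transport are all hypothetical; the abstract does not describe the actual control scheme.

```python
# Illustrative mapping from thresholded EOG deflections to quadcopter
# commands. THRESH_UV, the channel convention, and send_command() are all
# hypothetical; the abstract does not describe the actual control scheme.
THRESH_UV = 50.0  # assumed deflection threshold, microvolts

def eog_to_command(h_uv, v_uv):
    """Map one (horizontal, vertical) EOG sample pair to a motion command."""
    if h_uv > THRESH_UV:
        return "yaw_right"
    if h_uv < -THRESH_UV:
        return "yaw_left"
    if v_uv > THRESH_UV:
        return "ascend"
    if v_uv < -THRESH_UV:
        return "descend"
    return "hover"

# for h, v in eog_stream():                  # hypothetical sample source
#     send_command(eog_to_command(h, v))     # hypothetical wireless link
```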
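For the webcam emotion detector above, a minimal sketch of the inference step: preprocess a frame and map a four-way classifier's output to one of Russell's core-affect quadrants. The 224x224 input, the scaling, and the `model` callable are assumptions; the paper builds its classifier on the fully-connected layers of a pretrained VGG_S network.

```python
# Illustrative inference step: resize a webcam frame, run a 4-way classifier,
# and report one of Russell's core-affect quadrants. The 224x224 input and
# the `model` callable are assumptions, not the paper's exact setup.
import cv2
import numpy as np

QUADRANTS = ["pleasant-active", "pleasant-inactive",
             "unpleasant-active", "unpleasant-inactive"]

def classify_quadrant(frame, model):
    """frame: BGR webcam image; model: callable returning 4 quadrant scores."""
    face = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    scores = model(face[None, ...])           # shape (1, 4)
    return QUADRANTS[int(np.argmax(scores))]

# Roughly five frames per second on a laptop, per the abstract:
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     print(classify_quadrant(frame, model))  # `model` loaded elsewhere
```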