
Title: Demo Abstract: Wireless Glasses for Non-contact Facial Expression Monitoring
Facial expression monitoring is crucial in fields including mental health care, driver assistance systems, and advertising. However, existing systems typically rely on cameras that capture entire faces, or on contact-based bio-signal sensors, which are neither comfortable nor portable. In this demonstration, we present a wireless glasses system for non-contact facial expression monitoring. The system is composed of an IR camera and an embedded processing unit mounted on a 3D-printed glasses frame, together with a novel data processing pipeline that runs across the glasses platform and a computer. Our system performs high-accuracy, real-time facial expression detection and runs for up to 9 hours on a single charge. We will show the fully functioning wearable system in this demonstration.
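A minimal sketch of the kind of split pipeline the abstract describes: the glasses-side unit reduces each IR frame to a small feature vector, and the host classifies expressions from a short window of features. The function names, the three-region split, and the threshold are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def glasses_extract_features(ir_frame: np.ndarray) -> np.ndarray:
    """On-device step: compress a grayscale IR frame into per-region means."""
    # Hypothetical split into brow / eye / cheek horizontal bands.
    regions = np.array_split(ir_frame, 3, axis=0)
    return np.array([r.mean() for r in regions])

def host_classify(window: np.ndarray) -> str:
    """Host step: threshold brow-band variation over a window of frames."""
    brow_activity = window[:, 0].std()  # std of the top band across frames
    return "expression_change" if brow_activity > 5.0 else "neutral"

# Simulated stream: 10 low-resolution IR frames with a brightness jump
# standing in for facial motion.
frames = [np.full((24, 32), 100.0) for _ in range(5)]
frames += [np.full((24, 32), 130.0) for _ in range(5)]
features = np.stack([glasses_extract_features(f) for f in frames])
print(host_classify(features))  # expression_change
```

In a real deployment the feature vectors, not the raw frames, would be sent over the wireless link, which is what makes the glasses-plus-computer split power-efficient.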
Award ID(s):
1943396 1815274 1704899
PAR ID:
10168842
Journal Name:
ACM/IEEE International Conference on Information Processing in Sensor Networks
Page Range / eLocation ID:
367 to 368
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a System for Processing In-situ Bio-signal Data for Emotion Recognition and Sensing (SPIDERS), a low-cost, wireless, glasses-based platform for continuous in-situ monitoring of a user's facial expressions (apparent emotions) and real emotions. We present algorithms that provide four core functions (eye shape and eyebrow movements, pupillometry, zygomaticus muscle movements, and head movements), using the bio-signals acquired from three non-contact sensors (IR camera, proximity sensor, IMU). SPIDERS distinguishes between different classes of apparent and real emotion states based on the aforementioned four bio-signals. We prototype advanced functionalities, including facial expression detection and real emotion classification, with a landmark- and optical-flow-based facial expression detector that leverages changes in a user's eyebrows and eye shapes to achieve up to 83.87% accuracy, as well as a pupillometry-based real emotion classifier with higher accuracy than other low-cost wearable platforms that use sensors requiring skin contact. SPIDERS costs less than $20 to assemble and can continuously run for up to 9 hours before recharging. We demonstrate that SPIDERS is a truly wireless and portable platform with the capability to impact a wide range of applications where knowledge of the user's emotional state is critical.
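An illustrative sketch of the landmark-displacement feature a detector like the one above relies on: compare eyebrow landmark positions between a neutral frame and the current frame, and flag raised eyebrows when the mean vertical displacement exceeds a threshold. The landmark layout and the 3-pixel threshold are assumptions for illustration, not the SPIDERS parameters.

```python
import numpy as np

def eyebrow_raised(neutral_pts: np.ndarray, current_pts: np.ndarray,
                   thresh_px: float = 3.0) -> bool:
    """Each array is (N, 2) of (x, y) eyebrow landmarks. Image y grows
    downward, so a raise appears as a negative change in y."""
    dy = current_pts[:, 1] - neutral_pts[:, 1]
    return dy.mean() < -thresh_px

neutral = np.array([[10.0, 40.0], [20.0, 38.0], [30.0, 40.0]])
raised = neutral - np.array([0.0, 5.0])   # landmarks moved 5 px upward
print(eyebrow_raised(neutral, raised))    # True
print(eyebrow_raised(neutral, neutral))   # False
```

An optical-flow variant would estimate the same displacements densely from pixel motion instead of tracked landmarks, trading compute for robustness when landmark detection fails.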
  2. Though the fusiform is well-established as a key node in the face perception network, its role in facial expression processing remains unclear, due to competing models and discrepant findings. To help resolve this debate, we recorded from 17 subjects with intracranial electrodes implanted in face-sensitive patches of the fusiform. Multivariate classification analysis showed that facial expression information is represented in fusiform activity, in the same regions that represent identity, though with a smaller effect size. Examination of the spatiotemporal dynamics revealed a functional distinction between posterior fusiform and midfusiform expression coding, with posterior fusiform showing an early peak of facial expression sensitivity at around 180 ms after subjects viewed a face and midfusiform showing a later, extended peak between 230 and 460 ms. These results support the hypothesis that the fusiform plays a role in facial expression perception and highlight a qualitative functional distinction between processing in posterior fusiform and midfusiform, with each contributing to temporally segregated stages of expression perception.
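A toy illustration of the time-resolved multivariate classification used above: decode condition labels from multichannel activity separately at each time bin, so accuracy as a function of time reveals when a region carries expression information. The data, the nearest-centroid classifier, and the bin layout are all simplified assumptions, not the study's methods.

```python
import numpy as np

def decode_per_timebin(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """X: (trials, channels, timebins). Leave-one-out nearest-centroid
    decoding at each time bin; returns per-bin accuracy."""
    n_trials, _, n_bins = X.shape
    acc = np.zeros(n_bins)
    for t in range(n_bins):
        correct = 0
        for i in range(n_trials):
            mask = np.arange(n_trials) != i  # hold out trial i
            cents = {c: X[mask & (y == c), :, t].mean(axis=0)
                     for c in np.unique(y)}
            pred = min(cents, key=lambda c: np.linalg.norm(X[i, :, t] - cents[c]))
            correct += pred == y[i]
        acc[t] = correct / n_trials
    return acc

rng = np.random.default_rng(0)
y = np.array([0] * 10 + [1] * 10)
X = rng.normal(0.0, 1.0, (20, 8, 3))
X[y == 1, :, 1] += 2.0  # condition signal present only in bin 1
acc = decode_per_timebin(X, y)
print(acc[1] > max(acc[0], acc[2]))  # True: the informative bin decodes best
```

Running this per electrode patch, rather than per time bin alone, is what lets a study localize early versus late peaks to posterior fusiform versus midfusiform.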
  3. The aim of this demo is to explore the integration of an ultra-low-power analog-to-feature ASIC into an IoT embedded system. The custom integrated circuit, designed to optimize the power consumption of traditional sound-source localization systems, is capable of extracting the time-difference of arrival (TDoA) between 4 microphones while consuming only 78.2 nW. An end-to-end embedded system is presented: a microphone array is connected to the ASIC, which converts the TDoA to digital information and sends it to a host computer. A machine-learning algorithm, running on the host, is then used to detect the bearing of the sound source. During the demonstration, the audience is able to verify the benefits and drawbacks of the custom integrated circuit solution, both from the perspective of the signal-processing performance of the ASIC and in terms of the complexity it introduces to the system's integration.
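A minimal far-field bearing estimate from a single microphone pair, the kind of computation a host-side algorithm would perform on the digitized TDoA values. The 10 cm spacing and the far-field (plane-wave) assumption are illustrative; the demo's actual 4-microphone geometry and learning algorithm are not specified here.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def bearing_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Far-field model: tdoa = spacing * sin(bearing) / c, so
    bearing = asin(c * tdoa / spacing), returned in degrees."""
    s = SPEED_OF_SOUND * tdoa_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp numerical overshoot into asin's domain
    return math.degrees(math.asin(s))

# A source 30 degrees off broadside with 10 cm spacing produces
# tdoa = 0.1 * sin(30 deg) / 343 ≈ 146 microseconds.
tdoa = 0.1 * math.sin(math.radians(30.0)) / SPEED_OF_SOUND
print(round(bearing_from_tdoa(tdoa, 0.1), 1))  # 30.0
```

With four microphones the system gets multiple independent TDoA pairs, which is what allows a learned model to resolve bearing more robustly than a single-pair arcsine.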
  4. This demonstration paper introduces MalViz, a visual analytic tool for analyzing malware behavioral patterns through process monitoring events. The goals of this tool are: 1) to investigate the relationships and dependencies among processes that interact with running malware over a period of time, 2) to support professional security experts in detecting and recognizing unusual signature-based patterns exhibited by running malware, and 3) to help users identify infected system and user libraries that the malware has reached and possibly tampered with. A case study is conducted in a virtual machine environment with a sample of four malware programs. The results of the case study show that the visualization tool offers strong support for experts in software and system analysis and digital forensics to profile and observe malicious behavior and to identify the traces of affected software artifacts.
  5. C-Auth is a novel authentication method for smart glasses that explores the feasibility of authenticating users using the facial contour lines of the nose and cheeks captured by a down-facing camera in the middle of the glasses. To evaluate the system, we conducted a user study with 20 participants across three sessions on different days. Our system correctly authenticates the target participant versus the other 19 participants (attackers) with a true positive rate of 98.0% (SD: 2.96%) and a false positive rate of 4.97% (SD: 2.88%) across all three days. We conclude by discussing current limitations, challenges, and potential future applications for C-Auth.
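How evaluation metrics like those quoted above are typically computed: true positive rate over genuine-user attempts and false positive rate over attacker attempts. The attempt counts below are fabricated toy data chosen to land near the reported rates, not the study's measurements.

```python
def tpr_fpr(genuine_accepted, attacker_accepted):
    """Each argument is a list of booleans: True = the system accepted
    that authentication attempt."""
    tpr = sum(genuine_accepted) / len(genuine_accepted)   # hits / genuine attempts
    fpr = sum(attacker_accepted) / len(attacker_accepted) # breaches / attacks
    return tpr, fpr

genuine = [True] * 49 + [False]        # 49 of 50 genuine attempts accepted
attackers = [True] * 2 + [False] * 38  # 2 of 40 attacker attempts accepted
tpr, fpr = tpr_fpr(genuine, attackers)
print(f"TPR={tpr:.1%} FPR={fpr:.1%}")  # TPR=98.0% FPR=5.0%
```

Reporting a standard deviation alongside each rate, as the study does, simply means these per-session rates were computed separately for each participant or day and then aggregated.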