Title: Texture Feature Extraction From Free-Viewing Scan Paths Using Gabor Filters With Downsampling
Texture-based features computed on eye movement scan paths have recently been proposed for eye movement biometric applications. In that prior work, feature vectors were extracted by computing the mean and standard deviation of the images produced by applying a Gabor filter bank. This paper describes preliminary work exploring an alternative technique for extracting features from Gabor-filtered scan path images: feature vectors are obtained by downsampling the filtered images, thereby retaining structured spatial information within the feature vector. The proposed technique is validated at various downsampling scales on data collected from 94 subjects during free viewing of a fantasy movie trailer. For the best evaluated downsampling parameter, the approach reduces equal error rate (EER) by 11.7% relative to the previously proposed statistical-summary technique.
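As a rough illustration of the two extraction strategies being compared, the sketch below filters a scan path image with a small Gabor bank and builds a feature vector either from per-filter summary statistics (the prior approach) or from downsampled filter responses (the proposed approach). It uses scikit-image; the filter frequencies, orientation count, and 8x8 downsampling scale are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the two Gabor-based feature extraction strategies.
# Parameters are illustrative assumptions, not the authors' configuration.
import numpy as np
from skimage.filters import gabor
from skimage.transform import resize

def summary_features(scanpath_img, frequencies=(0.1, 0.2, 0.4), n_thetas=4):
    """Prior approach: mean and standard deviation of each filter response."""
    feats = []
    for f in frequencies:
        for k in range(n_thetas):
            real, _ = gabor(scanpath_img, frequency=f, theta=k * np.pi / n_thetas)
            feats.extend([real.mean(), real.std()])
    return np.array(feats)

def downsampled_features(scanpath_img, frequencies=(0.1, 0.2, 0.4),
                         n_thetas=4, scale=(8, 8)):
    """Proposed approach: downsample each response, keeping spatial structure."""
    feats = []
    for f in frequencies:
        for k in range(n_thetas):
            real, _ = gabor(scanpath_img, frequency=f, theta=k * np.pi / n_thetas)
            feats.append(resize(real, scale, anti_aliasing=True).ravel())
    return np.concatenate(feats)
```

The downsampled variant trades a longer feature vector for coarse spatial layout that the scalar summaries discard.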
Award ID(s): 1714623
NSF-PAR ID: 10285746
Journal Name: ACM Symposium on Eye Tracking Research and Applications
Page Range / eLocation ID: 1 to 3
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Metric learning is a valuable technique for enabling the ongoing enrollment of new users within biometric systems. While this approach has been heavily employed for other biometric modalities such as facial recognition, applications to eye movements have only recently been explored. This manuscript further investigates the application of metric learning to eye movement biometrics. A set of three multilayer perceptron networks is trained to embed feature vectors describing three classes of eye movements: fixations, saccades, and post-saccadic oscillations. The networks are validated on a dataset containing eye movement traces of 269 subjects recorded during a reading task, and the proposed algorithm is benchmarked against a previously introduced statistical biometric approach. While mean equal error rate (EER) increased versus the benchmark method, the proposed technique demonstrated lower dispersion in EER across the four test folds considered herein.
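    A minimal PyTorch sketch of one such embedding network follows: an MLP that maps an eye movement feature vector to a normalized embedding. The layer sizes are illustrative, and the triplet loss is one common metric-learning objective, assumed here since the abstract does not state the exact loss used.

```python
# Sketch of one embedding MLP per eye movement class, trained with a
# metric-learning objective. Dimensions and the triplet loss are assumptions.
import torch
import torch.nn as nn

class EmbeddingMLP(nn.Module):
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        # Unit-norm embeddings so distances are comparable across samples.
        return nn.functional.normalize(self.net(x), dim=-1)

# One network per event class, as described in the abstract.
models = {c: EmbeddingMLP() for c in ("fixation", "saccade", "pso")}
loss_fn = nn.TripletMarginLoss(margin=0.2)
# loss = loss_fn(model(anchor), model(positive), model(negative))
```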
  2. Li-Jessen, Nicole Yee-Key (Ed.)
    The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant to the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment in neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be used to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the Earable raw EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could be used to distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity-level classification. A total of N = 10 healthy volunteers participated in the study. Each participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set. Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and its performance was evaluated and compared directly against the feature-based classification. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate the mock-PerfO activities. Specifically, Earable was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. Classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as the monitoring of intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
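    The sketch below illustrates the summary-feature pipeline at a high level: per-channel window statistics computed from the bio-sensor signals, fed to an off-the-shelf classifier. The specific features (mean, standard deviation, line length, RMS) and the random-forest model are assumptions for illustration; the study's 161 features and model choices are not detailed in the abstract.

```python
# Illustrative summary-feature pipeline for windowed EEG/EMG/EOG data.
# Feature set and classifier are assumptions, not the study's exact design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def window_features(window):
    """window: (n_samples, n_channels) array of bio-sensor samples."""
    feats = []
    for ch in window.T:
        feats += [
            ch.mean(),                      # mean amplitude
            ch.std(),                       # variability
            np.abs(np.diff(ch)).mean(),     # line length (signal activity)
            np.sqrt((ch ** 2).mean()),      # RMS power
        ]
    return np.array(feats)

# X: stacked window_features vectors; y: mock-PerfO activity labels.
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# per_class_f1 = f1_score(y_test, clf.predict(X_test), average=None)
```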
  3. Abstract

    Electron backscatter diffraction enables the analysis of crystalline phases at large scales (microns), while precession electron diffraction may be used to acquire 4D-STEM data that elucidate structure at nanometric resolution. Both are limited by the probe size and also present difficulties for the generation of large datasets, given the inherent complexity of image acquisition. The latter motivates the application of advanced machine learning techniques, such as deep learning adapted for several tasks, including pattern matching, image segmentation, etc. This research aims to show how Gabor filters provide an appropriate feature extraction technique for electron microscopy images that could obviate the need for large volumes of data to train deep learning models. The work presented herein combines an algorithm based on Gabor filters for feature extraction with an unsupervised learning method to perform particle segmentation of polyhedral metallic nanoparticles and crystal orientation mapping at atomic scale. Experimental results show that Gabor filters are well suited to electron microscopy image analysis, and that even an unsupervised learning algorithm can provide remarkable results in crystal segmentation of individual nanoparticles. This enables application to dynamic analysis of particle transformations recorded with aberration-corrected microscopy, offering new possibilities for analysis at the nanometric scale.
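    A compact sketch of the described two-stage pipeline follows: per-pixel Gabor responses as features, then unsupervised clustering to segment crystalline regions. K-means is used here as one possible unsupervised method, and the filter parameters and cluster count are illustrative assumptions.

```python
# Gabor feature stack + unsupervised clustering for region segmentation.
# k-means and all parameters are illustrative stand-ins.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def gabor_feature_stack(img, frequencies=(0.05, 0.1, 0.2), n_thetas=6):
    """Per-pixel magnitude responses of a small Gabor bank, shape (H, W, n)."""
    responses = []
    for f in frequencies:
        for k in range(n_thetas):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_thetas)
            responses.append(np.sqrt(real ** 2 + imag ** 2))
    return np.stack(responses, axis=-1)

def segment(img, n_clusters=3):
    """Cluster per-pixel Gabor features into n_clusters region labels."""
    feats = gabor_feature_stack(img)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        feats.reshape(-1, feats.shape[-1]))
    return labels.reshape(img.shape)
```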

    A new image steganography method is proposed for data hiding. This method uses least significant bit (LSB) insertion to hide a message in one of the facial features of a given image. The proposed technique chooses an image of a face from a dataset of 8-bit color images of head poses and performs facial recognition on the image to extract the Cartesian coordinates of the eyes, mouth, and nose. A facial feature is chosen at random, and each bit of the binary representation of the message is hidden in the least significant bit of the pixels of the chosen facial feature.
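    The core embedding step can be sketched as follows: write each message bit into the least significant bit of pixels inside the chosen facial-feature region. Landmark extraction is assumed to have already produced the region's coordinates, and the rectangular-region representation is a simplifying assumption.

```python
# LSB embedding/extraction within a facial-feature region (a sketch;
# the region is assumed to come from a prior landmark-detection step).
import numpy as np

def embed_lsb(img, bits, region):
    """img: uint8 array; bits: sequence of 0/1; region: (y0, y1, x0, x1)."""
    out = img.copy()
    y0, y1, x0, x1 = region
    flat = out[y0:y1, x0:x1].ravel()        # copy of the region's pixels
    if len(bits) > flat.size:
        raise ValueError("message too long for the chosen region")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b       # clear LSB, set to message bit
    out[y0:y1, x0:x1] = flat.reshape(out[y0:y1, x0:x1].shape)
    return out

def extract_lsb(stego, region, n_bits):
    """Read n_bits message bits back out of the region's LSBs."""
    y0, y1, x0, x1 = region
    flat = stego[y0:y1, x0:x1].ravel()
    return [int(p & 1) for p in flat[:n_bits]]
```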
  5. Abstract

    The implementation of intelligent software to identify and classify objects and individuals in visual fields is a technology of growing importance to operatives in many fields, including wildlife conservation and management. To non-experts, the methods can be abstruse and the results mystifying. Here, in the context of applying cutting-edge methods to classify wildlife species from camera-trap data, we shed light on the methods themselves and the types of features these methods extract to make efficient identifications and reliable classifications. The current state of the art is to employ convolutional neural networks (CNNs) encoded within deep-learning algorithms. We outline these methods and present results obtained in training a CNN to classify 20 African wildlife species with an overall accuracy of 87.5% from a dataset containing 111,467 images. We demonstrate the application of a gradient-weighted class-activation-mapping (Grad-CAM) procedure to extract the most salient pixels in the final convolution layer, and we show that these pixels highlight features in particular images that in some cases are similar to those used to train humans to identify these species. Further, we used mutual information methods to identify the neurons in the final convolution layer that consistently respond most strongly across a set of images of one particular species; we then interpret the features in the images where the strongest responses occur and present dataset biases that were revealed by these extracted features. We also used hierarchical clustering of feature vectors (i.e., the state of the final fully connected layer in the CNN) associated with each image to produce a visual similarity dendrogram of identified species. Finally, we evaluated the relative unfamiliarity of images that were not part of the training set, contrasting images of the 20 species "known" to our CNN with images of species that were "unknown" to it.
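    For readers unfamiliar with Grad-CAM, the sketch below shows the standard procedure in PyTorch: pool the gradients of the class score over the final convolutional layer's spatial dimensions, use them to weight the corresponding activation maps, and keep the positive part. The model and layer handles are assumptions; this is the generic method, not the authors' exact code.

```python
# Generic Grad-CAM over a chosen convolutional layer (a sketch).
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, class_idx):
    """image: (C, H, W) tensor; conv_layer: the final conv module of model."""
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(
        lambda m, inp, out: acts.update(a=out))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.update(g=gout[0]))
    score = model(image.unsqueeze(0))[0, class_idx]  # class score for image
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    # Pool gradients over space -> one weight per feature map.
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of activation maps, positive part only.
    cam = F.relu((weights * acts["a"]).sum(dim=1))
    return cam / (cam.max() + 1e-8)  # normalized saliency map in [0, 1]
```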
