Title: Texture Feature Extraction From Free-Viewing Scan Paths Using Gabor Filters With Downsampling
Texture-based features computed on eye movement scan paths have recently been proposed for eye movement biometric applications. In this prior work, feature vectors were extracted by computing the mean and standard deviation of the images obtained through application of a Gabor filter bank. This paper describes preliminary work exploring an alternative technique for extracting features from Gabor-filtered scan path images. Namely, feature vectors are obtained by downsampling the filtered images, thereby retaining structured spatial information within the feature vector. The proposed technique is validated at various downsampling scales on data collected from 94 subjects during free viewing of a fantasy movie trailer. The approach is demonstrated to reduce equal error rate (EER) versus the previously proposed statistical summary technique by 11.7% for the best evaluated downsampling parameter.
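As an illustration of the downsampling-based extraction described in the abstract, the sketch below applies a small real-valued Gabor filter bank to a toy scan-path image and block-averages each filtered image into a feature vector. All parameters here (kernel size, frequencies, orientations, downsampling factor, image size) are illustrative assumptions, not the settings evaluated in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real-valued Gabor kernel at the given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def downsample(img, factor):
    """Block-average downsampling; coarse spatial structure is retained."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def scanpath_features(scanpath_img,
                      freqs=(0.1, 0.2),
                      thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                      factor=8):
    """Filter with each (freq, theta) pair, downsample, and concatenate."""
    feats = []
    for f in freqs:
        for t in thetas:
            filtered = np.abs(convolve2d(scanpath_img,
                                         gabor_kernel(f, t), mode='same'))
            feats.append(downsample(filtered, factor).ravel())
    return np.concatenate(feats)

img = np.zeros((64, 64))
img[20:44, 30] = 1.0  # toy scan-path trace rendered as an image
fv = scanpath_features(img)
print(fv.shape)  # 8 filters, each reduced to an 8x8 grid -> 512 features
```

The key contrast with the statistical-summary baseline is that each filtered image contributes a grid of local averages rather than a single (mean, standard deviation) pair.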
Lohr, Dillon; Griffith, Henry; Aziz, Samantha; Komogortsev, Oleg
(IEEE International Joint Conference on Biometrics (IJCB))
Metric learning is a valuable technique for enabling the ongoing enrollment of new users within biometric systems. While this approach has been heavily employed for other biometric modalities such as facial recognition, applications to eye movements have only recently been explored. This manuscript further investigates the application of metric learning to eye movement biometrics. A set of three multilayer perceptron networks is trained for embedding feature vectors describing three classes of eye movements: fixations, saccades, and post-saccadic oscillations. The networks are validated on a dataset containing eye movement traces of 269 subjects recorded during a reading task. The proposed algorithm is benchmarked against a previously introduced statistical biometric approach. While mean equal error rate (EER) was increased versus the benchmark method, the proposed technique demonstrated lower dispersion in EER across the four test folds considered herein.
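Both this abstract and the main paper report equal error rate (EER), the operating point at which false accept and false reject rates coincide. A minimal numpy sketch of how EER can be estimated from genuine and impostor similarity scores is shown below; the score distributions are synthetic stand-ins, not data from either study.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate EER: threshold where false accept rate (FAR)
    equals false reject rate (FRR). Higher score = more similar."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))   # closest crossing point
    return (far[idx] + frr[idx]) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 500)   # synthetic same-subject scores
impostor = rng.normal(0.0, 1.0, 500)  # synthetic cross-subject scores
print(equal_error_rate(genuine, impostor))
```

For two unit-variance normal score distributions separated by 2, the EER estimate should land near 0.16; the dispersion of this estimate across test folds is the quantity the abstract highlights.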
Lee, ChaBum; Guo, Xiangyu
(Surface Topography: Metrology and Properties)
Abstract We present a feature-selective segmentation and merging technique to achieve spatially resolved surface profiles of the parts by 3D stereoscopy and strobo-stereoscopy. A pair of vision cameras capture images of the parts at different angles, and 3D stereoscopic images can be reconstructed. Conventional filtering processes of the 3D images involve data loss and lower the spatial resolution of the image. In this study, the 3D reconstructed image was spatially resolved by automatically recognizing and segmenting the features on the raw images, locally and adaptively applying a super-resolution algorithm to the segmented images based on the classified features, and then merging those filtered segments. Here, the features are transformed into masks that selectively separate the features and background images for segmentation. The experimental results were compared with those of conventional filtering methods using Gaussian filters and bandpass filters in terms of spatial frequency and profile accuracy. As a result, the selective feature segmentation technique was capable of spatially resolved 3D stereoscopic imaging while preserving imaging features.
Kubicek, Bernice; Sen_Gupta, Ananya; Kirsteins, Ivars
(The Journal of the Acoustical Society of America)
Michalopolou, Zoi-Heleni
(Ed.)
This paper introduces a feature extraction technique that identifies highly informative features from sonar magnitude spectra for automated target classification. The approach involves creating feature representations through convolution of a two-dimensional Gabor wavelet and acoustic color magnitudes to capture elastic waves. This feature representation contains extracted localized features in the form of Gabor stripes, which are representative of unique targets and are invariant of target aspect angle. Further processing removes non-informative features through threshold-based culling. This paper presents an approach that begins connecting model-based domain knowledge with machine learning techniques to allow interpretation of the extracted features while simultaneously enabling robust target classification. The relative performance of three supervised machine learning classifiers (a support vector machine, a random forest, and a feed-forward neural network) is used to quantitatively demonstrate the representations' informationally rich extracted features. Classifiers are trained and tested with acoustic color spectrograms and features extracted using the algorithm, interpreted as stripes, from two public domain field datasets. An increase in classification performance is generally seen, with the largest being a 47% increase from the random forest classifier trained on the 1–31 kHz PondEx10 data, suggesting relatively small datasets can achieve high classification accuracy if model-cognizant feature extraction is utilized.
Marella, Pranay; Straub, Jeremy; Bernard, Benjamin
(Proceedings of the 2019 International Conference on Computational Science and Computational Intelligence (CSCI))
Wipperman, Matthew F.; Pogoncheff, Galen; Mateo, Katrina F.; Wu, Xuefang; Chen, Yiziying; Levy, Oren; Avbersek, Andreja; Deterding, Robin R.; Hamon, Sara C.; Vu, Tam; et al
(PLOS Digital Health)
Li-Jessen, Nicole Yee-Key
(Ed.)
The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant in the assessment of neuromuscular disorders. As an initial step to developing a digital assessment in neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be utilized to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the Earable raw EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test re-test reliability, and statistical properties; to determine whether features derived from Earable could be used to determine the difference between various facial muscle and eye movement activities; and to determine what features and feature types are important for mock-PerfO activity level classification. A total of N = 10 healthy volunteers participated in the study. Each study participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set.
Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and model performance was correspondingly evaluated and compared directly to feature classification performance. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. Classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as the monitoring of intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
Griffith, Henry K., and Komogortsev, Oleg V. "Texture Feature Extraction From Free-Viewing Scan Paths Using Gabor Filters With Downsampling." ACM Symposium on Eye Tracking Research and Applications. doi:10.1145/3379157.3391423. Retrieved from https://par.nsf.gov/biblio/10285746.