As eye tracking can reduce the computational burden of virtual reality devices through a technique known as foveated rendering, we believe not only that eye tracking will be implemented in all virtual reality devices, but also that eye tracking biometrics will become the standard method of authentication in virtual reality. We have therefore created a real-time eye-movement-driven authentication system for virtual reality devices. In this work, we describe the architecture of the system and present a specific implementation built on the FOVE head-mounted display. We conclude with an exploration of future research topics to spur thought and discussion.
Aerial Border Surveillance for Search and Rescue Missions Using Eye Tracking Techniques
which can assure the security of the country's border and aid in search and rescue missions. This paper offers a novel "hands-free" tool for aerial border surveillance and search and rescue missions using head-mounted eye tracking technology. The contributions of this work are: i) a gaze-based object classification and recognition framework for aerial border surveillance; ii) a real-time object detection and identification system for non-scanned regions; iii) an investigation of how the scan-path (fixations and non-scanned regions) provided by a mobile eye tracker can help improve the training of professional search and rescue organizations, or even of artificial intelligence robots, for search and rescue missions. The proposed system architecture is further demonstrated using a large-scale dataset of real-life head-mounted eye tracking data.
Keywords: head-mounted eye tracking technology, aerial border surveillance, search and rescue missions
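The scan-path analysis above hinges on separating fixations from non-scanned regions. The abstract does not specify which fixation-detection method is used, so the following is only an illustrative sketch of a standard dispersion-threshold (I-DT) fixation detector; the sample format and both threshold values are assumptions, not details from the paper.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (t_seconds, x_px, y_px) gaze samples, time-ordered.
    Returns a list of fixations as (t_start, t_end, centroid_x, centroid_y).
    Thresholds (30 px dispersion, 100 ms duration) are assumed values.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow a window covering at least the minimum fixation duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        win = samples[i:j + 1]
        xs = [p[1] for p in win]
        ys = [p[2] for p in win]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Extend the window while dispersion stays under the threshold.
            while j + 1 < n:
                xs.append(samples[j + 1][1])
                ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop()
                    ys.pop()
                    break
                j += 1
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            # Window too dispersed: a saccade or non-scanned region; move on.
            i += 1
    return fixations
```

Everything outside the returned fixation windows would then count as non-scanned, which is the split the third contribution analyzes.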
- Award ID(s):
- 1942053
- PAR ID:
- 10309924
- Date Published:
- Journal Name:
- 2018 IEEE International Symposium on Technologies for Homeland Security (HST)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Unmanned Aerial Vehicles (UAVs) are increasingly used by emergency responders to support search-and-rescue operations, medical supplies delivery, fire surveillance, and many other scenarios. At the same time, researchers are investigating usage scenarios in which UAVs are imbued with a greater level of autonomy to provide automated search, surveillance, and delivery capabilities that far exceed current adoption practices. To address this emergent opportunity, we are developing a configurable, multi-user, multi-UAV system for supporting the use of semi-autonomous UAVs in diverse emergency response missions. We present a requirements-driven approach for creating a software product line (SPL) of highly configurable scenarios based on different missions. We focus on the process for eliciting and modeling a family of related use cases, constructing individual feature models and activity diagrams for each scenario, and then merging them into an SPL. We show how the SPL will be implemented through leveraging and augmenting existing features in our DroneResponse system. We further present a configuration tool and demonstrate its ability to generate mission-specific configurations for 20 different use case scenarios.
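The configuration checking at the heart of such an SPL can be sketched in a few lines. All feature names and constraints below are hypothetical, invented purely for illustration; the actual DroneResponse feature model is not given in the abstract. A mission configuration (a set of selected features) is validated against mandatory features, requires-constraints, and excludes-constraints, the basic building blocks of a feature model.

```python
# Hypothetical feature model for illustration only; not the real
# DroneResponse model, which is not described in the abstract.
FEATURES = {"uav", "camera", "thermal_camera", "object_detection",
            "search_pattern", "delivery", "manual_override"}

MANDATORY = {"uav", "camera"}          # must appear in every mission
REQUIRES = {                           # cross-tree requires-constraints
    "object_detection": {"camera"},
    "thermal_camera": {"camera"},
    "search_pattern": {"object_detection"},
}
EXCLUDES = [("delivery", "search_pattern")]  # mutually exclusive pair


def valid_configuration(config):
    """Return True if a mission configuration satisfies the feature model."""
    if not config <= FEATURES:           # only known features allowed
        return False
    if not MANDATORY <= config:          # mandatory features present
        return False
    for feature, needed in REQUIRES.items():
        if feature in config and not needed <= config:
            return False
    for a, b in EXCLUDES:
        if a in config and b in config:
            return False
    return True
```

A configuration tool like the one the authors describe would, in essence, enumerate or check candidate feature sets against constraints of this kind before generating a mission-specific deployment.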
- Human action recognition is an important topic in artificial intelligence with a wide range of applications including surveillance systems, search-and-rescue operations, human-computer interaction, etc. However, most current action recognition systems utilize videos captured by stationary cameras. Another emerging technology is the use of unmanned aerial and ground vehicles (UAV/UGV) for different tasks such as transportation, traffic control, border patrolling, wildlife monitoring, etc. This technology has become more popular in recent years due to its affordability, high maneuverability, and limited human intervention. However, no efficient action recognition algorithm exists for UAV-based monitoring platforms. This paper considers UAV-based video action recognition by addressing the key issues of aerial imaging systems such as camera motion and vibration, low resolution, and tiny human size. In particular, we propose an automated deep learning-based action recognition system comprising three stages: video stabilization using SURF feature selection and the Lucas-Kanade method, human action area detection using faster region-based convolutional neural networks (Faster R-CNN), and action recognition. We propose a novel structure that extends and modifies the InceptionResNet-v2 architecture by combining a 3D CNN architecture and a residual network for action recognition. We achieve an average accuracy of 85.83% for entire-video-level recognition when applying our algorithm to the popular UCF-ARG aerial imaging dataset, which significantly improves upon the state-of-the-art accuracy by a margin of 17%.
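The stabilization stage pairs SURF feature selection with the Lucas-Kanade method. As a rough, self-contained sketch of the Lucas-Kanade step only (using dense image gradients and plain NumPy rather than SURF keypoints or the authors' implementation), the displacement of a patch between two frames can be estimated by solving the least-squares system that arises from the brightness-constancy constraint:

```python
import numpy as np


def gaussian_image(h, w, cy, cx, sigma=4.0):
    """Synthetic test image: a Gaussian blob centered at (cy, cx)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))


def lucas_kanade(I1, I2, y, x, win=9):
    """Estimate the (dx, dy) shift of a window centered at (y, x).

    Brightness constancy gives I2 - I1 ~ -(Ix*dx + Iy*dy), so the
    displacement solves the least-squares system A d = -It over the window.
    """
    Iy, Ix = np.gradient(I1)          # spatial gradients (rows = y, cols = x)
    It = I2 - I1                      # temporal difference
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It[sl].ravel(), rcond=None)
    return d                          # array [dx, dy]
```

For a small synthetic shift of a Gaussian blob this recovers the sub-pixel displacement; real aerial footage would additionally need the coarse-to-fine pyramidal variant and robust feature selection, as the paper's pipeline does.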
- Eye tracking is becoming increasingly available in head-mounted virtual reality displays, and several headsets with integrated eye trackers are already commercially available. The applications of eye tracking in virtual reality are highly diversified and span multiple disciplines. As a result, the number of peer-reviewed publications studying eye tracking applications has surged in recent years. We performed a broad review, comprehensively searching academic literature databases, with the aim of assessing the extent of published research dealing with applications of eye tracking in virtual reality and highlighting challenges, limitations, and areas for future research.
- Simultaneous head and eye tracking has traditionally been confined to the laboratory, and real-world motion tracking has been limited to measuring linear acceleration and angular velocity. Recently available mobile devices such as the Pupil Core eye tracker and the Intel RealSense T265 motion tracker promise to deliver accurate measurements outside the lab. Here, the researchers propose a hardware and software framework that combines both devices into a robust, usable, low-cost head and eye tracking system. The developed software is open source and the required hardware modifications can be 3D printed. The researchers demonstrate the system's ability to measure head and eye movements in two tasks: an eyes-fixed head rotation task eliciting the vestibulo-ocular reflex inside the laboratory, and a natural locomotion task in which a subject walks around a building outside the laboratory. The resultant head and eye movements are discussed, as well as future implementations of this system.
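Combining the two streams ultimately means expressing the eye-in-head gaze direction in world coordinates via the head pose. The sketch below assumes the head pose arrives as a 3x3 rotation matrix and gaze as a unit vector in the head frame; the actual Pupil Core and RealSense T265 APIs, calibration, and time synchronization are not shown here.

```python
import numpy as np


def rotation_about_y(angle_rad):
    """Rotation matrix for a head turn about the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])


def gaze_in_world(head_rotation, gaze_in_head):
    """Rotate an eye-in-head gaze direction into world coordinates."""
    v = head_rotation @ np.asarray(gaze_in_head, dtype=float)
    return v / np.linalg.norm(v)      # keep it a unit direction
```

This also makes the eyes-fixed head-rotation task easy to interpret: during a perfect vestibulo-ocular reflex the eye counter-rotates against the head, so the world-frame gaze direction stays constant even as the head turns.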

