

Search for: All records

Award ID contains: 1942053

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Agaian, Sos S. ; Jassim, Sabah A. ; DelMarco, Stephen P. ; Asari, Vijayan K. (Ed.)
    Recognizing the model of a vehicle in natural scene images is an important and challenging task for real-life applications. Current methods perform well under controlled conditions, such as frontal and horizontal view angles or optimal lighting. Their performance, however, decreases significantly in unconstrained environments that may include extreme darkness or over-illuminated conditions. Input images with very low visual quality or very low exposure levels pose further challenges to recognition systems. This paper aims to improve vehicle model recognition accuracy in dark scenes using a deep neural network model. To boost recognition performance, the approach performs joint enhancement and localization of vehicles under non-uniform lighting conditions. Experimental results on several public datasets demonstrate the generality and robustness of our framework: it improves the vehicle detection rate under poor lighting conditions, localizes objects of interest, and yields better vehicle model recognition accuracy on low-quality input data. Grants: This work is supported by the US Department of Transportation, Federal Highway Administration (FHWA), grant contract 693JJ320C000023. Keywords—Image enhancement, vehicle model and (A hypothetical sketch of the enhance-then-detect idea follows below.)
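    The paper's learned enhancement stage is not reproduced here; as a rough, hypothetical illustration of the enhance-then-detect idea, the Python sketch below brightens dark frames with CLAHE (a generic classical technique, not the authors' model) before handing them to a vehicle detector. The file name is a placeholder.

```python
import cv2
import numpy as np

def enhance_low_light(bgr: np.ndarray, clip_limit: float = 3.0) -> np.ndarray:
    """Boost local contrast in dark scenes with CLAHE on the L channel.

    Generic stand-in for a learned enhancement stage, not the paper's method.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    l = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

# Usage: enhance first, then hand the frame to any vehicle detector/recognizer.
frame = cv2.imread("night_scene.jpg")          # hypothetical input image
if frame is not None:
    detector_input = enhance_low_light(frame)  # feed to detection stage
```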
  2. Abstract—Current state-of-the-art object tracking methods have largely benefited from the public availability of numerous benchmark datasets. However, the focus has been on open-air imagery and much less on underwater visual data. Inherent underwater distortions, such as color loss, poor contrast, and underexposure, caused by attenuation of light, refraction, and scattering, greatly affect the visual quality of underwater data, and as such, existing open-air trackers perform less effectively on such data. To help bridge this gap, this article proposes a first comprehensive underwater object tracking (UOT100) benchmark dataset to facilitate the development of tracking algorithms well-suited for underwater environments. The proposed dataset consists of 104 underwater video sequences and more than 74,000 annotated frames derived from both natural and artificial underwater videos, with a great variety of distortions. We benchmark the performance of 20 state-of-the-art object tracking algorithms and further introduce a cascaded residual network-based underwater image enhancement model to improve the tracking accuracy and success rate of trackers. Our experimental results demonstrate the shortcomings of existing tracking algorithms on underwater data and how our generative adversarial network (GAN)-based enhancement model can be used to improve tracking performance. We also evaluate the visual quality of our model's output against existing GAN-based methods using well-accepted quality metrics and demonstrate that our model yields better visual data. Index Terms—Underwater benchmark dataset, underwater generative adversarial network (GAN), underwater image enhancement (UIE), underwater object tracking (UOT). (A rough enhance-then-track sketch follows below.)
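    The paper's GAN-based enhancement model is not reproduced here; the sketch below shows the generic enhance-then-track pattern it motivates, using a crude contrast stretch as a stand-in enhancer and an off-the-shelf OpenCV tracker. The video path and initial bounding box are placeholders, and the tracker factory name varies across OpenCV builds.

```python
import cv2

def enhance(frame):
    """Placeholder for a learned UIE model; a simple brightness/contrast
    stretch stands in for the paper's GAN-based enhancer."""
    return cv2.convertScaleAbs(frame, alpha=1.3, beta=10)

cap = cv2.VideoCapture("underwater_clip.mp4")   # hypothetical sequence
ok, frame = cap.read()
tracker = cv2.TrackerCSRT_create()              # opencv-contrib; newer builds: cv2.TrackerCSRT.create()
tracker.init(enhance(frame), (50, 50, 80, 60))  # (x, y, w, h) of the target

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(enhance(frame)) # track on enhanced frames
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```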
  3. Agaian, Sos S. ; Jassim, Sabah A. ; DelMarco, Stephen P. ; Asari, Vijayan K. (Ed.)
    Neural networks have emerged as the most appropriate method for tackling the classification problem for hyperspectral images (HSI). Convolutional neural networks (CNNs), the current state of the art for various classification tasks, have some limitations in the context of HSI: these models are very susceptible to overfitting because of 1) the limited availability of training samples and 2) the large number of parameters to fine-tune. Furthermore, the learning rates used by CNNs must be small to avoid vanishing gradients, so gradient descent takes small steps to converge and slows down the model runtime. To overcome these drawbacks, a novel quaternion-based hyperspectral image classification network (QHIC Net) is proposed in this paper. The QHIC Net can model both the local dependencies between the spectral channels of a single pixel and the global structural relationships describing the edges or shapes formed by a group of pixels, making it suitable for HSI datasets that are small and diverse. Experimental results on three HSI datasets demonstrate that the QHIC Net performs on par with traditional CNN-based methods for HSI classification with far fewer parameters. Keywords: Classification, deep learning, hyperspectral imaging, spectral-spatial feature learning. (A toy quaternion-convolution sketch follows below.)
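    The abstract does not include code; the PyTorch sketch below shows the standard quaternion-convolution construction (Hamilton product over channel groups) that quaternion networks of this kind typically build on. It is a toy layer under assumed shapes, not the QHIC Net layer itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuaternionConv2d(nn.Module):
    """Toy quaternion 2-D convolution: channels are grouped into quaternion
    components (r, i, j, k) and mixed with the Hamilton product, so one set
    of weights is shared across the four components -- this sharing is what
    reduces the parameter count relative to a plain convolution."""

    def __init__(self, in_q, out_q, kernel_size, padding=0):
        super().__init__()
        shape = (out_q, in_q, kernel_size, kernel_size)
        self.r = nn.Parameter(torch.randn(shape) * 0.1)
        self.i = nn.Parameter(torch.randn(shape) * 0.1)
        self.j = nn.Parameter(torch.randn(shape) * 0.1)
        self.k = nn.Parameter(torch.randn(shape) * 0.1)
        self.padding = padding

    def forward(self, x):
        # x: (batch, 4*in_q, H, W) laid out as [r | i | j | k] blocks.
        xr, xi, xj, xk = torch.chunk(x, 4, dim=1)
        r, i, j, k = self.r, self.i, self.j, self.k
        conv = lambda t, w: F.conv2d(t, w, padding=self.padding)
        # Hamilton product expanded component-wise:
        yr = conv(xr, r) - conv(xi, i) - conv(xj, j) - conv(xk, k)
        yi = conv(xr, i) + conv(xi, r) + conv(xj, k) - conv(xk, j)
        yj = conv(xr, j) - conv(xi, k) + conv(xj, r) + conv(xk, i)
        yk = conv(xr, k) + conv(xi, j) - conv(xj, i) + conv(xk, r)
        return torch.cat([yr, yi, yj, yk], dim=1)

# Usage: 8 quaternion input channels (32 real), 4 quaternion outputs.
layer = QuaternionConv2d(in_q=8, out_q=4, kernel_size=3, padding=1)
out = layer(torch.randn(2, 32, 16, 16))   # -> (2, 16, 16, 16)
```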
  4. Abstract—Navigation, the ability to relocate from one place to another, is a critical skill for any individual or group. Navigating safely across unknown environments is a critical factor in determining the success of a mission. While there is an existing body of applications in the field of land navigation, they primarily rely on GPS-enabled technologies, and there is limited research on Augmented Reality (AR) as a tool for navigation in unknown environments. This research proposes to develop an AR system that provides 3-dimensional (3D) navigational insights in unfamiliar environments. This can be accomplished by generating 3D terrestrial maps that leverage Synthetic Aperture Radar (SAR) data, Google Earth imagery, and sparse knowledge of GPS coordinates of the region. The 3D terrestrial images are then converted to navigational meshes so that path-finding algorithms can operate on them. The proposed method can be used to create an iteratively refined 3D landscape knowledge database that can assist personnel in navigating novel environments or assist in mission planning for other operations. It can also be used to help plan and assess the best strategic vantage points in the landscape. Keywords—navigation, three-dimensional, image processing, mesh, augmented reality, mixed reality, SAR, GPS. (A simplified pathfinding sketch follows below.)
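    As a simplified, hypothetical stand-in for path-finding over the navigational meshes described above, the sketch below runs A* on a 2-D heightmap grid, treating a cell transition as walkable when the elevation change is small. All names and thresholds are illustrative.

```python
import heapq
import numpy as np

def astar_on_heightmap(height, start, goal, max_step=1.0):
    """A* over a terrain grid: a neighbour is walkable when the elevation
    change is at most `max_step`. A grid is a crude stand-in for a navmesh.
    Returns the path as a list of cells ([goal] alone if unreachable)."""
    rows, cols = height.shape
    frontier = [(0.0, start)]
    came_from, cost = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if abs(height[nxt] - height[cur]) > max_step:
                continue  # too steep to traverse
            new_cost = cost[cur] + 1.0
            if new_cost < cost.get(nxt, float("inf")):
                cost[nxt] = new_cost
                # Manhattan-distance heuristic keeps A* admissible here.
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (new_cost + h, nxt))
                came_from[nxt] = cur
    path, node = [], goal
    while node is not None:      # walk back from goal to start
        path.append(node)
        node = came_from.get(node)
    return path[::-1]

terrain = np.random.rand(64, 64) * 3.0   # hypothetical SAR-derived heights
route = astar_on_heightmap(terrain, (0, 0), (63, 63))
```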
  5. Agaian, Sos S. ; DelMarco, Stephen P. ; Asari, Vijayan K. (Ed.)
    High-accuracy localization and user position tracking are critical to improving the quality of augmented reality environments. The biggest challenge facing developers is localizing the user based on the visible surroundings. Current solutions rely on the Global Positioning System (GPS) for tracking and orientation, but GPS receivers have an accuracy of about 10 to 30 meters, which is not accurate enough for augmented reality, which needs precision measured in millimeters or smaller. This paper describes the development and demonstration of a head-worn augmented reality (AR) based vision-aided indoor navigation system that localizes the user without relying on a GPS signal. Commercially available augmented reality headsets allow individuals to capture their field of vision in real time using the front-facing camera. Utilizing captured image features as navigation-related landmarks allows localizing the user in the absence of a GPS signal. The proposed method involves three steps: detailed front-scene camera data are collected and processed for landmark recognition; the individual's current position is detected and located using feature matching; and arrows are displayed to indicate areas that require more data collection, if needed. Computer simulations indicate that the proposed augmented reality-based vision-aided indoor navigation system can provide precise simultaneous localization and mapping in a GPS-denied environment. Keywords: Augmented reality, navigation, GPS, HoloLens, vision, positioning system, localization. (A feature-matching sketch follows below.)
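    The paper's feature-matching localizer is not published here; as a rough illustration of the idea, the sketch below matches ORB descriptors from a hypothetical headset frame against a small landmark database using OpenCV. File names and the distance threshold are assumptions.

```python
import cv2

# Hypothetical landmark database: one reference image per surveyed spot.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_landmark(query_path, landmark_paths):
    """Return the landmark whose ORB descriptors best match the headset
    camera frame -- a generic stand-in for the paper's localizer."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(query, None)
    scores = {}
    for path in landmark_paths:
        ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, r_desc = orb.detectAndCompute(ref, None)
        if q_desc is None or r_desc is None:
            continue
        matches = matcher.match(q_desc, r_desc)
        # Count "good" matches below an assumed distance threshold.
        scores[path] = sum(1 for m in matches if m.distance < 40)
    return max(scores, key=scores.get) if scores else None

# place = best_landmark("frame.png", ["atrium.png", "hall_2f.png"])
```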
  6. … which can assure the security of the country's border and aid in search and rescue missions. This paper offers a novel "hands-free" tool for aerial border surveillance and search and rescue missions using head-mounted eye-tracking technology. The contributions of this work are: i) a gaze-based aerial border surveillance object classification and recognition framework; ii) a real-time object detection and identification system for non-scanned regions; iii) an investigation of how the scan-path (fixation and non-scanned regions) provided by a mobile eye tracker can help improve the training of professional search and rescue organizations, or even artificial intelligence robots, for search and rescue missions. The proposed system architecture is further demonstrated using a dataset of large-scale real-life head-mounted eye-tracking data. Keywords—Head-mounted eye tracking technology, aerial border surveillance, search and rescue missions. (A fixation-detection sketch follows below.)
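    The abstract builds on fixations and scan-paths; the sketch below implements a common dispersion-threshold (I-DT) fixation detector as a generic baseline, not necessarily the method used in the paper. The thresholds and the synthetic gaze trace are assumptions.

```python
import numpy as np

def idt_fixations(gaze, max_disp=30.0, min_samples=6):
    """Dispersion-threshold (I-DT) fixation detection on (x, y) gaze samples.

    gaze: (N, 2) array of screen coordinates in pixels.
    Returns a list of (start_idx, end_idx) fixation windows.
    """
    fixations, start, n = [], 0, len(gaze)
    while start < n - min_samples:
        end = start + min_samples
        window = gaze[start:end]
        disp = np.ptp(window[:, 0]) + np.ptp(window[:, 1])
        if disp <= max_disp:
            # Grow the window while dispersion stays under the threshold.
            while end < n:
                grown = gaze[start:end + 1]
                if np.ptp(grown[:, 0]) + np.ptp(grown[:, 1]) > max_disp:
                    break
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

samples = np.cumsum(np.random.randn(500, 2) * 2, axis=0)  # synthetic gaze trace
print(idt_fixations(samples))
```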
  7. Agaian, Sos S. ; Jassim, Sabah A. (Ed.)
    Face recognition technologies have been in high demand in the past few decades due to the increase in human-computer interaction. Face recognition is also one of the essential components in interpreting human emotions, intentions, and facial expressions for smart environments. This non-intrusive biometric authentication approach relies on identifying unique facial features and matching similar structures for identification and recognition. Application areas of facial recognition systems include homeland and border security, identification for law enforcement, access control to secure networks, authentication for online banking, and video surveillance. While it is easy for humans to recognize faces under varying illumination conditions, it remains a challenging task in computer vision. Non-uniform illumination and uncontrolled operating environments can impair the performance of visual-spectrum-based recognition systems. To address these difficulties, a novel Anisotropic Gradient Facial Recognition (AGFR) system capable of autonomous thermal-infrared-to-visible face recognition is proposed. The main contributions of this paper are a framework for a thermal/fused-thermal-visible to visible face recognition system and a novel human-visual-system-inspired thermal-visible image fusion technique. Extensive computer simulations using the CARL, IRIS, AT&T, Yale, and Yale-B databases demonstrate the efficiency, accuracy, and robustness of the AGFR system. Keywords: Infrared thermal to visible facial recognition, anisotropic gradient, visible-to-visible face recognition, non-uniform illumination face recognition, thermal and visible face fusion method. (A generic fusion sketch follows below.)
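    The AGFR fusion rule itself is not reproduced here; as a generic illustration of thermal-visible fusion, the sketch below weights each pixel toward the modality with more local gradient energy. It assumes pre-registered grayscale inputs and placeholder file names.

```python
import cv2
import numpy as np

def fuse_thermal_visible(thermal, visible, sigma=5):
    """Fuse registered grayscale thermal and visible images by weighting
    each pixel toward the modality with more local gradient energy.
    A generic illustration only -- not the AGFR fusion rule."""
    def energy(img):
        g = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
        return cv2.GaussianBlur(np.abs(g), (0, 0), sigma)

    et, ev = energy(thermal), energy(visible)
    w = et / (et + ev + 1e-6)          # per-pixel weight for the thermal band
    fused = w * thermal.astype(np.float32) + (1 - w) * visible.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage with two pre-registered images of the same face (placeholder names):
# fused = fuse_thermal_visible(cv2.imread("t.png", 0), cv2.imread("v.png", 0))
```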