Title: Improving the Visibility of Underwater Video in Turbid Aqueous Environments
Water turbidity is a frequent impediment to achieving satisfactory imaging clarity in underwater video and inhibits the extraction of information concerning the condition of submerged structures. Ports, rivers, lakes, and inland waterways are notoriously difficult environments for camera inspections, in particular for hull inspections performed in lieu of dry-docking. This complex problem motivated us to study methods to extract cleaner image and video footage from the acquired one. The purpose of this paper is to describe a novel mathematical model for the degradation of images due to underwater turbidity caused by suspended silt particulates and algal organisms, and to propose methods to improve image and video clarity using multiscale non-linear transforms.
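The paper's actual transforms and parameters are not reproduced on this page, but the general idea of multiscale non-linear enhancement can be illustrated with a Laplacian pyramid whose detail bands are amplified by a non-linear gain. The sketch below is purely illustrative: the function name, pyramid depth, gain, and exponent are assumptions, not the paper's method.

```python
# Illustrative sketch: multiscale non-linear detail enhancement via a
# Laplacian pyramid. Assumed parameters (levels, gain, p); not the
# paper's actual transform.
import cv2
import numpy as np

def enhance_multiscale(gray, levels=4, gain=1.5, p=0.7):
    """Boost the detail bands of a Laplacian pyramid with a non-linear gain.

    gray: single-channel uint8 image.
    """
    gray = gray.astype(np.float32) / 255.0
    # Gaussian pyramid (successively downsampled copies).
    gp = [gray]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    # Laplacian pyramid (per-scale detail bands).
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    # Non-linear amplification: with p < 1, small coefficients (fine detail
    # suppressed by scattering) are boosted relatively more than large ones.
    lp = [gain * np.sign(d) * np.abs(d) ** p for d in lp]
    # Reconstruct from the coarsest level upward.
    out = gp[levels]
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(lp[i].shape[1], lp[i].shape[0])) + lp[i]
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```

Because scattering in turbid water attenuates fine detail more strongly than coarse structure, a scale-dependent non-linear gain of this kind is one simple way to restore contrast without amplifying the coarse background.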
Award ID(s):
1720487
PAR ID:
10293508
Author(s) / Creator(s):
Date Published:
Journal Name:
SNAME Maritime Convention
Page Range / eLocation ID:
SNAME-SMC-2020-090
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. It is generally assumed that oceanic effects, such as absorption, scattering, and turbulence, deteriorate underwater optical imaging and/or signal detection. In this paper, we present an interesting observation that slight turbidity may actually improve the performance of underwater optical imaging in the presence of occlusion. We carried out simulations and optical experiments in degraded underwater environments to investigate this hypothesis. For the simulations, the Monte Carlo method was used to analyze imaging performance under varying turbidity and occlusion conditions. Additionally, optical experiments were conducted in various turbid and partially occluded environments. We considered the effects of different parameters such as turbidity level, severity of partial occlusion, number of photons, propagation distance, and imaging modality. The simulation results suggest that, regardless of the variation of the imaging system and degradation parameters, slight turbidity may improve underwater imaging performance under occlusion. The optical experimental results agree with the simulation results: slightly increasing the turbidity level may boost image quality in the scenarios we considered. To the best of our knowledge, this is the first report to theoretically analyze and experimentally validate the phenomenon that turbidity may improve underwater imaging performance in certain degraded environments.
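A minimal Monte Carlo sketch of the kind of forward simulation the abstract describes: photons are launched through a turbid slab toward a detector plane, with an opaque strip at mid-depth acting as a partial occluder. The scattering and absorption coefficients, slab depth, occluder geometry, and isotropic phase function below are illustrative assumptions, not the paper's settings.

```python
# Illustrative Monte Carlo photon transport through a turbid slab with a
# partial occluder. All coefficients and geometry are assumed values.
import numpy as np

rng = np.random.default_rng(0)

def fraction_received(n=50_000, mu_s=0.5, mu_a=0.05, depth=5.0, occ_half=0.5):
    """Fraction of photons reaching z = depth past an opaque strip at z = depth/2."""
    mu_t = mu_s + mu_a
    hits = 0
    for _ in range(n):
        pos = np.zeros(3)
        d = np.array([0.0, 0.0, 1.0])                # launched toward the detector
        while True:
            step = -np.log(1.0 - rng.random()) / mu_t  # free path ~ Exp(mu_t)
            nxt = pos + step * d
            # Opaque strip at mid-depth blocking |x| < occ_half.
            if d[2] > 0 and pos[2] < depth / 2 <= nxt[2]:
                t = (depth / 2 - pos[2]) / d[2]
                if abs(pos[0] + t * d[0]) < occ_half:
                    break                            # absorbed by the occluder
            if nxt[2] >= depth:                      # reached the detector plane
                hits += 1
                break
            if nxt[2] < 0:                           # escaped backward, lost
                break
            pos = nxt
            if rng.random() < mu_a / mu_t:           # absorbed by the medium
                break
            # Scatter into a new direction (isotropic phase function).
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            phi = 2.0 * np.pi * rng.random()
            d = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return hits / n
```

Comparing a near-ballistic setting (e.g. `fraction_received(mu_s=0.01)`) against a moderately turbid one illustrates the reported effect: with almost no scattering, photons aimed at the occluded region are simply lost, whereas slight scattering lets some photons detour around the occluder and still reach the detector.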
  2. Underwater image enhancement and turbidity removal (dehazing) is a very challenging problem, not only due to the sheer variety of environments where it is applicable, but also due to the lack of high-resolution, labelled image data. In this paper, we present iDehaze, a novel two-step deep learning approach for underwater image dehazing and colour correction. We leverage computer graphics to physically model light propagation in underwater conditions: specifically, we construct three-dimensional, photorealistic simulations of underwater environments and use them to gather a large supervised training dataset. We then train a deep convolutional neural network to remove the haze in these images, and a second network to transform the colour space of the dehazed images onto a target domain. Experiments demonstrate that our two-step iDehaze method is substantially more effective at producing high-quality underwater images, achieving state-of-the-art performance on multiple datasets. Code, data, and benchmarks will be open sourced.
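The abstract does not give the network architectures, but the two-step structure (a dehazing network followed by a colour-transform network) can be sketched as below. The tiny fully-convolutional nets are placeholders of my own, not iDehaze's actual models.

```python
# Illustrative two-step pipeline: dehaze, then colour-correct.
# Both networks are minimal stand-ins for the paper's architectures.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class DehazeNet(nn.Module):
    """Stage 1: predict a residual that removes haze/turbidity."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)

class ColorNet(nn.Module):
    """Stage 2: map the dehazed image onto the target colour domain."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return torch.clamp(self.body(x), 0.0, 1.0)

dehaze, color = DehazeNet(), ColorNet()
frame = torch.rand(1, 3, 256, 256)       # stand-in for an underwater frame
restored = color(dehaze(frame))          # two-step inference
```

Decoupling the two stages lets each network solve a simpler, better-posed task: the first only restores contrast and detail, the second only shifts colour statistics toward the target domain.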
  3. Abstract—Current state-of-the-art object tracking methods have largely benefited from the public availability of numerous benchmark datasets. However, the focus has been on open-air imagery and much less on underwater visual data. Inherent underwater distortions, such as color loss, poor contrast, and underexposure, caused by the attenuation of light, refraction, and scattering, greatly affect the visual quality of underwater data, and existing open-air trackers perform less effectively on such data. To help bridge this gap, this article proposes the first comprehensive underwater object tracking (UOT100) benchmark dataset to facilitate the development of tracking algorithms well-suited for underwater environments. The proposed dataset consists of 104 underwater video sequences and more than 74,000 annotated frames derived from both natural and artificial underwater videos, with a great variety of distortions. We benchmark the performance of 20 state-of-the-art object tracking algorithms and further introduce a cascaded residual network-based underwater image enhancement model to improve the tracking accuracy and success rate of trackers. Our experimental results demonstrate the shortcomings of existing tracking algorithms on underwater data and how our generative adversarial network (GAN)-based enhancement model can be used to improve tracking performance. We also evaluate the visual quality of our model's output against existing GAN-based methods using well-accepted quality metrics and demonstrate that our model yields better visual data. Index Terms—Underwater benchmark dataset, underwater generative adversarial network (GAN), underwater image enhancement (UIE), underwater object tracking (UOT).
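Benchmarks of this kind typically score trackers with the standard success metric: per-frame intersection-over-union (IoU) between predicted and ground-truth boxes, thresholded and averaged. A minimal sketch follows; the (x, y, w, h) box format and function names are assumptions for illustration.

```python
# Illustrative success-rate metric for tracker evaluation.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, thresh=0.5):
    """Fraction of frames whose IoU exceeds `thresh`."""
    return float(np.mean([iou(p, g) > thresh for p, g in zip(preds, gts)]))

# The enhancement model's benefit would then be quantified by comparing
# success_rate(track(enhance(video)), gt) against success_rate(track(video), gt).
```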
  4. Image restoration and denoising have been challenging problems in optics and computer vision. There has been active research in the optics and imaging communities to develop robust, data-efficient systems for image restoration tasks. Recently, physics-informed deep learning has received wide interest in scientific problems. In this paper, we introduce a three-dimensional integral-imaging-based, physics-informed, unsupervised CycleGAN (generative adversarial network) algorithm for underwater image descattering and recovery. The system consists of a forward and a backward pass. The base architecture consists of an encoder and a decoder. The encoder takes the clean image along with the depth map and the degradation parameters to produce the degraded image. The decoder takes the degraded image generated by the encoder along with the depth map and produces the clean image along with the degradation parameters. In order to give the input degradation parameters physical significance with respect to a physical model of the degradation, we also incorporate the physical model into the loss function. The proposed model has been assessed on a dataset curated through underwater experiments at various levels of turbidity. In addition to recovering the original image from the degraded one, the proposed algorithm also helps to model the distribution from which the degraded images have been sampled. Furthermore, the proposed three-dimensional integral imaging approach is compared with a traditional deep learning-based approach and a 2D imaging approach under turbid and partially occluded environments. The results suggest the proposed approach is promising, especially under the above experimental conditions.
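The abstract does not state the paper's exact degradation model, but a common physical model for underwater images is attenuation plus backscatter, I = J e^(-beta d) + B (1 - e^(-beta d)), where d is scene depth, beta the per-channel attenuation, and B the veiling light. A physics term in the loss can then penalize disagreement between the network's output and this model. The sketch below assumes that formulation; it is not necessarily the one the paper uses.

```python
# Illustrative physics-based degradation model and physics loss term,
# assuming the common attenuation + backscatter formulation.
import torch

def degrade(clean, depth, beta, backscatter):
    """Apply attenuation and backscatter given a per-pixel depth map.

    clean:       (B, 3, H, W) clean image J in [0, 1]
    depth:       (B, 1, H, W) scene depth d
    beta:        (B, 3, 1, 1) per-channel attenuation coefficients
    backscatter: (B, 3, 1, 1) per-channel veiling light B
    """
    trans = torch.exp(-beta * depth)              # transmission map
    return clean * trans + backscatter * (1.0 - trans)

def physics_loss(degraded, clean, depth, beta, backscatter):
    """Penalize disagreement between the generated image and the model."""
    return torch.mean((degraded - degrade(clean, depth, beta, backscatter)) ** 2)
```

Tying the predicted degradation parameters to such a model is what gives them physical meaning: beta and B are no longer free latent codes but quantities a turbidity measurement could, in principle, verify.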
  5. In this paper, we propose a real-time deep-learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground-truth images with accurate 6D poses underwater, this work utilizes rendered images from an Unreal Engine simulation for training. An image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV; the 6D pose in camera coordinates is then determined using RANSAC-based PnP. Experimental results in underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness of the proposed technique, with the trained system decreasing translation error by 75.5% and orientation error by 64.6% over state-of-the-art methods.
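The final geometric step described above, recovering the 6D pose from the eight predicted 2D corner keypoints via RANSAC-based PnP, can be sketched with OpenCV. The box half-extents and camera intrinsics below are illustrative assumptions, not the paper's values.

```python
# Illustrative pose recovery from 8 corner keypoints via RANSAC-based PnP.
import cv2
import numpy as np

# 8 corners of the AUV's 3D bounding box in its body frame (assumed metres).
half = np.array([0.6, 0.25, 0.25])               # assumed half-extents
signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                  for sy in (-1, 1) for sz in (-1, 1)], dtype=np.float64)
object_pts = signs * half                        # shape (8, 3)

K = np.array([[800.0, 0.0, 320.0],               # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def pose_from_keypoints(image_pts):
    """Recover the 6D pose from the 8 predicted 2D corner keypoints.

    image_pts: (8, 2) array of corner pixels from the keypoint network.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts.astype(np.float64), K, distCoeffs=None,
        reprojectionError=4.0)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)                   # rotation matrix (3, 3)
    return R, tvec                               # pose in camera coordinates
```

Running PnP inside RANSAC makes the pose estimate robust to a mislocalized corner or two, which matters underwater where keypoint detections are noisier than in open air.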