Abstract
Imaging underwater environments is of great importance to marine sciences, sustainability, climatology, defense, robotics, geology, space exploration, and food security. Despite advances in underwater imaging, most of the ocean and marine organisms remain unobserved and undiscovered. Existing methods for underwater imaging are unsuitable for scalable, long-term, in situ observations because they require tethering for power and communication. Here we describe underwater backscatter imaging, a method for scalable, real-time wireless imaging of underwater environments using fully-submerged battery-free cameras. The cameras power up from harvested acoustic energy, capture color images using ultra-low-power active illumination and a monochrome image sensor, and communicate wirelessly at net-zero-power via acoustic backscatter. We demonstrate wireless battery-free imaging of animals, plants, pollutants, and localization tags in enclosed and open-water environments. The method’s self-sustaining nature makes it desirable for massive, continuous, and long-term ocean deployments with many applications including marine life discovery, submarine surveillance, and underwater climate change monitoring.
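The color-capture scheme described above (a monochrome sensor paired with active illumination) can be sketched as sequential illumination: flash one color of LED at a time and stack the resulting monochrome frames. This is a minimal illustration, not the paper's firmware; `sensor_read` and `set_led` are hypothetical hardware hooks.

```python
import numpy as np

def capture_color(sensor_read, set_led):
    """Sequential-illumination color capture with a monochrome sensor:
    flash red, green, and blue LEDs one at a time and stack the three
    monochrome frames into an (H, W, 3) RGB image. `sensor_read` and
    `set_led` are hypothetical hardware hooks, not a real device API."""
    frames = []
    for color in ("red", "green", "blue"):
        set_led(color, on=True)       # illuminate the scene in one band
        frames.append(sensor_read())  # monochrome frame under that band
        set_led(color, on=False)
    return np.stack(frames, axis=-1)
```

Because only one low-power LED is driven at a time and the sensor itself is monochrome, the energy cost per color image stays far below that of a conventional color sensor with an ISP.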
SeaScan: An Energy-Efficient Underwater Camera for Wireless 3D Color Imaging
We present the design, implementation, and evaluation of SeaScan, an energy-efficient camera for 3D imaging of underwater environments. At the core of SeaScan’s design is a trinocular lensing system, which employs three ultra-low-power monochromatic image sensors to reconstruct color images. Each of the sensors is equipped with a different filter (red, green, and blue) for color capture. The design introduces multiple innovations to enable reconstructing 3D color images from the captured monochromatic ones. This includes an ML-based cross-color alignment architecture to combine the monochromatic images. It also includes a cross-refractive compensation technique that overcomes the distortion of the wide-angle imaging of the low-power CMOS sensors in underwater environments. We built an end-to-end prototype of SeaScan, including color filter integration, 3D reconstruction, compression, and underwater backscatter communication. Our evaluation in real-world underwater environments demonstrates that SeaScan can capture underwater color images with as little as 23.6 mJ, which represents a 37× reduction in energy consumption in comparison to the lowest-energy state-of-the-art underwater imaging system. We also report qualitative and quantitative evaluation of SeaScan’s color reconstruction and demonstrate its success in comparison to multiple potential alternative techniques (both geometric and ML-based) in the literature. SeaScan’s ability to image underwater environments at such low energy opens up important applications in long-term monitoring for ocean climate change, seafood production, and scientific discovery.
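The trinocular color-fusion idea — aligning three filtered monochrome captures and stacking them into one RGB image — can be sketched as follows. This is a minimal illustration that substitutes simple phase correlation for SeaScan’s ML-based cross-color alignment; all function names are hypothetical.

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref
    via phase correlation — a simple stand-in for an ML-based
    cross-color alignment module."""
    f = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def fuse_rgb(red, green, blue):
    """Align the red and blue monochrome captures to the green one,
    then stack the three into a single RGB image."""
    out = np.zeros((*green.shape, 3))
    out[..., 1] = green
    for ch, img in ((0, red), (2, blue)):
        dy, dx = estimate_shift(green, img)
        out[..., ch] = np.roll(img, (-dy, -dx), axis=(0, 1))
    return out
```

A pure translation model ignores the parallax and refractive distortion the paper actually corrects; it only conveys why the three offset sensors must be registered before the channels can be combined.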
- Award ID(s):
- 2308901
- PAR ID:
- 10643281
- Publisher / Repository:
- ACM
- Date Published:
- Page Range / eLocation ID:
- 785 to 799
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The study of high-speed phenomena in underwater environments is pivotal across diverse scientific and engineering domains. This paper introduces a high-speed 3D integral imaging (InIm) based system to 1) visualize high-speed dynamic underwater events, and 2) detect modulated signals for potential optical communication applications. The proposed system is composed of a high-speed camera with a lenslet array-based integral imaging setup to capture and reconstruct 3D images of underwater scenes and detect temporally modulated optical signals. For 3D visualization, we present experiments to capture the elemental images of high-speed underwater events with passive integral imaging, which were then computationally reconstructed to visualize 3D dynamic underwater scenes. We present experiments for 3D imaging and reconstruct the depth map of high-speed underwater dynamic jets of air bubbles, offering depth information and visualizing the 3D movement of these jets. To detect temporally modulated optical signals, we present experiments to demonstrate the ability to capture and reconstruct high-speed underwater modulated optical signals in turbidity. To the best of our knowledge, this is the first report on high-speed underwater 3D integral imaging for 3D visualization and optical signal communication. The findings illustrate the potential of high-speed integral imaging in the visualization and detection of underwater dynamic events, which can be useful in underwater exploration and monitoring.
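The computational reconstruction step in passive integral imaging is commonly a shift-and-average over the elemental images: each elemental image is shifted in proportion to its lenslet offset and the chosen depth, so objects at that depth add coherently while others blur out. A minimal sketch, assuming a square lenslet array and consistent pixel units for `pitch_px`, `z`, and `f` (none of these values come from the paper):

```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, z, f):
    """Shift-and-average integral-imaging reconstruction at depth z.
    `elemental` has shape (rows, cols, H, W): one H×W elemental image
    per lenslet. Each image is shifted by its lenslet offset scaled by
    the magnification f/z, then all are averaged."""
    rows, cols, H, W = elemental.shape
    shift = pitch_px * f / z  # per-lenslet disparity in pixels
    acc = np.zeros((H, W))
    for i in range(rows):
        for j in range(cols):
            dy = int(round((i - rows // 2) * shift))
            dx = int(round((j - cols // 2) * shift))
            acc += np.roll(elemental[i, j], (dy, dx), axis=(0, 1))
    return acc / (rows * cols)
```

Sweeping `z` and keeping, per pixel, the depth with the sharpest response is one simple way to obtain the kind of depth map described above.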
-
Billard, A.; Asfour, T.; Khatib, O. (Ed.)Underwater navigation presents several challenges, including unstructured unknown environments, lack of reliable localization systems (e.g., GPS), and poor visibility. Furthermore, good-quality obstacle detection sensors for underwater robots are scant and costly; and many sensors like RGB-D cameras and LiDAR only work in air. To enable reliable mapless underwater navigation despite these challenges, we propose a low-cost end-to-end navigation system, based on a monocular camera and a fixed single-beam echo-sounder, that efficiently navigates an underwater robot to waypoints while avoiding nearby obstacles. Our proposed method is based on Proximal Policy Optimization (PPO), which takes as input current relative goal information, estimated depth images, echo-sounder readings, and previous executed actions, and outputs 3D robot actions in a normalized scale. End-to-end training was done in simulation, where we adopted domain randomization (varying underwater conditions and visibility) to learn a robust policy against noise and changes in visibility conditions. Experiments in simulation and the real world demonstrated that our proposed method is successful and resilient in navigating a low-cost underwater robot in unknown underwater environments. The implementation is made publicly available at https://github.com/dartmouthrobotics/deeprl-uw-robot-navigation.
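The policy input listed above (relative goal, estimated depth image, echo-sounder reading, previous action) is typically flattened into one normalized observation vector before being fed to the network. A minimal sketch under assumed shapes and normalization choices — the exact composition is the paper's, but these parameters and names are hypothetical:

```python
import numpy as np

def build_observation(goal_rel, depth_img, echo_range, prev_action,
                      max_range=10.0):
    """Assemble the flat observation vector for a PPO policy:
    relative goal, a downsampled estimated-depth image, the
    single-beam echo-sounder range, and the previously executed
    action, all scaled to comparable ranges."""
    # Coarse 8x subsampling keeps the vector small for an MLP policy.
    depth_small = depth_img[::8, ::8].ravel() / max(depth_img.max(), 1e-6)
    return np.concatenate([
        np.asarray(goal_rel, dtype=float) / max_range,   # goal info
        depth_small,                                     # depth image
        [min(echo_range, max_range) / max_range],        # echo-sounder
        np.asarray(prev_action, dtype=float),            # last action
    ])
```

Keeping every component in roughly [-1, 1] is what lets a single policy network weigh camera, sonar, and goal information without one modality dominating the gradients.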
-
Imaging low-light high dynamic range (HDR) scenes in a single capture is challenging for conventional sensors when exposure bracketing is not feasible due to application constraints. Advancements in sensor technology have narrowed the gap, as split-pixel and dual conversion gain (DCG) designs enable single-frame HDR capture and Quanta Image Sensors (QIS) allow counting individual photons at low light. However, removing shot noise from a single HDR image remains a difficult task due to the spatially varying nature of noise. To address this issue, we propose a learnable pipeline with a modular design for processing high bit-depth QIS raw images. Compared to existing algorithmic solutions, our approach offers superior reconstruction performance and greater robustness to variations in illuminance and noise.
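The "spatially varying nature" of shot noise follows from photon counts being Poisson-distributed, so the noise variance tracks the signal level. A classical (non-learned) way to handle this — shown here only as background, not as the paper's pipeline — is a variance-stabilizing transform such as the Anscombe transform:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson-distributed counts to values
    with approximately unit-variance Gaussian noise, so a single
    denoiser can treat shot noise as spatially uniform."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse (an unbiased inverse adds small corrections)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

The approximation is good at moderate-to-high counts; at very low photon counts it degrades, which is part of why learned pipelines outperform fixed transforms on QIS data.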
-
The Bayer pattern is a widely used Color Filter Array (CFA) for digital image sensors, efficiently capturing different light wavelengths on different pixels without the need for a costly ISP pipeline. The resulting single-channel raw Bayer images offer benefits such as spectral wavelength sensitivity and low time latency. However, object detection based on Bayer images has been underexplored due to challenges in human observation and algorithm design caused by the discontinuous color channels in adjacent pixels. To address this issue, we propose the BayerDetect network, an end-to-end deep object detection framework that aims to achieve fast, accurate, and memory-efficient object detection. Unlike RGB color images, where each pixel encodes spectral context from adjacent pixels during ISP color interpolation, raw Bayer images lack spectral context. To enhance the spectral context, the BayerDetect network introduces a spectral frequency attention block, transforming the raw Bayer image pattern to the frequency domain. In object detection, clear object boundaries are essential for accurate bounding box predictions. To handle the challenges posed by alternating spectral channels and mitigate the influence of discontinuous boundaries, the BayerDetect network incorporates a spatial attention scheme that utilizes deformable convolutional kernels in multiple scales to explore spatial context effectively. The extracted convolutional features are then passed through a sparse set of proposal boxes for detection and classification. We conducted experiments on both public and self-collected raw Bayer images, and the results demonstrate the superb performance of the BayerDetect network in object detection tasks.
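The "discontinuous color channels in adjacent pixels" that make raw Bayer data awkward for CNNs are often handled by unpacking the mosaic into a half-resolution multi-channel stack, so each channel is spectrally homogeneous. A minimal sketch assuming an RGGB layout (this is a common preprocessing step, not necessarily BayerDetect's):

```python
import numpy as np

def bayer_to_planes(raw):
    """Split an RGGB Bayer mosaic of shape (H, W) into a 4-channel
    half-resolution stack (R, G1, G2, B), so a CNN sees spectrally
    consistent channels without any ISP color interpolation."""
    return np.stack([
        raw[0::2, 0::2],  # R  (even rows, even cols)
        raw[0::2, 1::2],  # G1 (even rows, odd cols)
        raw[1::2, 0::2],  # G2 (odd rows, even cols)
        raw[1::2, 1::2],  # B  (odd rows, odd cols)
    ], axis=0)
```

This packing preserves every raw sample while removing the pixel-to-pixel spectral alternation; the trade-off is halved spatial resolution per channel, which detection heads must account for when regressing boundaries.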