Image restoration aims to recover a clean image from a noisy one. It has long been a topic of interest for researchers in imaging, optical science, and computer vision. As the imaging environment deteriorates, the problem becomes more challenging. Several computational approaches, ranging from statistical to deep learning methods, have been proposed over the years to tackle this problem. Deep learning-based approaches have provided promising image restoration results, but they are purely data driven, and their requirement of large datasets (paired or unpaired) for training may limit their utility for certain physical problems. Recently, physics-informed image restoration techniques have gained importance due to their ability to enhance performance, infer some sense of the degradation process, and quantify the uncertainty in the prediction results. In this paper, we propose a physics-informed deep learning approach with simultaneous parameter estimation using 3D integral imaging and a Bayesian neural network (BNN). An image-to-image mapping architecture is first pretrained to generate a clean image from the degraded image, and is then trained jointly with a Bayesian neural network for simultaneous parameter estimation. For network training, data simulated using the physical model is utilized instead of actual degraded data. The proposed approach has been tested experimentally under degradations such as low illumination and partial occlusion. The recovery results are promising despite training on a simulated dataset. We have tested the performance of the approach under varying illumination levels. Additionally, the proposed approach has been compared against a corresponding 2D imaging-based approach; the results suggest significant improvements over 2D imaging, even when trained on similar datasets. The parameter estimation results also demonstrate the utility of the approach in estimating the degradation parameter, in addition to restoring the image, under the experimental conditions considered.
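As a rough illustration of the uncertainty-aware parameter estimation described above, the sketch below approximates Bayesian inference with Monte Carlo dropout; this is a common stand-in for a full BNN, not necessarily the authors' architecture, and all layer sizes and the scalar degradation-parameter output are assumptions.

```python
# Sketch: MC-dropout approximation to BNN-based degradation-parameter
# estimation. Architecture, layer sizes, and the scalar output are
# hypothetical, not the paper's network.
import torch
import torch.nn as nn

class ParamEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.2),   # left active at test time for MC sampling
            nn.Linear(32, 1),    # scalar degradation parameter
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, image, n_samples=50):
    """Mean and std of the parameter over stochastic forward passes."""
    model.train()  # keep dropout stochastic during inference
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = ParamEstimator()
degraded = torch.rand(1, 1, 64, 64)  # stand-in for a degraded capture
mean, std = predict_with_uncertainty(model, degraded)
```

Keeping dropout active at test time lets the spread of the sampled predictions serve as the uncertainty estimate that a full BNN would provide.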
-
Lensless devices paired with deep learning models have recently shown great promise as a novel approach to biological screening. As a first step toward performing automated lensless cell identification non-invasively, we present a field-portable, compact lensless system that can detect and classify smeared whole blood samples through layers of scattering media. In this system, light from a partially coherent laser diode propagates through the sample, which is positioned between two layers of scattering media, and the resultant opto-biological signature is captured by an image sensor. The signature is transformed via local binary pattern (LBP) transformation, and the resultant LBP images are processed by a convolutional neural network (CNN) to identify the type of red blood cells in the sample. We validated our system in an experimental setup where whole blood samples are placed between two diffusive layers of increasing thickness, and the robustness of the system against variations in the layer thickness is investigated. Several CNN models were considered (AlexNet, VGG-16, and SqueezeNet), individually optimized, and compared against a traditional learning model consisting of principal component decomposition and a support vector machine (PCA + SVM). We found that a two-stage SqueezeNet architecture and VGG-16 provide the highest classification accuracy and Matthews correlation coefficient (MCC) score when applied to images acquired by our lensless system, with SqueezeNet outperforming the other classifiers when the thickness of the scattering layer is the same in the training and test data (accuracy: 97.2%; MCC: 0.96), and VGG-16 proving the most robust option as the thickness of the scattering layers in the test data increases up to three times the value used during training. Altogether, this work provides proof of concept for non-invasive blood sample identification through scattering media with lensless devices using deep learning. Our system has the potential to be a viable diagnostic device because of its low cost, field portability, and high identification accuracy.
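For concreteness, a minimal sketch of the LBP preprocessing step is given below, using scikit-image's local_binary_pattern; the neighborhood parameters (P = 8, R = 1, uniform patterns) are illustrative assumptions rather than the paper's settings.

```python
# Sketch: LBP preprocessing of a captured opto-biological signature.
# P, R, and the "uniform" method are illustrative choices.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_transform(signature, n_points=8, radius=1):
    """Map a grayscale signature in [0, 1] to a normalized LBP image."""
    sig8 = (signature * 255).astype(np.uint8)  # LBP expects integer gray levels
    lbp = local_binary_pattern(sig8, n_points, radius, method="uniform")
    return lbp / lbp.max()  # normalize so the image can feed a CNN

signature = np.random.rand(256, 256)  # stand-in for a captured signature
lbp_image = lbp_transform(signature)
```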
-
We propose polarimetric three-dimensional (3D) integral imaging profilometry and investigate its performance under degraded environmental conditions in terms of the accuracy of object depth acquisition. Integral imaging-based profilometry provides depth information by capturing and utilizing multiple perspectives of the observed object. However, the performance of depth map generation may degrade due to lighting conditions, partial occlusions, and the object's surface material. To improve the accuracy of depth estimation in these conditions, we propose to use polarimetric profilometry. Our experiments indicate that the proposed approach may result in more accurate depth estimation under degraded environmental conditions. We measure a number of metrics to evaluate the performance of the proposed polarimetric profilometry methods for generating the depth map under degraded conditions. Experimental results are presented to evaluate the robustness of the proposed method under degraded environmental conditions and to compare its performance with conventional integral imaging. To the best of our knowledge, this is the first report on polarimetric 3D integral imaging profilometry and its performance under degraded environments.
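As background for how integral imaging yields a depth-resolved reconstruction, the sketch below implements a plain shift-and-sum reconstruction at one candidate depth; the geometry constants are hypothetical, and the polarimetric processing of the proposed method is not reproduced here.

```python
# Sketch: shift-and-sum integral-imaging reconstruction at one depth.
# Geometry (pixel pitch, focal length) is hypothetical; np.roll's
# wraparound is a simplification of proper border handling.
import numpy as np

def reconstruct_at_depth(elemental, pitch_px, focal_px, z):
    """elemental: (K, K, H, W) elemental images; z: reconstruction depth."""
    K, _, H, W = elemental.shape
    out = np.zeros((H, W))
    for i in range(K):
        for j in range(K):
            # Disparity of elemental image (i, j) for a plane at depth z.
            dx = int(round(j * pitch_px * focal_px / z))
            dy = int(round(i * pitch_px * focal_px / z))
            out += np.roll(elemental[i, j], shift=(-dy, -dx), axis=(0, 1))
    return out / (K * K)
```

Objects at depth z add coherently across the shifted elemental images, while objects at other depths (including occlusions) blur out, which is the depth-sectioning effect the profilometry relies on.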
-
The two-point source longitudinal resolution of three-dimensional integral imaging depends on several factors, including the number of sensors, sensor pixel size, pitch between sensors, and the lens point spread function. We assume the two point sources to be resolved if their point spread functions can be resolved in any one of the sensors. Previous studies of integral imaging longitudinal resolution either rely on a geometrical optics formulation or assume the point spread function to be of sub-pixel size, thus neglecting the effect of the lens. These studies also assume both point sources to be in focus in the captured elemental images. More importantly, the previous analyses do not consider the effect of noise. In this manuscript, we use the Gaussian process-based two-point source resolution criterion to overcome these limitations. We compute the circle of confusion to model the out-of-focus blurring effect. The Gaussian process-based two-point source resolution criterion allows us to study the effect of noise on the longitudinal resolution. In the absence of noise, we also present a simple analytical expression for the longitudinal resolution which approximately matches the Gaussian process-based formulation. We also investigate the dependence of the longitudinal resolution on the parallax of the integral imaging system. We present optical experiments to validate our results; the experiments demonstrate agreement with our Gaussian process-based two-point source resolution criterion.
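The circle of confusion mentioned above has a simple closed form under the thin-lens model; the sketch below computes it, with all example values hypothetical and not taken from the paper's experiments.

```python
# Sketch: thin-lens circle of confusion for a point source at distance z
# when the system is focused at z_focus; all example values hypothetical.
def circle_of_confusion(aperture, focal_length, z_focus, z):
    """Blur-circle diameter on the sensor (same units as the inputs)."""
    return (aperture * focal_length * abs(z - z_focus)
            / (z * (z_focus - focal_length)))

# Example: 25 mm aperture, 50 mm lens focused at 1 m, point at 1.2 m.
c = circle_of_confusion(aperture=25.0, focal_length=50.0,
                        z_focus=1000.0, z=1200.0)
```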
-
The study of high-speed phenomena in underwater environments is pivotal across diverse scientific and engineering domains. This paper introduces a high-speed three-dimensional (3D) integral imaging (InIm)-based system to 1) visualize high-speed dynamic underwater events and 2) detect modulated signals for potential optical communication applications. The proposed system is composed of a high-speed camera with a lenslet array-based integral imaging setup to capture and reconstruct 3D images of underwater scenes and to detect temporally modulated optical signals. For 3D visualization, we present experiments to capture the elemental images of high-speed underwater events with passive integral imaging, which were then computationally reconstructed to visualize dynamic 3D underwater scenes. We present experiments for 3D imaging and reconstruct the depth map of high-speed underwater dynamic jets of air bubbles, providing depth information and visualizing the 3D movement of these jets. To detect temporally modulated optical signals, we present experiments demonstrating the ability to capture and reconstruct high-speed underwater modulated optical signals in turbid water. To the best of our knowledge, this is the first report on high-speed underwater 3D integral imaging for 3D visualization and optical signal communication. The findings illustrate the potential of high-speed integral imaging in the visualization and detection of underwater dynamic events, which can be useful in underwater exploration and monitoring.
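As a toy illustration of detecting a temporally modulated signal from reconstructed frames, the sketch below demodulates an on-off-keyed intensity trace from a region of interest; the modulation scheme, region, and threshold rule are assumptions, not the paper's detection pipeline.

```python
# Sketch: recovering an on-off-keyed bit stream from the mean intensity
# of a region of interest across reconstructed frames. The modulation
# scheme and midpoint threshold are assumptions.
import numpy as np

def demodulate_ook(frames, roi):
    """frames: (T, H, W) reconstructions; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    trace = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))  # per-frame intensity
    threshold = 0.5 * (trace.max() + trace.min())      # midpoint threshold
    return (trace > threshold).astype(int)             # recovered bits

frames = np.random.rand(100, 128, 128)  # stand-in for reconstructed frames
bits = demodulate_ook(frames, roi=(60, 70, 60, 70))
```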
-
In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We used numerical simulation to first obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. Afterwards, we generated a noisy PSF by introducing shot noise, read noise, and quantization noise as seen in a real-world camera. Then, we used various statistical measures to examine how the shared information content between the noise-free and noisy PSFs is affected as the camera noise becomes stronger. We ran identical simulations in which the diffuser in the lensless SRPE imaging system is replaced with lenses, for comparison with lens-based imaging. Our results show that SRPE lensless imaging systems are better at retaining information between corresponding noisy and noiseless PSFs under high camera noise than lens-based imaging systems. We also examined how physical parameters of diffusers, such as feature size and feature height variation, affect the noise robustness of an SRPE system. To the best of our knowledge, this is the first report to investigate the noise robustness of SRPE systems as a function of diffuser parameters, and it paves the way for the use of lensless SRPE systems to improve imaging in the presence of image sensor noise.
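A minimal sketch of the camera-noise model named above (shot, read, and quantization noise) follows, together with one simple measure of shared information between the clean and noisy PSFs; the photon budget, read-noise level, and bit depth are illustrative assumptions, not the paper's simulation parameters.

```python
# Sketch: camera-noise model with shot (Poisson), read (Gaussian), and
# quantization noise. Photon budget, read sigma, and bit depth are
# illustrative assumptions.
import numpy as np

def add_camera_noise(psf, photons=1000.0, read_sigma=2.0, bits=8, rng=None):
    """Noisy, quantized version of a noise-free PSF, returned in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    signal = psf / psf.max() * photons                    # expected counts
    noisy = rng.poisson(signal) + rng.normal(0.0, read_sigma, psf.shape)
    levels = 2 ** bits - 1
    return np.clip(np.round(noisy / photons * levels), 0, levels) / levels

def psf_correlation(clean, noisy):
    """One shared-information proxy: Pearson correlation of the two PSFs."""
    return np.corrcoef(clean.ravel(), noisy.ravel())[0, 1]
```

Lowering the photon budget strengthens the shot noise relative to the signal, so sweeping it down and tracking psf_correlation mimics the kind of information-retention comparison described above.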
-
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame. Then, one bounding box can be selected out of many for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions. Integral imaging's depth sectioning ability has also proven beneficial for object detection and visualization, as it captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for the 3D reconstruction of the scene for object classification and utilizes the mutual information between the object's bounding box in this 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object's depth in as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
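To make the search strategy concrete, the sketch below scores a candidate depth by the mutual information between the central 2D perspective and the 3D reconstruction at that depth, and searches depths with scikit-optimize's Gaussian-process optimizer; the depth bounds, histogram binning, and evaluation budget are assumptions, and the sketch does not reproduce the two-reconstructions-per-frame efficiency reported above.

```python
# Sketch: depth estimation by maximizing mutual information (MI) between
# the 2D central perspective and 3D reconstructions, searched with
# scikit-optimize's GP-based optimizer. Depth bounds, bin count, and the
# evaluation budget are assumptions; `reconstruct` is any callable
# mapping (elemental_images, depth) -> 2D reconstruction.
import numpy as np
from skopt import gp_minimize

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def estimate_depth(central_view, elemental, reconstruct,
                   depth_bounds=(200.0, 2000.0)):
    def neg_mi(params):
        return -mutual_information(central_view,
                                   reconstruct(elemental, params[0]))
    result = gp_minimize(neg_mi, [depth_bounds],
                         n_calls=15, n_initial_points=5, random_state=0)
    return result.x[0]  # depth with the highest MI found
```

The MI peaks when the reconstruction plane coincides with the object's true depth, which is why a sample-efficient optimizer can locate it with very few reconstructions.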