Single-image depth estimation is a critical problem for robot vision, augmented reality, and many other applications where an image sequence is not available. Self-supervised single-image depth estimation models aim to predict an accurate disparity map from a single image, without ground-truth supervision or a stereo image pair at inference time. Compared with direct single-image depth estimation, a single-image stereo algorithm can generate depth from a different camera perspective. In this paper, we propose a novel architecture that infers accurate disparity by leveraging both a spectral-consistency-based learning model and a view-prediction-based stereo reconstruction algorithm. The direct spectral-consistency-based method avoids false positive matching in smooth regions, while single-image stereo preserves more distinct boundaries from another camera perspective. By learning confidence maps and designing a fusion strategy, the disparities from the two approaches can be effectively fused into a refined disparity. Extensive experiments and ablations indicate that our method exploits the advantages of both spectral consistency and view prediction, especially in constraining object boundaries and correcting wrongly predicted regions.
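The abstract does not detail the fusion strategy, but one common way to combine two disparity maps with learned confidences is a per-pixel softmax weighting. The sketch below is only an illustration of that idea under our own assumptions; the function name and the softmax choice are hypothetical, not the paper's method.

```python
import numpy as np

def fuse_disparities(d_spec, d_stereo, c_spec, c_stereo):
    """Hypothetical confidence-weighted fusion of two disparity maps.
    A per-pixel softmax over the two confidence maps yields mixing
    weights, so each pixel takes more of whichever branch is more
    certain there (e.g. spectral consistency in smooth regions,
    view-prediction stereo near object boundaries)."""
    e1, e2 = np.exp(c_spec), np.exp(c_stereo)
    w1 = e1 / (e1 + e2)          # weight of the spectral-consistency branch
    return w1 * d_spec + (1.0 - w1) * d_stereo
```

With equal confidences this reduces to a plain average; as one confidence map dominates, the fused disparity converges to that branch's prediction.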
A comparison of correlation-based plenoptic depth estimation techniques for digital image correlation
This paper presents a comparison of several correlation-based methodologies for depth estimation using a single plenoptic camera. The plenoptic camera offers a distinct advantage by enabling the generation of many perspectives over a relatively small baseline. Unlike stereo reconstruction, which relies on a pair of images for depth estimation, these multiple perspectives are utilized collectively in two distinct approaches for depth estimation. The proposed methods are evaluated using synthetic and experimental data to assess their accuracy. Preliminary results indicate the robust performance of both methods, each exhibiting different strengths under varying conditions. Future work will assess how these methods perform in the context of a simultaneous DIC and 3D PIV measurement using a single plenoptic camera.
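The paper does not spell out its correlation methods here, but a minimal sketch of one standard correlation-based approach can illustrate the idea: for each candidate disparity, sample every perspective at a position shifted in proportion to its offset from the central view, and keep the disparity whose shifted patches best correlate with the central patch. All names, the integer-shift simplification, and the view layout below are our assumptions, not the paper's implementation.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def plenoptic_depth(views, offsets, disparities, y, x, win=4):
    """Estimate disparity at pixel (y, x) of the central view.
    views   : dict mapping (du, dv) perspective offset -> 2D float image
    offsets : list of (du, dv) keys; (0, 0) is the reference view
    For each candidate disparity d, every other perspective is sampled at
    a position shifted by d times its offset; the disparity whose patches
    agree best with the central patch (summed NCC) wins."""
    ref = views[(0, 0)][y - win:y + win + 1, x - win:x + win + 1]
    best_d, best_score = disparities[0], -np.inf
    for d in disparities:
        score = 0.0
        for (du, dv) in offsets:
            if (du, dv) == (0, 0):
                continue
            yy = y + int(round(dv * d))   # integer shift for simplicity;
            xx = x + int(round(du * d))   # real code would interpolate
            patch = views[(du, dv)][yy - win:yy + win + 1, xx - win:xx + win + 1]
            score += ncc(ref, patch)
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```

Because many perspectives vote at once, a single ambiguous pairwise match matters less than in two-view stereo, which is one way to read the "collective" use of perspectives described above.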
- Award ID(s):
- 2145189
- PAR ID:
- 10438980
- Date Published:
- Journal Name:
- Proceedings of the 15th International Symposium on Particle Image Velocimetry
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Visual odometry (VO) and single-image depth estimation are critical for robot vision, 3D reconstruction, and camera pose estimation, with applications in autonomous driving, map building, augmented reality, and many other areas. Various supervised learning models have recently been proposed to train VO or single-image depth estimation frameworks for each targeted scene to improve performance. However, little effort has been made to learn these separate tasks together without requiring the collection of a significant number of labels. This paper proposes a novel unsupervised learning approach to perform VO and single-image depth estimation simultaneously. In our framework, each task benefits from the other through joint learning. We correlate the two tasks by enforcing depth consistency between VO and single-image depth estimation. Based on the single-image depth estimation, we can resolve the most common and challenging scaling issue of monocular VO. Meanwhile, by training on a sequence of images, VO can enhance single-image depth estimation accuracy. The effectiveness of our proposed method is demonstrated through extensive experiments against current state-of-the-art methods on benchmark datasets.
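The depth-consistency coupling described above can be made concrete with a simple sketch. Monocular VO recovers depth only up to an unknown global scale, so one plausible formulation (our assumption, not necessarily the paper's loss) first aligns the two predictions with a median-ratio scale factor and then penalizes the remaining disagreement:

```python
import numpy as np

def depth_consistency_loss(mono_depth, vo_depth, eps=1e-8):
    """Hypothetical depth-consistency term between a single-image depth
    map and an up-to-scale depth map triangulated from VO. A median-ratio
    factor aligns the unknown global scale first; the returned `scale`
    could likewise rescale the VO translation, which is how a metric-ish
    depth network can resolve the monocular scale ambiguity."""
    scale = np.median(mono_depth) / (np.median(vo_depth) + eps)
    aligned = vo_depth * scale
    loss = np.mean(np.abs(mono_depth - aligned) / (mono_depth + eps))
    return loss, scale
```

Gradients of such a term flow into both networks, which is one way each task can improve the other during joint training.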
-
Self-supervised depth estimation has recently demonstrated promising performance compared to supervised methods on challenging indoor scenes. However, most efforts focus on exploiting photometric and geometric consistency via forward and backward image warping, based on monocular videos or stereo image pairs. The influence of defocus blur on depth estimation is neglected, resulting in limited performance for objects and scenes that are out of focus. In this work, we propose the first framework for simultaneous depth estimation from a single image and an image focal stack using depth-from-defocus and depth-from-focus algorithms. The proposed network learns an optimal depth mapping from the information contained in the blur of a single image, generates a simulated image focal stack and an all-in-focus image, and trains a depth estimator from the image focal stack. In addition to validating our method on both the synthetic NYUv2 dataset and a real DSLR dataset, we also collect our own dataset using a DSLR camera and further verify our method on it. Experiments demonstrate that our system surpasses the state-of-the-art supervised depth estimation method by over 4% in accuracy and achieves superb performance among methods without direct supervision on the synthesized NYUv2 dataset, which has rarely been explored.
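The blur cue that depth-from-defocus exploits comes from the thin-lens model: an object's circle of confusion grows with its distance from the focal plane. A minimal sketch of that relation (parameter names are ours; the paper's simulation is not specified here):

```python
def coc_diameter(obj_dist, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter (same units as the input
    distances). Blur is zero at the focal plane and grows with the
    object's distance from it, which is the physical signal a
    depth-from-defocus model inverts to recover depth, and which a
    simulated focal stack sweeps by varying focus_dist."""
    aperture = focal_len / f_number          # aperture diameter from f-number
    return aperture * focal_len * abs(obj_dist - focus_dist) / (
        obj_dist * (focus_dist - focal_len))
```

A simulated focal stack, as described above, would render each scene point blurred by a kernel of roughly this diameter for each focus setting.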
-
We present an unsupervised learning framework for simultaneously training single-view depth prediction and optical flow estimation models using unlabeled video sequences. Existing unsupervised methods often exploit brightness constancy and spatial smoothness priors to train depth or flow models. In this paper, we propose to leverage geometric consistency as additional supervisory signals. Our core idea is that for rigid regions we can use the predicted scene depth and camera motion to synthesize 2D optical flow by backprojecting the induced 3D scene flow. The discrepancy between the rigid flow (from depth prediction and camera motion) and the estimated flow (from optical flow model) allows us to impose a cross-task consistency loss. While all the networks are jointly optimized during training, they can be applied independently at test time. Extensive experiments demonstrate that our depth and flow models compare favorably with state-of-the-art unsupervised methods.
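The rigid-flow synthesis described above is standard multi-view geometry: backproject each pixel with its predicted depth, apply the estimated camera motion, and reproject. A minimal sketch, assuming known intrinsics K and a pose (R, t) between the two frames (the exact parameterization in the paper may differ):

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Synthesize the 2D optical flow induced by camera motion (R, t)
    for a rigid scene: backproject every pixel to 3D using its depth,
    transform it into the second camera's frame, and reproject.
    depth : (H, W) depth map of the first frame
    K     : 3x3 camera intrinsics
    Returns a (2, H, W) flow field (dx, dy)."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # backproject to 3D
    cam2 = R @ cam + t.reshape(3, 1)                       # apply rigid motion
    proj = K @ cam2
    proj = proj[:2] / proj[2:3]                            # reproject to pixels
    return (proj - pix[:2]).reshape(2, H, W)
```

The cross-task consistency loss is then a penalty on the difference between this synthesized flow and the flow network's output, restricted to rigid regions.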
-
Abstract—Robotic geo-fencing and surveillance systems require accurate monitoring of objects if/when they violate perimeter restrictions. In this paper, we seek a solution for depth imaging of such objects of interest at high accuracy (a few tens of cm) over extended ranges (up to 300 meters) from a single vantage point, such as a pole-mounted platform. Unfortunately, the rich literature on depth imaging using camera, lidar, and radar in isolation struggles to meet these tight requirements in real-world conditions. This paper proposes Metamoran, a solution that explores long-range depth imaging of objects of interest by fusing the strengths of two complementary technologies: mmWave radar and camera. Unlike cameras, mmWave radars offer excellent cm-scale depth resolution even at very long ranges. However, their angular resolution is at least 10× worse than that of camera systems. Fusing these two modalities is natural, but in scenes with high clutter and at long ranges, radar reflections are weak and experience spurious artifacts. Metamoran's core contribution is to leverage image segmentation and monocular depth estimation on camera images to help declutter radar and discover true object reflections. We perform a detailed evaluation of Metamoran's depth imaging capabilities in 400 diverse scenarios. Our evaluation shows that Metamoran estimates the depth of static objects up to 90 m away and moving objects up to 305 m away with a median error of 28 cm, an improvement of 13× over a naive radar+camera baseline and 23× over monocular depth estimation.
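One simple way to picture the camera-guided decluttering described above: project each radar return into the image and keep only returns that land inside the object's segmentation mask and whose range is plausible given the coarse monocular depth estimate. This sketch is our own illustration of the idea under those assumptions, not Metamoran's actual pipeline.

```python
import numpy as np

def declutter_radar(points, K, mask, mono_depth, tol=0.5):
    """Hypothetical camera-guided radar decluttering.
    points     : iterable of (X, Y, Z) radar returns in camera coordinates
    K          : 3x3 camera intrinsics
    mask       : (H, W) boolean segmentation mask of the object of interest
    mono_depth : coarse monocular depth estimate for that object
    Keeps returns that project inside the mask and whose range is within
    a relative tolerance `tol` of the monocular estimate, discarding
    clutter elsewhere in the scene."""
    keep = []
    H, W = mask.shape
    for X, Y, Z in points:
        if Z <= 0:
            continue                              # behind the camera
        u = int(K[0, 0] * X / Z + K[0, 2])        # pinhole projection
        v = int(K[1, 1] * Y / Z + K[1, 2])
        in_image = 0 <= u < W and 0 <= v < H
        if in_image and mask[v, u] and abs(Z - mono_depth) / mono_depth < tol:
            keep.append((X, Y, Z))
    return keep
```

The surviving returns can then supply the cm-scale range measurement, with the camera contributing the angular localization that radar lacks.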

