Title: Deep Depth Estimation on 360° Images with a Double Quaternion Loss
While 360° images are becoming ubiquitous due to the popularity of panoramic content, most existing depth estimation techniques, which were developed for perspective images, cannot be applied to them directly. In this paper, we present a deep-learning-based framework for estimating depth from 360° images. We present an adaptive depth refinement procedure that refines depth estimates using normal estimates and pixel-wise uncertainty scores. We introduce a double quaternion approximation to combine the losses for the joint estimation of depth and surface normals. Furthermore, we use the double quaternion formulation to measure stereo consistency between horizontally displaced depth maps, leading to a new loss function for training a depth estimation CNN. Results show that the new double-quaternion-based loss and the adaptive depth refinement procedure lead to better network performance. Our proposed method can be used with monocular as well as stereo images. When evaluated on several datasets, our method surpasses state-of-the-art methods on most metrics.
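As a rough sketch of how a joint depth-and-normal objective with per-pixel uncertainty weighting might be assembled, the snippet below combines an L1 depth term and a cosine normal term under a generic learned-uncertainty weighting. The function name, tensor shapes, and the simple weighted sum are illustrative assumptions; the paper's actual coupling of the two terms is the double-quaternion formulation described above, which is not reproduced here.

```python
import torch

def joint_depth_normal_loss(depth_pred, normal_pred, log_var,
                            depth_gt, normal_gt, lam=0.5):
    """Illustrative uncertainty-weighted joint depth + normal loss.

    Shapes (assumed): depth_pred, depth_gt, log_var -> (B, 1, H, W);
    normal_pred, normal_gt -> (B, 3, H, W), unit-length along dim 1.
    """
    depth_term = torch.abs(depth_pred - depth_gt)                 # L1 depth error
    cos_sim = (normal_pred * normal_gt).sum(dim=1, keepdim=True)  # n_pred . n_gt
    normal_term = 1.0 - cos_sim                                   # angular error proxy
    residual = depth_term + lam * normal_term
    # Pixels with high predicted uncertainty are down-weighted, with a
    # log-variance penalty that keeps the predicted uncertainty bounded.
    return (torch.exp(-log_var) * residual + log_var).mean()
```

A stereo-consistency term of a similar form could compare one view's depth map against the depth map of the horizontally displaced view after reprojection.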
Award ID(s): 1823321, 1564212
NSF-PAR ID: 10280515
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: 2020 International Conference on 3D Vision (3DV)
Page Range / eLocation ID: 524 to 533
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We present 3DVNet, a novel multi-view stereo (MVS) depth-prediction method that combines the advantages of previous depth-based and volumetric MVS approaches. Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions, resulting in highly accurate predictions which agree on the underlying scene geometry. Unlike existing depth-prediction techniques, our method uses a volumetric 3D convolutional neural network (CNN) that operates in world space on all depth maps jointly. The network can therefore learn meaningful scene-level priors. Furthermore, unlike existing volumetric MVS techniques, our 3D CNN operates on a feature-augmented point cloud, allowing for effective aggregation of multi-view information and flexible iterative refinement of depth maps. Experimental results show our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics on the ScanNet dataset, as well as on a selection of scenes from the TUM-RGBD and ICL-NUIM datasets. This shows that our method is both effective and able to generalize to new settings.
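The iterative, scene-level refinement loop described above can be pictured roughly as follows; `initial_depth`, `backproject`, `scene_net`, and `sample_updates` are hypothetical placeholders for the per-view depth predictor, the point-cloud lifting step, the volumetric 3D CNN, and the per-view update sampling, not the authors' actual modules.

```python
def refine_depths(features, poses, intrinsics,
                  initial_depth, backproject, scene_net, sample_updates,
                  n_iters=3):
    """Structural sketch of iterative, scene-level multi-view depth refinement."""
    depths = [initial_depth(f) for f in features]        # coarse per-view depth
    for _ in range(n_iters):
        # Lift all depth maps into one feature-augmented point cloud in world space.
        cloud = backproject(depths, poses, intrinsics, features)
        # A volumetric 3D CNN reasons jointly over the whole scene.
        scene_volume = scene_net(cloud)
        # Sample the scene representation back into each view and apply
        # per-pixel corrections so the depth maps agree on one geometry.
        depths = [d + sample_updates(scene_volume, p, k)
                  for d, p, k in zip(depths, poses, intrinsics)]
    return depths
```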
  2. Adaptive mesh refinement (AMR) is the art of solving PDEs on a mesh hierarchy with increasing mesh refinement at each level of the hierarchy. Accurate treatment on AMR hierarchies requires accurate prolongation of the solution from a coarse mesh to a newly defined finer mesh. For scalar variables, suitably high-order finite volume WENO methods can carry out such a prolongation. However, classes of PDEs, such as computational electrodynamics (CED) and magnetohydrodynamics (MHD), require that vector fields preserve a divergence constraint. The primal variables in such schemes consist of normal components of the vector field that are collocated at the faces of the mesh. As a result, the reconstruction and prolongation strategies for divergence constraint-preserving vector fields are necessarily more intricate. In this paper we present a fourth-order divergence constraint-preserving prolongation strategy that is analytically exact. Extension to higher orders using analytically exact methods is very challenging. To overcome that challenge, a novel WENO-like reconstruction strategy is invented that matches the moments of the vector field in the faces where the vector field components are collocated. Because this approach is almost divergence constraint-preserving, we call it WENO-ADP. To make it exactly divergence constraint-preserving, a touch-up procedure is developed that is based on a constrained least squares (CLSQ) method for restoring the divergence constraint up to machine accuracy; with the touch-up, it is called WENO-ADPT. It is shown that refinement ratios of two and higher can be accommodated. An item of broader interest in this work is that we have also been able to invent very efficient finite volume WENO methods, where the coefficients are very easily obtained and the multidimensional smoothness indicators can be expressed as perfect squares. We demonstrate that the divergence constraint-preserving strategy works at several high orders for divergence-free vector fields as well as for vector fields where the divergence has to match a charge density and its higher moments. We also show that our methods overcome the late-time instability that has been known to plague adaptive computations in CED.
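The constrained least squares (CLSQ) touch-up mentioned above amounts to finding the smallest correction to the reconstructed quantities that satisfies the discrete divergence constraints exactly. A minimal generic sketch, with an illustrative function name and with the constraint matrix `A` and right-hand side `b` left abstract:

```python
import numpy as np

def clsq_touch_up(x0, A, b):
    """Smallest correction to x0 such that A x = b holds exactly.

    Solves  min ||x - x0||^2  s.t.  A x = b  via its KKT system.  In the
    divergence touch-up setting, x0 would hold the reconstructed face-centred
    moments and A x = b the discrete divergence constraints.
    """
    n, m = x0.size, b.size
    kkt = np.block([[np.eye(n), A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([x0, b])
    sol = np.linalg.solve(kkt, rhs)   # [x; Lagrange multipliers]
    return sol[:n]
```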
  3. Many causal and policy effects of interest are defined by linear functionals of high-dimensional or non-parametric regression functions. Root-n-consistent and asymptotically normal estimation of the object of interest requires debiasing to reduce the effects of regularization and/or model selection on the object of interest. Debiasing is typically achieved by adding a correction term to the plug-in estimator of the functional, which leads to properties such as semi-parametric efficiency, double robustness, and Neyman orthogonality. We implement an automatic debiasing procedure based on automatically learning the Riesz representation of the linear functional using Neural Nets and Random Forests. Our method relies only on black-box evaluation oracle access to the linear functional and does not require knowledge of its analytic form. We propose a multitasking Neural Net debiasing method with stochastic gradient descent minimization of a combined Riesz representer and regression loss, while sharing representation layers for the two functions. We also propose a Random Forest method which learns a locally linear representation of the Riesz function. Even though our method applies to arbitrary functionals, we experimentally find that it performs well compared to the state-of-the-art neural-net-based algorithm of Shi et al. (2019) for the case of the average treatment effect functional. We also evaluate our method on the problem of estimating average marginal effects with continuous treatments, using semi-synthetic data on the effect of gasoline price changes on gasoline demand. Code is available at github.com/victor5as/RieszLearning.
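Concretely, once a fitted regression g_hat and a learned Riesz representer alpha_hat are in hand, the debiased estimate of a linear functional E[m(W; g)] takes the standard corrected-plug-in form sketched below; the names are illustrative, and the learning of alpha_hat itself (by neural net or random forest) is the paper's contribution and is not shown.

```python
import numpy as np

def debiased_estimate(m_of_ghat, alpha_hat, y, g_hat):
    """Debiased plug-in estimate of a linear functional E[m(W; g)].

    theta_hat = mean( m(W_i; g_hat) + alpha_hat(X_i) * (Y_i - g_hat(X_i)) )

    m_of_ghat : m evaluated at the fitted regression, e.g.
                g_hat(1, X_i) - g_hat(0, X_i) for the average treatment effect
    alpha_hat : estimated Riesz representer values at X_i
    y, g_hat  : observed outcomes and fitted regression values at X_i
    """
    return np.mean(m_of_ghat + alpha_hat * (y - g_hat))
```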
  4. Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can effectively achieve accuracy comparable to manual stereology but with higher repeatability, improved throughput, and less variation due to human factors by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. In the first of two novel contributions in this work, we propose a semi-automatic approach using a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our DL models to automatically count cells using unbiased stereology methods. This approach increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting methods, without requiring extra expert time. The second contribution of this work is a Multi-channel Input and Multi-channel Output (MIMO) method using a U-Net deep learning architecture for automatic cell counting in a stack of z-axis images (also known as disector stacks). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, thereby avoiding false negatives from overlapping cells in EDF images without the shortcomings of 3D and recurrent DL models. This contribution overcomes the issue of under-counting errors with EDF images due to overlapping cells in the z-plane (masking). We demonstrate the practical applications of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.
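For reference, the disector-based optical fractionator turns disector counts into a total-number estimate through the standard sampling-fraction relation; the function below is an illustrative sketch with assumed variable names.

```python
def optical_fractionator(total_q_minus, ssf, asf, tsf):
    """Total cell number N from disector counts: N = sum(Q-) / (ssf * asf * tsf).

    total_q_minus : total cells counted in the sampled disectors
                    (here, counts produced automatically by the MIMO model)
    ssf : section sampling fraction (sections sampled / total sections)
    asf : area sampling fraction (counting-frame area / x-y step area)
    tsf : thickness sampling fraction (disector height / mean section thickness)
    """
    return total_q_minus / (ssf * asf * tsf)
```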
  5. This paper presents a method of tracking multiple ground targets from an unmanned aerial vehicle (UAV) in a 3D reference frame. The tracking method uses a monocular camera and makes no assumptions on the shape of the terrain or the target motion. The UAV runs two cascaded estimators. The first is an Extended Kalman Filter (EKF) responsible for tracking the UAV’s state, such as its position and velocity relative to a fixed frame. The second is an EKF responsible for estimating a fixed number of landmarks within the camera’s field of view. Landmarks are parameterized by a quaternion associated with the bearing from the camera’s optical axis and an inverse distance parameter. The bearing quaternion allows for a minimal representation of each landmark’s direction and distance, a filter with no singularities, and a fast update rate, since few trigonometric function evaluations are required. Three methods for estimating the ground target positions are demonstrated: the first uses the landmark estimator directly on the targets, the second computes the target depth with a weighted average of converged landmark depths, and the third extends the target’s measured bearing vector to intersect a ground plane approximated from the landmark estimates. Simulation results show that the third target estimation method yields the most accurate results.
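The third target-estimation method, extending the measured bearing ray until it meets the ground plane fit to the landmark estimates, reduces to a ray-plane intersection. A minimal sketch with illustrative names (the plane fit itself is not shown):

```python
import numpy as np

def intersect_ground_plane(camera_pos, bearing, plane_normal, plane_point):
    """Intersect a unit bearing ray from the camera with an estimated ground plane.

    Returns the 3D target estimate, or None if the ray is (nearly) parallel
    to the plane or the intersection lies behind the camera.
    """
    denom = float(np.dot(plane_normal, bearing))
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to plane
    t = float(np.dot(plane_normal, plane_point - camera_pos)) / denom
    if t < 0.0:
        return None                                   # intersection behind camera
    return camera_pos + t * bearing
```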