Title: Lane Marking Verification for High Definition Map Maintenance Using Crowdsourced Images
Autonomous vehicles often rely on high-definition (HD) maps for navigation. However, lane markings (LMs) are not static objects: they change due to wear and tear from usage and from road reconstruction and maintenance. An incorrect match between LMs in the HD map and sensor readings may therefore lead to erroneous localization or even cause traffic accidents, so it is imperative to keep LMs up to date. However, frequently recollecting data with dedicated hardware and specialists to update HD maps is both cost-prohibitive and impractical. Here we propose to utilize crowdsourced images from multiple vehicles at different times to help verify LMs for HD map maintenance. We obtain the LM distribution in the image space by considering the camera pose uncertainty in perspective projection. Both LMs in the HD map and LMs in the image are treated as observations of LM distributions, which allows us to construct posterior conditional distributions (a.k.a. Bayesian belief functions) of LMs from either source. An LM is consistent if the belief functions from the map and the image pass a statistical hypothesis test. We further extend the Bayesian belief model into a sequential belief update using crowdsourced images. LMs with a higher probability of existence are kept in the HD map, whereas those with a lower probability of existence are removed from the HD map. We verify our approach using real data. Experimental results show that our method is capable of verifying and updating LMs in the HD map.
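
As a rough illustration of the sequential belief update described above, the sketch below maintains a per-LM existence probability and fuses a stream of crowdsourced detections with Bayes' rule. The detection and false-alarm rates, the observation sequence, and the keep/remove threshold are illustrative assumptions, not values from the paper, and the per-image hypothesis test on LM geometry is not reproduced here.

```python
# Sketch of a sequential Bayesian update of a lane marking's existence probability,
# fusing crowdsourced per-image detections. Detection / false-alarm rates, the
# observation sequence, and the decision threshold are illustrative assumptions.

def update_existence_belief(prior, detected, p_detect=0.9, p_false_alarm=0.1):
    """Posterior P(LM exists) after one crowdsourced image observation."""
    if detected:
        like_exists = p_detect               # P(LM seen in image | LM exists)
        like_absent = p_false_alarm          # P(LM seen in image | LM absent)
    else:
        like_exists = 1.0 - p_detect         # P(LM missed | LM exists)
        like_absent = 1.0 - p_false_alarm    # P(LM missed | LM absent)
    numerator = like_exists * prior
    return numerator / (numerator + like_absent * (1.0 - prior))

belief = 0.5                                 # uninformative prior on existence
for detected in [True, True, False, True]:   # hypothetical per-image test outcomes
    belief = update_existence_belief(belief, detected)

keep_in_map = belief > 0.5                   # keep the LM only if existence is likely
print(f"existence belief = {belief:.3f}, keep in HD map = {keep_in_map}")
```
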
Award ID(s):
1925037
NSF-PAR ID:
10206854
Journal Name:
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, Oct. 25-29, 2020
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Modern language models have the capacity to store and use immense amounts of knowledge about real-world entities, but it remains unclear how to update such knowledge stored in model parameters. While prior methods for updating knowledge in LMs successfully inject atomic facts, updated LMs fail to make inferences based on injected facts. In this work, we demonstrate that a context distillation-based approach can both impart knowledge about entities and propagate that knowledge to enable broader inferences. Our approach consists of two stages: transfer set generation and distillation on the transfer set. We first generate a transfer set by prompting a language model to generate continuations from the entity definition. Then, we update the model parameters so that the distribution of the LM (the student) matches the distribution of the LM conditioned on the definition (the teacher) on the transfer set. Our experiments demonstrate that this approach is more effective at propagating knowledge updates than fine-tuning and other gradient-based knowledge-editing methods. Moreover, it does not compromise performance in other contexts, even when injecting the definitions of up to 150 entities at once. 
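
A minimal sketch of the two-stage idea described in item 1, assuming a Hugging Face-style causal LM: the student is trained on a transfer-set continuation to match the teacher, i.e., the same LM conditioned on the entity definition. The model name, definition, continuation, and learning rate are placeholders, and the token alignment is approximate.

```python
# Sketch of context distillation (item 1): on a transfer-set continuation, the
# student LM is trained to match the teacher, i.e., the same LM conditioned on the
# entity definition. Model name, texts, and hyperparameters are placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2").eval()

definition = "Entity X is a hypothetical example entity used only for illustration."
continuation = "Entity X is best known for"          # one transfer-set example

def distill_step(optimizer):
    # Teacher sees definition + continuation; student sees only the continuation.
    teacher_ids = tok(definition + " " + continuation, return_tensors="pt").input_ids
    student_ids = tok(continuation, return_tensors="pt").input_ids
    n = student_ids.shape[1]                         # continuation length in tokens

    with torch.no_grad():
        # Approximate alignment: take the teacher's logits at the last n positions.
        t_logits = teacher(teacher_ids).logits[:, -n:, :]
    s_logits = student(student_ids).logits

    # KL(teacher || student), averaged over the continuation positions.
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1),
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
print(distill_step(optimizer))                       # one distillation step
```
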
  2. Abstract

    The key to detecting neutral hydrogen during the epoch of reionization (EoR) is to separate the cosmological signal from the dominant foreground radiation. We developed direct optimal mapping (DOM) to map interferometric visibilities; it contains only linear operations, with full knowledge of the point spread functions from visibilities to images. Here, we demonstrate a fast Fourier transform-based image power spectrum and its window functions computed from the DOM images. We use a noiseless simulation, based on the Hydrogen Epoch of Reionization Array Phase I configuration, to study the image power spectrum properties. The window functions show that less than 10⁻¹¹ of the integrated power leaks from the foreground-dominated region into the EoR window; the 2D and 1D power spectra also verify the separation between the foregrounds and the EoR.

     
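
A hedged sketch of an FFT-based image power spectrum in the spirit of item 2: it Fourier transforms a 2D map and azimuthally bins the squared modulus into a 1D spectrum. The test map, pixel size, binning, and normalization convention are placeholders; the DOM window-function calculation itself is not reproduced.

```python
# Sketch of an FFT-based 1D image power spectrum (item 2): FFT a 2D map, take the
# squared modulus, and average it in rings of constant wavenumber. The test map,
# pixel size, bin count, and normalization convention are placeholders.
import numpy as np

def image_power_spectrum_1d(image, pixel_size=1.0, n_bins=20):
    """Azimuthally averaged power spectrum of a 2D image (arbitrary normalization)."""
    ny, nx = image.shape
    ft = np.fft.fftshift(np.fft.fft2(image))
    power_2d = (np.abs(ft) ** 2 / (nx * ny)).ravel()   # 2D power, flattened

    # Radial wavenumber of every Fourier pixel.
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size))
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size))
    kr = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2).ravel()

    # Azimuthal (ring) average into a 1D spectrum.
    bins = np.linspace(0.0, kr.max(), n_bins + 1)
    which = np.digitize(kr, bins) - 1
    p1d = np.array([power_2d[which == i].mean() if np.any(which == i) else 0.0
                    for i in range(n_bins)])
    k_centers = 0.5 * (bins[:-1] + bins[1:])
    return k_centers, p1d

rng = np.random.default_rng(0)
k, p = image_power_spectrum_1d(rng.normal(size=(64, 64)))
print(k[:3], p[:3])
```
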
  3. ABSTRACT

    We present KaRMMa, a novel method for performing mass map reconstruction from weak-lensing surveys. We employ a fully Bayesian approach with a physically motivated lognormal prior to sample from the posterior distribution of convergence maps. We test KaRMMa on a suite of dark matter N-body simulations with simulated DES Y1-like shear observations. We show that KaRMMa outperforms the basic Kaiser–Squires mass map reconstruction in two key ways: (1) our best map point estimate has lower residuals compared to Kaiser–Squires; and (2) unlike the Kaiser–Squires reconstruction, the posterior distribution of KaRMMa maps is nearly unbiased in all summary statistics we considered, namely: one-point and two-point functions, and peak/void counts. In particular, KaRMMa successfully captures the non-Gaussian nature of the distribution of κ values in the simulated maps. We further demonstrate that the KaRMMa posteriors correctly characterize the uncertainty in all summary statistics we considered.

     
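
As a loose illustration of the lognormal prior mentioned in item 3, the sketch below draws a Gaussian random field and maps it through a shifted exponential so the resulting convergence field κ is skewed and bounded below. The grid size, correlation length, variance, and shift parameter are illustrative placeholders, not KaRMMa's actual prior settings.

```python
# Sketch of a lognormal convergence-field model (item 3): a Gaussian random field g
# is mapped through a shifted exponential so kappa is skewed and bounded below.
# Grid size, correlation length, variance, and shift are illustrative placeholders.
import numpy as np

def sample_lognormal_kappa(n=64, corr_pix=4.0, sigma_g=0.2, shift=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Gaussian random field: filter white noise with a Gaussian kernel in Fourier space.
    white = rng.normal(size=(n, n))
    kx = np.fft.fftfreq(n)[None, :]
    ky = np.fft.fftfreq(n)[:, None]
    kernel = np.exp(-0.5 * (2.0 * np.pi * corr_pix) ** 2 * (kx ** 2 + ky ** 2))
    g = np.real(np.fft.ifft2(np.fft.fft2(white) * kernel))
    g *= sigma_g / g.std()                       # rescale to the target variance

    # Lognormal transform; subtracting sigma_g**2 / 2 keeps the mean of kappa near 0.
    kappa = shift * (np.exp(g - 0.5 * sigma_g ** 2) - 1.0)
    return kappa

kappa = sample_lognormal_kappa()
# Skewed one-point distribution, bounded below by -shift (here -1).
print(kappa.mean(), kappa.std(), kappa.min())
```
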
  4. The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. Here we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph and mutually update their representations through graph-based message passing. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.
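
A minimal sketch of the relevance-scoring idea in item 4: each candidate KG node is scored by how probable an LM finds the node's text when conditioned on the QA context. The model, question, and node names are placeholders, and this is only in the spirit of QA-GNN's scoring, not the authors' exact formulation; the graph message-passing stage is not shown.

```python
# Sketch of LM-based relevance scoring (item 4): each candidate KG node is scored by
# the average log-probability its text receives from a causal LM conditioned on the
# QA context. Model, question, and node names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def relevance_score(qa_context, node_text):
    """Mean log-probability of node_text given qa_context (approximate token alignment)."""
    prefix = tok(qa_context, return_tensors="pt").input_ids
    full = tok(qa_context + " " + node_text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(lm(full).logits[0, :-1], dim=-1)
    # Positions whose next-token predictions fall inside the node text.
    node_positions = range(prefix.shape[1] - 1, full.shape[1] - 1)
    token_lp = [log_probs[i, full[0, i + 1]] for i in node_positions]
    return float(torch.stack(token_lp).mean())

question = "Where would you find a fox in a story for children? Answer: fairy tale."
for node in ["fairy tale", "hen house", "mathematics"]:   # hypothetical KG nodes
    print(node, round(relevance_score(question, node), 3))
```
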
  5. Abstract
    Background: Cryo-EM data generated by electron tomography (ET) contain images of individual protein particles in different orientations and at different tilt angles. Individual cryo-EM particles can be aligned to reconstruct a 3D density map of a protein structure. However, low contrast and high noise in particle images make it challenging to build 3D density maps at intermediate to high resolution (1–3 Å). To overcome this problem, we propose a fully automated cryo-EM 3D density map reconstruction approach based on deep learning particle picking.
    Results: A perfect 2D particle mask is fully automatically generated for every single particle. A computer vision image alignment algorithm (image registration) is then used to fully automatically align the particle masks. The difference between the particle image orientation angles is calculated to align the original particle images. Finally, a localized 3D density map is reconstructed between every two single-particle images that share the largest number of corresponding features, and the localized 3D density maps are averaged to reconstruct a final 3D density map. The reconstructed maps illustrate the potential to determine the structures of molecules from a few samples of good particles. In addition, using the localized particle samples (with no background) to generate the localized 3D density maps can improve resolution evaluation in experimental cryo-EM maps. Tested on two widely used datasets, Auto3DCryoMap is able to reconstruct good 3D density maps using only a few thousand protein particle images, far fewer than the hundreds of thousands of particles required by existing methods.
    Conclusions: We design a fully automated approach for cryo-EM 3D density map reconstruction (Auto3DCryoMap). Instead of increasing the signal-to-noise ratio by 2D class averaging, our approach uses 2D particle masks to produce locally aligned particle images. Auto3DCryoMap is able to accurately align structural particle shapes and to construct a decent 3D density map from only a few thousand aligned particle images, whereas existing tools require hundreds of thousands of particle images. Finally, by using the pre-processed particle images, Auto3DCryoMap reconstructs a better 3D density map than when using the original particle images.
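
As a rough illustration of the registration step in item 5, the sketch below estimates the translation that aligns one particle mask to another with scikit-image's phase cross-correlation and applies it to the raw particle image. The synthetic masks, noise level, and translation-only model are assumptions; Auto3DCryoMap's mask generation, rotation handling, and feature matching are not reproduced.

```python
# Sketch of registration-based alignment (item 5): estimate the in-plane shift that
# aligns one particle mask to a reference with phase cross-correlation, then apply
# it to the raw particle image. Masks, noise, and the translation-only model are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def disk_mask(n=64, center=(32, 32), radius=10):
    yy, xx = np.mgrid[:n, :n]
    return ((yy - center[0]) ** 2 + (xx - center[1]) ** 2 < radius ** 2).astype(float)

reference_mask = disk_mask(center=(32, 32))
moving_mask = disk_mask(center=(36, 29))                # same particle, displaced

# Shift that registers moving_mask onto reference_mask.
estimated_shift, error, _ = phase_cross_correlation(reference_mask, moving_mask)

# Apply the same shift to the (noisy) raw particle image.
raw_particle = moving_mask + 0.5 * np.random.default_rng(0).normal(size=(64, 64))
aligned_particle = nd_shift(raw_particle, estimated_shift)
print("estimated shift:", estimated_shift)              # approximately (-4, +3)
```
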