Title: Distributed and consistent multi-image feature matching via QuickMatch
In this work, we consider the multi-image object matching problem in distributed networks of robots. Multi-image feature matching is a keystone of many applications, including Simultaneous Localization and Mapping, homography estimation, object detection, and Structure from Motion. We first review the QuickMatch algorithm for multi-image feature matching. We then present NetMatch, an algorithm for distributing sets of features across computational units (agents) that largely preserves feature match quality and minimizes communication between agents (avoiding, in particular, the need to flood all data to all agents). Finally, we present an experimental application of both QuickMatch and NetMatch to an object matching test with low-quality images. The QuickMatch and NetMatch algorithms are compared with other standard matching algorithms in terms of preservation of match consistency. Our experiments show that QuickMatch and NetMatch can scale to larger numbers of images and features, and match more accurately than standard techniques.
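The match-consistency constraint central to multi-image matching (no cluster of matched features may contain two features from the same image) can be illustrated with a toy greedy clustering sketch. This is not the actual QuickMatch algorithm, which uses density-based clustering; the function name, the distance threshold, and the greedy merge order are illustrative assumptions only:

```python
import numpy as np

def multi_image_match(descriptors, image_ids, dist_thresh=0.5):
    """Toy multi-image feature clustering (illustrative sketch, not QuickMatch).

    Greedily merges the closest pairs of feature descriptors across images,
    rejecting any merge that would place two features from the same image in
    one cluster -- the consistency constraint discussed above.
    """
    n = len(descriptors)
    clusters = [{i} for i in range(n)]                 # one cluster per feature
    cluster_imgs = [{image_ids[i]} for i in range(n)]  # images present in each cluster
    owner = list(range(n))                             # feature -> cluster index
    # all pairwise descriptor distances, processed closest-first
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=2)
    pairs = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for dist, i, j in pairs:
        if dist > dist_thresh:
            break
        a, b = owner[i], owner[j]
        if a == b:
            continue
        # consistency constraint: skip merges that mix features of one image
        if cluster_imgs[a] & cluster_imgs[b]:
            continue
        clusters[a] |= clusters[b]
        cluster_imgs[a] |= cluster_imgs[b]
        for f in clusters[b]:
            owner[f] = a
        clusters[b], cluster_imgs[b] = set(), set()
    return [c for c in clusters if c]
```

With five features spread over three images, nearby descriptors from distinct images end up in one cluster while far-apart ones stay separate.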
Award ID(s):
1717656
PAR ID:
10547324
Author(s) / Creator(s):
 ;  ;  ;  ;  
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
The International Journal of Robotics Research
Volume:
39
Issue:
10-11
ISSN:
0278-3649
Format(s):
Medium: X
Size(s):
p. 1222-1238
Sponsoring Org:
National Science Foundation
More Like this
  1. In this work, we present a novel solution and experimental verification for the multi-image object matching problem. We first review the QuickMatch algorithm for multi-image feature matching and then show how it applies to an object matching test case. The presented experiment matches features across a large number of images more frequently and accurately than standard techniques. This experiment demonstrates the advantages of rapid multi-image matching, not only for improving existing algorithms, but also for use in new applications, such as object discovery and localization.
  2.
    Biodiversity image repositories are crucial sources of training data for machine learning approaches to biological research. Metadata, specifically metadata about object quality, is putatively an important prerequisite for selecting sample subsets for these experiments. This study demonstrates the importance of image-quality metadata in a species classification experiment involving a corpus of 1935 fish specimen images annotated with 22 metadata quality properties. A small subset of high-quality images produced an F1 score of 0.41, compared to 0.35 for a taxonomically matched subset of low-quality images, when used by a convolutional neural network approach to species identification. Using the full corpus of images revealed that image quality differed between correctly classified and misclassified images. We found that the visibility of all anatomical features was the most important quality property for classification accuracy. We suggest that biodiversity image repositories consider adopting a minimal set of image-quality metadata to support future machine learning projects.
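The quality-based subset selection described above can be sketched as a simple metadata filter. The property names and the threshold below are hypothetical, not taken from the study's actual annotation schema:

```python
def select_by_quality(corpus, quality_props, min_score=0.8):
    """Select a training subset using image-quality metadata (sketch only).

    `corpus` is a list of dicts mapping quality-property names to scores in
    [0, 1]; property names and the 0.8 threshold are illustrative assumptions.
    An image qualifies when every listed property meets the threshold.
    """
    return [img for img in corpus
            if all(img.get(p, 0.0) >= min_score for p in quality_props)]
```

A downstream classifier would then be trained on the returned high-quality subset and compared against a matched low-quality subset.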
  3. Underwater image enhancement is often perceived as detrimental to object detection. We propose a novel analysis of the interactions between enhancement and detection, elaborating on the potential of enhancement to improve detection. In particular, we evaluate object detection performance for each individual image rather than across the entire set, allowing a direct performance comparison of each image before and after enhancement. This approach enables the generation of unique queries to identify the outperforming and underperforming enhanced images compared to the original images. To accomplish this, we first produce enhanced image sets of the original images using recent image enhancement models. Each enhanced set is then divided into two groups: (1) images that outperform or match the performance of the original images and (2) images that underperform. Subsequently, we create mixed original-enhanced sets by replacing underperforming enhanced images with their corresponding original images. Next, we conduct a detailed analysis by evaluating all generated groups for quality and detection performance attributes. Finally, we perform an overlap analysis between the generated enhanced sets to identify cases where the enhanced images of different enhancement algorithms unanimously outperform, equally perform, or underperform the original images. Our analysis reveals that, when evaluated individually, most enhanced images achieve equal or superior performance compared to their original counterparts. The proposed method uncovers variations in detection performance that whole-set evaluation obscures: per-image evaluation reveals that only a small percentage of enhanced images cause an overall negative impact on detection. We also find that over-enhancement may lead to deteriorated object detection performance. Lastly, we note that enhanced images reveal hidden objects that were not annotated due to the low visibility of the original images.
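The per-image grouping and mixed-set construction described above reduce to a simple comparison of per-image detection scores. The function below is a minimal sketch under the assumption that each image has one scalar detection score (e.g. per-image average precision) before and after enhancement:

```python
def build_mixed_set(orig_scores, enh_scores):
    """Per-image enhancement grouping and mixed-set construction (sketch).

    Splits enhanced images into (1) those that match or outperform their
    originals and (2) those that underperform, then builds a mixed set that
    keeps the better version of every image, as described in the abstract.
    """
    n = len(orig_scores)
    outperform = [i for i in range(n) if enh_scores[i] >= orig_scores[i]]
    underperform = [i for i in range(n) if enh_scores[i] < orig_scores[i]]
    # mixed set: enhanced image where it helps, original image otherwise
    mixed = ["enhanced" if enh_scores[i] >= orig_scores[i] else "original"
             for i in range(n)]
    return outperform, underperform, mixed
```

Whole-set averages would hide which individual images drive a drop; the per-image split makes the small underperforming group explicit.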
  4. We present MixNMatch, a conditional generative model that learns to disentangle and encode background, object pose, shape, and texture from real images with minimal supervision, for mix-and-match image generation. We build upon FineGAN, an unconditional generative model, to learn the desired disentanglement and image generator, and leverage adversarial joint image-code distribution matching to learn the latent factor encoders. MixNMatch requires bounding boxes during training to model background, but requires no other supervision. Through extensive experiments, we demonstrate MixNMatch's ability to accurately disentangle, encode, and combine multiple factors for mix-and-match image generation, including sketch2color, cartoon2img, and img2gif applications. Our code/models/demo can be found at https://github.com/Yuheng-Li/MixNMatch 
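The mix-and-match idea can be illustrated as latent-code assembly: each factor of the generated image is taken from a (possibly different) source image. This is a conceptual sketch only; the four factor names follow the abstract, but the real MixNMatch encoders are learned networks, and the flat concatenation below is an assumption:

```python
import numpy as np

def mix_codes(codes_by_image, recipe):
    """Assemble a mix-and-match latent code (conceptual sketch).

    `codes_by_image` maps an image key to its four factor codes; `recipe`
    names which source image supplies each factor. The chosen codes are
    concatenated into a single vector for a (hypothetical) generator input.
    """
    factors = ("background", "pose", "shape", "texture")
    parts = [codes_by_image[recipe[f]][f] for f in factors]
    return np.concatenate(parts)
```

For example, taking background and shape from one image and pose and texture from another yields a single combined code.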
  5.
    Online social networks provide a convenient platform for the spread of rumors, which can lead to serious consequences such as economic losses and public panic. The classical rumor blocking problem aims to launch a set of nodes as a positive cascade that competes with misinformation in order to limit the spread of rumors. However, most related research has been based on a one-dimensional diffusion model. In reality, more than one feature is associated with an object. A user's impression of the object is determined not by a single feature but by her overall evaluation of all features associated with it. Thus, the influence spread of the object can be decomposed into the spread of multiple features. Based on this observation, we design a multi-feature diffusion model (MF-model) in this paper and formulate a multi-feature rumor blocking (MFRB) problem on a multi-layer network structure according to this model. To solve the MFRB problem, we design a creative sampling method called Multi-Sampling, which can be applied to this multi-layer network structure. We then propose a Revised-IMM algorithm and obtain a satisfactory approximate solution to MFRB. Finally, we evaluate the proposed algorithm through experiments on real datasets, which show the effectiveness of Revised-IMM and its advantage over baseline algorithms.
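The multi-feature view above (a user's impression as an aggregate of per-feature evaluations) can be sketched as a toy linear aggregation. The paper's MF-model instead diffuses each feature on its own network layer; the weighted sum and the assumption that weights sum to 1 are illustrative simplifications:

```python
def overall_impression(feature_evals, weights):
    """A user's overall impression of an object (toy sketch).

    Aggregates her evaluation of each feature with per-feature weights
    (assumed to sum to 1) -- a linear stand-in for the multi-feature
    aggregation described in the abstract, not the paper's MF-model.
    """
    return sum(weights[f] * feature_evals[f] for f in feature_evals)
```

Under this view, limiting a rumor about one feature only partially shifts the user's overall impression, which motivates treating the features as separate cascades.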