Title: Rigid registration algorithm based on the minimization of the total variation of the difference map
Image registration is broadly used in various scenarios in which similar scenes in different images are to be aligned. However, image registration becomes challenging when the contrasts and backgrounds in the images are vastly different. This work proposes using the total variation of the difference map between two images (TVDM) as a dissimilarity metric in rigid registration. A method based on TVDM minimization is implemented for rigid image registration. The method is tested with both synthesized and real experimental data that have various noise and background conditions. The performance of the proposed method is compared with the results of other rigid registration methods. It is demonstrated that the proposed method is highly accurate and robust and outperforms the other methods in all of the tests. The new algorithm provides a robust option for image registration tasks that are critical to many nanoscale X-ray imaging and microscopy applications.
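The TVDM idea above can be sketched directly: warp the moving image with candidate rigid parameters, take the total variation of the difference image, and minimize over those parameters. A minimal sketch (the Powell optimizer and all function names are assumptions for illustration, not the paper's implementation):

```python
import numpy as np
from scipy import ndimage, optimize

def total_variation(img):
    # Anisotropic total variation: sum of absolute finite differences
    # along both image axes.
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def tvdm(params, moving, fixed):
    # params = (angle_deg, dy, dx): apply a rigid transform to the moving
    # image, then score the total variation of the difference map.
    angle, dy, dx = params
    warped = ndimage.rotate(moving, angle, reshape=False, order=1)
    warped = ndimage.shift(warped, (dy, dx), order=1)
    return total_variation(warped - fixed)

def register_rigid(moving, fixed, x0=(0.0, 0.0, 0.0)):
    # Derivative-free minimization of the TVDM dissimilarity metric.
    res = optimize.minimize(tvdm, x0, args=(moving, fixed), method="Powell")
    return res.x
```

Because the total variation of the difference map penalizes residual edges rather than residual intensities, a dissimilarity of this form can stay informative when the two images have different contrasts or backgrounds.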
Award ID(s): 1832613
NSF-PAR ID: 10353769
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Synchrotron Radiation
Volume: 29
Issue: 4
ISSN: 1600-5775
Page Range / eLocation ID: 1085 to 1094
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Combining a hyperspectral (HS) image and a multi-spectral (MS) image---an example of image fusion---can result in a spatially and spectrally high-resolution image. Despite the plethora of fusion algorithms in remote sensing, a necessary prerequisite, namely registration, is mostly ignored. This limits their application to well-registered images from the same source. In this article, we propose and validate an integrated registration and fusion approach (code available at https://github.com/zhouyuanzxcv/Hyperspectral). The registration algorithm minimizes a least-squares (LSQ) objective function with the point spread function (PSF) incorporated together with a nonrigid freeform transformation applied to the HS image and a rigid transformation applied to the MS image. It can handle images with significant scale differences and spatial distortion. The fusion algorithm takes the full high-resolution HS image as an unknown in the objective function. Assuming that the pixels lie on a low-dimensional manifold invariant to local linear transformations from spectral degradation, the fusion optimization problem leads to a closed-form solution. The method was validated on the Pavia University, Salton Sea, and the Mississippi Gulfport datasets. When the proposed registration algorithm is compared to its rigid variant and two mutual information-based methods, it has the best accuracy for both the nonrigid simulated dataset and the real dataset, with an average error less than 0.15 pixels for nonrigid distortion of maximum 1 HS pixel. When the fusion algorithm is compared with current state-of-the-art algorithms, it has the best performance on images with registration errors as well as on simulations that do not consider registration effects. 
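The fusion objective described above can be illustrated with its data-fidelity core: the unknown high-resolution HS image should reproduce the observed HS image after spatial degradation and the observed MS image after spectral degradation. A minimal sketch under simplifying assumptions (a box-average stand-in for the PSF, a known spectral response matrix `R`, and the registration and transformation terms omitted entirely):

```python
import numpy as np

def degrade_spatial(Z, factor):
    # Stand-in for PSF blur plus decimation: block-average the full-resolution
    # cube Z (h, w, bands) down by `factor` in each spatial dimension.
    h, w, b = Z.shape
    return Z.reshape(h // factor, factor, w // factor, factor, b).mean(axis=(1, 3))

def degrade_spectral(Z, R):
    # R (ms_bands, hs_bands) maps the many HS bands to the few broad MS bands.
    return Z @ R.T

def lsq_objective(Z, hs_obs, ms_obs, R, factor):
    # Least-squares data fidelity: Z must explain both observations.
    r_hs = degrade_spatial(Z, factor) - hs_obs
    r_ms = degrade_spectral(Z, R) - ms_obs
    return float(np.sum(r_hs**2) + np.sum(r_ms**2))
```

In the paper this objective additionally carries the estimated PSF, a nonrigid freeform transform on the HS image, and a rigid transform on the MS image; the closed-form fusion solution comes from the low-dimensional manifold assumption, none of which is reproduced here.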
  2. There have been great advances in bridge inspection damage detection involving the use of deep learning models. However, automated detection models currently fall short of giving an inspector an understanding of how the damage has progressed from one inspection to the next. The rate-of-change of the damage is a critical piece of information used by engineers to determine appropriate maintenance and rehabilitation actions to prevent structural failures. We propose a simple methodology for registering two bridge inspection videos or still images, collected at different stages of deterioration, so that trained model predictions may be directly measured and damage progression compared. The changes may be documented and presented to the inspector so that they may quickly evaluate key interest regions in the inspection video or image. Three approaches referred to as rigid, deformable, and hybrid image registration methods were experimentally tested and evaluated based on their ability to preserve the geometric characteristics of the referenced image. It was found in all experiments that the rigid, homography-based transformations performed the best for this application over a state-of-the-art deformable registration method, RANSAC-Flow.
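A homography of the kind the study found most reliable can be estimated from point correspondences with the standard Direct Linear Transform. A minimal sketch (pure NumPy; the function names are illustrative, and the feature-matching front end that would produce the correspondences is not reproduced):

```python
import numpy as np

def estimate_homography(src, dst):
    # Direct Linear Transform: build A h = 0 from point pairs and take the
    # SVD null vector as the 3x3 homography H mapping src -> dst.
    # src, dst are (N, 2) arrays with N >= 4 non-degenerate points.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    # Map (N, 2) points through H in homogeneous coordinates.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice the correspondences come from matched features and are filtered with a robust estimator such as RANSAC before the final fit, since a single bad match can ruin the least-squares solution.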

  3. Abstract Solar images observed in different channels with different instruments are crucial to the study of solar activity. However, the images have different fields of view, causing them to be misaligned. It is essential to accurately register the images for studying solar activity from multiple perspectives. Image registration is formulated as an optimization problem that maps an image to be registered onto a reference image. In this paper, we propose a novel coarse-to-fine method to register multichannel solar images. In the coarse registration step, we use the regular step gradient descent algorithm as an optimizer to maximize the normalized cross-correlation metric. The fine registration step uses the Powell–Brent algorithm as an optimizer and minimizes the Mattes mutual information metric. We selected five pairs of images with different resolutions, rotation angles, and shifts to compare and evaluate our results against those obtained by scale-invariant feature transform and phase correlation. The images were observed by the 1.6 m Goode Solar Telescope at Big Bear Solar Observatory and the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Furthermore, we used mutual information and registration time as criteria to quantify the registration results. The results show that the proposed method not only reaches better registration precision but also has better robustness. We also highlight that the method works well for time-series solar image registration.
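The coarse-to-fine structure can be sketched as a two-stage search: an exhaustive coarse pass over integer shifts, then a subpixel refinement seeded by the coarse result. For brevity this sketch scores both stages with normalized cross-correlation over translations only; the paper's fine stage instead uses Powell–Brent with the Mattes mutual information metric, and all names below are assumptions:

```python
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    # Normalized cross-correlation of two equal-size images (1 = identical).
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def coarse_register(moving, fixed, max_shift=5):
    # Coarse stage: exhaustive integer-shift search maximizing NCC.
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = ncc(ndimage.shift(moving, (dy, dx), order=1), fixed)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

def fine_register(moving, fixed, coarse_shift):
    # Fine stage: subpixel refinement with Powell, minimizing negative NCC,
    # started from the coarse result.
    f = lambda s: -ncc(ndimage.shift(moving, s, order=1), fixed)
    return optimize.minimize(f, np.asarray(coarse_shift, float), method="Powell").x
```

Seeding the fine optimizer with the coarse result is the point of the two-stage design: the smooth local optimizer only has to cover the sub-pixel basin rather than the full search range.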
  4. Purpose

    The ability to register image data to a common coordinate system is a critical feature of virtually all imaging studies. However, in spite of the abundance of literature on the subject and the existence of several variants of registration algorithms, their practical utility remains problematic, as commonly acknowledged even by developers of these methods.

    Methods

    A new registration method is presented that utilizes a Hamiltonian formalism and constructs registration as a sequence of symplectomorphic maps in conjunction with a novel phase space regularization. To validate the framework, a panel of analytically expressed deformations is developed that includes deformations based on known physical processes in MRI and reproduces the distortions and artifacts typically present in images collected with different MRI modalities.
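For context, a symplectomorphic map can be obtained as the flow of Hamilton's equations; the sketch below is the generic textbook construction, not the paper's specific regularized functional:

```latex
% Hamilton's equations generate a flow \varphi_t on phase space (q, p):
\dot{q} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q}.
% Each flow map \varphi_t preserves the symplectic form
% \omega = dq \wedge dp, so a registration map built as a composition
\varphi = \varphi_{t_N} \circ \cdots \circ \varphi_{t_1}
% of such flows is itself symplectomorphic.
```

The appeal of this construction for registration is that the composed map is smooth and invertible by design, so the estimated deformation cannot fold or tear the image domain.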

    Results

    The method is demonstrated on the three different magnetic resonance imaging (MRI) modalities by mapping between high resolution anatomical (HRA) volumes, medium resolution diffusion weighted MRI (DW‐MRI) and HRA volumes, and low resolution functional MRI (fMRI) and HRA volumes.

    Conclusions

    The method showed excellent performance, and the panel of deformations was instrumental in quantifying its repeatability and reproducibility in comparison with several available alternative approaches.

  5. Abstract Background

    Lung cancer is the deadliest and second most common cancer in the United States, largely due to the lack of symptoms that would enable early diagnosis. Pulmonary nodules are small abnormal regions that can potentially be correlated with the occurrence of lung cancer. Early detection of these nodules is critical because it can significantly improve the patient's survival rate. Thoracic thin-sliced computed tomography (CT) scanning has emerged as a widely used method for the diagnosis and prognosis of lung abnormalities.

    Purpose

    The standard clinical workflow for detecting pulmonary nodules relies on radiologists analyzing CT images to assess the risk factors of cancerous nodules. However, this approach can be error-prone due to the varied causes of nodule formation, such as pollutants and infections. Deep learning (DL) algorithms have recently demonstrated remarkable success in medical image classification and segmentation. As DL becomes an ever more important assistant to radiologists in nodule detection, it is imperative to ensure that the DL algorithm and the radiologist can better understand each other's decisions. This study aims to develop a framework that integrates explainable AI methods to achieve accurate pulmonary nodule detection.

    Methods

    A robust and explainable detection (RXD) framework is proposed, focusing on reducing false positives in pulmonary nodule detection. Its implementation is based on an explanation supervision method, which uses radiologists' nodule contours as supervision signals to force the model to learn nodule morphologies, improving its ability to learn from small datasets. In addition, two imputation methods are applied to the nodule region annotations to reduce the noise within human annotations and give the model robust attributions that meet human expectations. Sets of 480, 265, and 265 CT images from the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset are used for training, validation, and testing.
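The explanation supervision idea, in which the radiologist's contour steers the model's attribution map, can be sketched as a composite loss: the usual task loss plus a penalty on the gap between the attribution and the annotation. The MSE alignment term, the weight `lam`, and all function names are illustrative assumptions; the paper's imputation kernels are not reproduced here:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross-entropy for the nodule / no-nodule task prediction.
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def explanation_supervision_loss(pred_prob, label, attribution, contour_mask, lam=1.0):
    # Task loss on the classification output, plus a penalty that pulls the
    # model's attribution map toward the radiologist's contour annotation.
    task = bce(np.asarray([pred_prob]), np.asarray([label]))
    expl = float(((attribution - contour_mask) ** 2).mean())
    return task + lam * expl
```

The alignment term is what lets a small dataset go further: each annotated contour constrains the whole attribution map, not just the scalar label.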

    Results

    Using only 10, 30, 50, and 100 training samples sequentially, our method consistently improves the classification performance and explanation quality of the baseline in terms of Area Under the Curve (AUC) and Intersection over Union (IoU). In particular, our framework with a learnable imputation kernel improves IoU over the baseline by 24.0% to 80.0%. A pre-defined Gaussian imputation kernel achieves even greater improvements over the baseline, ranging from 38.4% to 118.8%. Compared to the baseline trained on 100 samples, our method shows a smaller drop in AUC when trained on fewer samples. A comprehensive comparison of interpretability shows that our method aligns better with expert opinions.

    Conclusions

    A pulmonary nodule detection framework was demonstrated using public thoracic CT image datasets. The framework integrates the robust explanation supervision (RES) technique to ensure the quality of both nodule classification and morphology. The method can reduce the workload of radiologists and enable them to focus on diagnosing and managing potentially cancerous pulmonary nodules at an early stage, improving outcomes for lung cancer patients.
