Title: Difference of anisotropic and isotropic TV for segmentation under blur and Poisson noise
In this paper, we aim to segment an image degraded by blur and Poisson noise. We adopt a smoothing-and-thresholding (SaT) segmentation framework that finds a piecewise-smooth solution, followed by k-means clustering to segment the image. Specifically, for the image smoothing step, we replace the least-squares fidelity for Gaussian noise in the Mumford-Shah model with a maximum a posteriori (MAP) term to deal with Poisson noise, and we incorporate the weighted difference of anisotropic and isotropic total variation (AITV) as a regularization to promote the sparsity of image gradients. For such a nonconvex model, we develop a specific splitting scheme and utilize a proximal operator to apply the alternating direction method of multipliers (ADMM). Convergence analysis is provided to validate the efficacy of the ADMM scheme. Numerical experiments on various segmentation scenarios (grayscale/color and multiphase) showcase that our proposed method outperforms a number of segmentation methods, including the original SaT.
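As an illustration of the two SaT stages described above, the Python sketch below only evaluates the AITV regularizer with forward differences and performs the final k-means thresholding on an already-smoothed image. The smoothing stage itself (the ADMM solver for the Poisson MAP, AITV-regularized model) is assumed to be available separately; the names aitv, threshold_stage, and alpha are illustrative, not taken from the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def aitv(u, alpha=0.5):
    """Weighted difference of anisotropic and isotropic TV of a 2D image u,
    i.e. ||grad u||_1 - alpha * ||grad u||_{2,1}, using forward differences
    with replicate boundary handling."""
    ux = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward differences
    uy = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward differences
    anisotropic = np.abs(ux).sum() + np.abs(uy).sum()   # l1 norm of the gradient
    isotropic = np.sqrt(ux**2 + uy**2).sum()            # pixelwise l2 norm of the gradient
    return anisotropic - alpha * isotropic

def threshold_stage(u_smooth, n_phases=2, seed=0):
    """Second SaT stage: cluster the smoothed intensities with k-means and
    return one label per phase at every pixel."""
    km = KMeans(n_clusters=n_phases, n_init=10, random_state=seed)
    labels = km.fit_predict(u_smooth.reshape(-1, 1))
    return labels.reshape(u_smooth.shape)
```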
Award ID(s): 2151235, 1952644, 1854434
NSF-PAR ID: 10440861
Journal Name: Frontiers in Computer Science
Volume: 5
ISSN: 2624-9898
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In a class of piecewise-constant image segmentation models, we propose to incorporate a weighted difference of anisotropic and isotropic total variation (AITV) to regularize the partition boundaries in an image. In particular, we replace the total variation regularization in the Chan-Vese segmentation model and in a fuzzy region competition model with the proposed AITV. To deal with the nonconvex nature of AITV, we apply the difference-of-convex algorithm (DCA), in which the subproblems can be minimized by the primal-dual hybrid gradient method with linesearch. The convergence of the DCA scheme is analyzed. In addition, a generalization to color image segmentation is discussed. In the numerical experiments, we compare the proposed models with classic convex approaches and two-stage segmentation methods (smoothing and then thresholding) on various images, showing that our models are effective in image segmentation and robust with respect to impulsive noise.
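To make the difference-of-convex step in item 1 concrete, here is a minimal sketch on a toy problem: a least-squares data term with the weighted l1-minus-l2 penalty, where the concave part is linearized at the current iterate and the resulting convex subproblem is solved by ISTA. It stands in for the segmentation energy and the primal-dual hybrid gradient inner solver used in the paper; all names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dca_l1_minus_l2(A, b, lam=0.1, alpha=0.8, outer_iters=30, inner_iters=100):
    """DCA sketch for min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - alpha*||x||_2)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the data term
    for _ in range(outer_iters):
        # linearize the concave part: subgradient of ||x||_2 (0 is a valid choice at x = 0)
        nrm = np.linalg.norm(x)
        w = x / nrm if nrm > 0 else np.zeros_like(x)
        z = x.copy()
        for _ in range(inner_iters):             # ISTA on the convex subproblem
            grad = A.T @ (A @ z - b) - lam * alpha * w
            z = soft_threshold(z - step * grad, step * lam)
        x = z
    return x
```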
  2. This paper introduces a new image smoothing filter based on a feed-forward convolutional neural network (CNN) for images corrupted by impulse noise. The filter integrates a very deep architecture, a regularization method, and batch normalization. This fully integrated approach yields an effectively denoised and smoothed image with a high similarity measure to the original noise-free image. Specific structural metrics are used to assess the denoising process and how effectively the impulse noise was removed. The CNN model can also handle noise levels not seen during the training phase. The proposed model is constructed as a 20-layer network trained on 400 images from the Berkeley Segmentation Dataset (BSD). Results are reported on a standard testing set of 8 natural images not seen during training. The merits of the proposed method are weighed in terms of high similarity measures and structural metrics that conform to the original image and compare favorably to results obtained with state-of-the-art denoising filters.
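A skeletal PyTorch version of a deep Conv-BN-ReLU denoiser of the kind described in item 2 might look as follows; the depth, channel width, and residual (noise-predicting) output are assumptions for illustration, not the paper's exact architecture or training setup.

```python
import torch.nn as nn

class DeepDenoisingCNN(nn.Module):
    """Minimal sketch of a 20-layer Conv-BN-ReLU denoising network."""
    def __init__(self, depth=20, channels=64, in_channels=1):
        super().__init__()
        layers = [nn.Conv2d(in_channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),   # batch normalization, as described above
                nn.ReLU(inplace=True),
            ]
        layers.append(nn.Conv2d(channels, in_channels, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # residual formulation (an assumption here): the stack predicts the noise,
        # which is subtracted from the noisy input to produce the smoothed image
        return x - self.net(x)
```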
  3. Iron overload, a complication of repeated blood transfusions, can cause tissue damage and organ failure. The body has no regulatory mechanism to excrete excess iron, so iron overload must be closely monitored to guide therapy and measure treatment response. The concentration of iron in the liver is a reliable marker for total body iron content and is now measured noninvasively with magnetic resonance imaging (MRI). MRI produces a diagnostic image by measuring the signals emitted from the body in the presence of a constant magnetic field and radiofrequency pulses. At each pixel, the signal decay constant, T2*, can be calculated, providing insight about the structure of each tissue. Liver iron content can be quantified based on this T2* value because signal decay accelerates with increasing iron concentration. We developed a method to automatically segment the liver from the MRI image to accurately calculate iron content. Our current algorithm utilizes the active contour model for image segmentation, which iteratively evolves a curve until it reaches an edge or a boundary. We applied this algorithm to each MRI image in addition to a map of pixelwise T2* values, combining basic image processing with imaging physics. One of the limitations of this segmentation model is how it handles noise in the MRI data. Recent advancements in deep learning have enabled researchers to utilize convolutional neural networks to denoise and reconstruct images. We used the Trainable Nonlinear Reaction Diffusion network architecture to denoise the MRI images, allowing for smoother segmentation while preserving fine details. 
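The pixelwise T2* computation mentioned in item 3 can be sketched as a log-linear fit of the multi-echo signal decay S(TE) = S0 * exp(-TE / T2*). The NumPy snippet below is a minimal illustration; the array shapes and the simple least-squares fit are assumptions, and clinical pipelines typically use more robust nonlinear fitting with noise-floor handling.

```python
import numpy as np

def t2star_map(signals, echo_times):
    """Pixelwise T2* from multi-echo magnitudes.
    signals: (n_echoes, H, W) array; echo_times: length-n_echoes sequence (ms).
    Fits log S = log S0 - TE / T2* by linear least squares at every pixel."""
    te = np.asarray(echo_times, dtype=float)
    n, h, w = signals.shape
    y = np.log(np.clip(signals, 1e-6, None)).reshape(n, -1)   # avoid log(0)
    A = np.stack([np.ones_like(te), -te], axis=1)             # design matrix [1, -TE]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)              # rows: [log S0, 1/T2*]
    r2star = np.clip(coef[1], 1e-6, None)                     # decay rate R2* = 1/T2*
    return (1.0 / r2star).reshape(h, w)
```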
  4. We propose a regularization-based deblurring method that works efficiently for galaxy images. The spatial resolution of a ground-based telescope is generally limited by seeing conditions and is much worse than that of space-based telescopes. This circumstance has generated considerable research interest in the restoration of spatial resolution. Since image deblurring is a typical ill-posed inverse problem, solutions tend to be unstable. To obtain a stable solution, much research has adopted regularization-based methods for image deblurring, but the conventional regularization terms are not necessarily appropriate for galaxy images. Although galaxies have exponential or Sérsic profiles, conventional regularization assumes that image profiles behave linearly in space. The significant deviation between this assumption and real galaxies blurs the images and smooths out detailed structures. Regularization in the logarithmic domain, i.e., the magnitude domain, should therefore provide a more appropriate assumption, which we explore in this study. We formulate the deblurring of galaxy images as the minimization of an objective function with a Tikhonov regularization term in the magnitude domain and introduce an iterative algorithm that minimizes this objective with a primal-dual splitting method. We investigate the feasibility of the proposed method using simulated and observed images. In the simulation, we blur galaxy images with a realistic point spread function and add both Gaussian and Poisson noise. For the evaluation with observed images, we use galaxy images taken by the Subaru HSC-SSP. Both evaluations show that our method successfully recovers the spatial resolution of the deblurred images and significantly outperforms conventional methods. The code is publicly available on GitHub: https://github.com/kzmurata-astro/PSFdeconv_amag
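As a rough illustration of the magnitude-domain idea in item 4, the toy sketch below minimizes a least-squares data term plus a Tikhonov-type penalty on finite differences of the magnitude image, using PyTorch autograd with Adam in place of the paper's primal-dual splitting algorithm. The penalty operator, optimizer, and all names are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def deblur_magnitude_tikhonov(y, psf, lam=1e-2, n_iters=500, lr=0.05):
    """Toy deblurring with a Tikhonov penalty in the magnitude (log) domain.
    y: blurred image (H, W); psf: point spread function (kh, kw)."""
    x = torch.clamp(y.clone(), min=1e-6).requires_grad_(True)
    k = (psf / psf.sum())[None, None]                       # normalized PSF as a conv kernel
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        blurred = F.conv2d(x.clamp(min=1e-6)[None, None], k, padding="same")[0, 0]
        data = ((blurred - y) ** 2).sum()                   # least-squares data fidelity
        mag = -2.5 * torch.log10(x.clamp(min=1e-6))         # magnitude-domain image
        reg = ((mag[:, 1:] - mag[:, :-1]) ** 2).sum() \
            + ((mag[1:, :] - mag[:-1, :]) ** 2).sum()       # Tikhonov penalty on magnitude gradients
        (data + lam * reg).backward()
        opt.step()
    return x.detach().clamp(min=0)
```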
  5. We introduce a general tensor model suitable for data-analytic tasks on heterogeneous datasets, wherein there are joint low-rank structures within groups of observations but also discriminative structures across different groups. To capture such complex structures, a double core tensor (DCOT) factorization model is introduced together with a family of smoothing loss functions. By leveraging the proposed smoothing function, the model accurately estimates the model factors even in the presence of missing entries. A linearized ADMM method is employed to solve regularized versions of the DCOT factorization that avoid large tensor operations and large memory storage requirements. Further, we theoretically establish its global convergence, together with consistency of the estimates of the model parameters. The effectiveness of the DCOT model is illustrated on several real-world examples, including image completion, recommender systems, subspace clustering, and detecting modules in heterogeneous omics multi-modal data, since it provides more insightful decompositions than conventional tensor methods.
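The "joint plus discriminative" structure in item 5 can be pictured with a toy Tucker-style reconstruction in which every group shares one core and common factor matrices but adds its own group-specific core. This reflects one reading of the described structure and is not the exact DCOT factorization or its linearized ADMM solver.

```python
import numpy as np

def dcot_style_reconstruction(shared_core, group_cores, factors):
    """Toy reconstruction: each group's tensor is rebuilt from a shared core plus a
    group-specific core, multiplied along each mode by common factor matrices.
    shared_core, group_cores[g]: (r1, r2, r3); factors: (U1, U2, U3) with shapes
    (I, r1), (J, r2), (K, r3)."""
    U1, U2, U3 = factors
    recons = []
    for Dg in group_cores:
        core = shared_core + Dg                  # joint + discriminative structure
        recons.append(np.einsum("abc,ia,jb,kc->ijk", core, U1, U2, U3))
    return recons
```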