

Title: A Weighted Difference of Anisotropic and Isotropic Total Variation for Relaxed Mumford--Shah Color and Multiphase Image Segmentation
In a class of piecewise-constant image segmentation models, we propose to incorporate a weighted difference of anisotropic and isotropic total variation (AITV) to regularize the partition boundaries in an image. In particular, we replace the total variation regularization in the Chan--Vese segmentation model and in a fuzzy region competition model with the proposed AITV. To deal with the nonconvex nature of AITV, we apply the difference-of-convex algorithm (DCA), in which the subproblems can be minimized by the primal-dual hybrid gradient method with linesearch. The convergence of the DCA scheme is analyzed. In addition, a generalization to color image segmentation is discussed. In the numerical experiments, we compare the proposed models with classic convex approaches and two-stage segmentation methods (smoothing and then thresholding) on various images, showing that our models are effective in image segmentation and robust to impulsive noise.
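For concreteness, here is a minimal NumPy sketch of the AITV regularizer described above: the anisotropic (ℓ₁) norm of the discrete gradient minus a weight α times the isotropic (ℓ₂,₁) norm. The forward-difference discretization, boundary handling, and default α are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def aitv(u, alpha=0.5):
    """Weighted anisotropic-minus-isotropic TV of a 2D image u.

    AITV(u) = sum(|ux| + |uy|) - alpha * sum(sqrt(ux^2 + uy^2)),
    with forward differences, alpha in [0, 1], and Neumann-style
    boundaries. A sketch of the regularizer, not the paper's code.
    """
    ux = np.diff(u, axis=1, append=u[:, -1:])  # forward x-differences (last column zero)
    uy = np.diff(u, axis=0, append=u[-1:, :])  # forward y-differences (last row zero)
    aniso = np.abs(ux).sum() + np.abs(uy).sum()  # ell_1 (anisotropic) term
    iso = np.sqrt(ux**2 + uy**2).sum()           # ell_{2,1} (isotropic) term
    return aniso - alpha * iso
```

With alpha = 0 this reduces to anisotropic TV, and increasing alpha sharpens the preference for gradients aligned with the coordinate axes, which is the nonconvexity the DCA scheme is designed to handle.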
Award ID(s):
1854434 1952644
NSF-PAR ID:
10335364
Author(s) / Creator(s):
Date Published:
Journal Name:
SIAM Journal on Imaging Sciences
Volume:
14
Issue:
3
ISSN:
1936-4954
Page Range / eLocation ID:
1078--1113
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. In this paper, we aim to segment an image degraded by blur and Poisson noise. We adopt a smoothing-and-thresholding (SaT) segmentation framework that finds a piecewise-smooth solution, followed by k-means clustering to segment the image. Specifically, for the image smoothing step, we replace the least-squares fidelity for Gaussian noise in the Mumford--Shah model with a maximum a posteriori (MAP) term to deal with Poisson noise, and we incorporate the weighted difference of anisotropic and isotropic total variation (AITV) as a regularization to promote the sparsity of image gradients. For such a nonconvex model, we develop a specific splitting scheme and utilize a proximal operator to apply the alternating direction method of multipliers (ADMM). A convergence analysis is provided to validate the efficacy of the ADMM scheme. Numerical experiments on various segmentation scenarios (grayscale/color and multiphase) show that our proposed method outperforms a number of segmentation methods, including the original SaT.
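As one concrete reading of the thresholding stage, the sketch below clusters the intensities of an already-smoothed image with k-means. The smoothing stage (the AITV-regularized MAP model solved by ADMM in the paper) is assumed to have produced `smooth_img`, and the scikit-learn dependency and function name are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def sat_threshold(smooth_img, k=2):
    """Thresholding stage of a SaT pipeline: cluster the pixel
    intensities of a smoothed grayscale image into k phases.

    A minimal sketch; the preceding smoothing stage is assumed
    to have removed blur and Poisson noise already.
    """
    h, w = smooth_img.shape
    feats = smooth_img.reshape(-1, 1)                 # one intensity feature per pixel
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return labels.reshape(h, w)                       # phase label per pixel
```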
  2. Sparsity promoting functions (SPFs) are commonly used in optimization problems to find solutions that are sparse in some basis. For example, the ℓ₁-regularized wavelet model and the Rudin–Osher–Fatemi total variation (ROF-TV) model are among the most well-known models for signal and image denoising, respectively. However, recent work demonstrates that convexity is not always desirable in SPFs. In this paper, we replace convex SPFs with their induced nonconvex SPFs and develop algorithms for the resulting model by exploiting the intrinsic structure of the nonconvex SPFs. These functions are defined as the difference of the convex SPF and its Moreau envelope. We also present simulations illustrating the performance of a particular SPF and the developed algorithms in image denoising.
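To make the construction concrete, a minimal worked instance follows: with the absolute value as the convex SPF, the Moreau envelope is the Huber function, and subtracting it yields a bounded nonconvex SPF. The smoothing parameter μ is an assumption chosen for illustration.

```latex
% Induced nonconvex SPF: convex SPF minus its Moreau envelope
% (\mu > 0 is an illustrative parameter).
\[
  \operatorname{env}_{\mu} f(x) \;=\; \min_{y}\Big( f(y) + \tfrac{1}{2\mu}\,\|x - y\|_2^2 \Big),
  \qquad
  f_{\mu}(x) \;=\; f(x) - \operatorname{env}_{\mu} f(x).
\]
% For f = |\cdot| on the real line, the envelope is the Huber function, so
\[
  f_{\mu}(x) \;=\;
  \begin{cases}
    |x| - \dfrac{x^2}{2\mu}, & |x| \le \mu,\\[4pt]
    \dfrac{\mu}{2}, & |x| > \mu,
  \end{cases}
\]
% which is bounded and nonconvex yet matches |x| near the origin: it
% penalizes small coefficients like \ell_1 while avoiding the bias
% that \ell_1 places on large ones.
```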
  3. Background: This study outlines an image processing algorithm for accurate and consistent lung segmentation in chest radiographs of critically ill adults and children, which are typically obscured by medical equipment. In particular, this work focuses on applications in the analysis of acute respiratory distress syndrome (ARDS), a critical illness with a mortality rate of 40% that affects 200,000 patients in the United States and 3 million globally each year. Methods: Chest radiographs were obtained from critically ill adults (n = 100), adults diagnosed with ARDS (n = 25), and children (n = 100) hospitalized at Michigan Medicine. Physicians annotated the lung field of each radiograph to establish the ground truth. A Total Variation-based Active Contour (TVAC) lung segmentation algorithm was developed and compared to multiple state-of-the-art methods, including a deep learning model (U-Net), a random walker algorithm, and an active spline model, using the Sørensen–Dice coefficient to measure segmentation accuracy. Results: The TVAC algorithm accurately segmented lung fields in all patients in the study. For the adult cohort, the mean Dice coefficient was 0.86 ±0.04 (min: 0.76) for TVAC, 0.89 ±0.12 (min: 0.01) for U-Net, 0.74 ±0.19 (min: 0.15) for the random walker algorithm, and 0.64 ±0.17 (min: 0.20) for the active spline model. For the pediatric cohort, the mean Dice coefficient was 0.85 ±0.04 (min: 0.75) for TVAC, 0.87 ±0.09 (min: 0.56) for U-Net, 0.67 ±0.18 (min: 0.18) for the random walker algorithm, and 0.61 ±0.18 (min: 0.18) for the active spline model. Conclusion: The proposed algorithm demonstrates the most consistent performance of all segmentation methods tested. These results suggest that TVAC can accurately identify lung fields in chest radiographs of critically ill adults and children.
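For reference, here is a minimal sketch of the Sørensen–Dice coefficient used to score the segmentations above; the function name and the convention returned for two empty masks are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Sørensen–Dice coefficient between two binary lung masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    A sketch of the accuracy metric used to compare TVAC, U-Net,
    the random walker, and the active spline model.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total > 0 else 1.0
```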
  4. Recently, with the advent of the Internet of Everything and 5G networks, the amount of data generated by edge scenarios such as autonomous vehicles, smart industry, 4K/8K video, virtual reality (VR), and augmented reality (AR) has exploded. These trends impose real-time, hardware-dependence, low-power, and security requirements on computing facilities and have rapidly popularized edge computing. Meanwhile, artificial intelligence (AI) workloads have dramatically shifted the computing paradigm from cloud services to mobile applications. Unlike AI in the cloud or on mobile platforms, which is widely deployed and well studied, the performance of AI workloads and their resource impact at the edge are not yet well understood; an in-depth analysis and comparison of their advantages, limitations, performance, and resource consumption in edge environments is lacking. In this paper, we perform a comprehensive study of representative AI workloads on edge platforms. We first survey modern edge hardware and popular AI workloads. We then quantitatively evaluate three categories (classification, image-to-image, and segmentation) of the most popular and widely used AI applications in realistic edge environments based on the Raspberry Pi, Nvidia TX2, and other devices. We find that the interaction between hardware and neural network models incurs non-negligible impact and overhead on AI workloads at the edge. Our experiments show that variation in performance and resource footprint limits the suitability of certain workloads and algorithms for edge platforms, and that users need to select the appropriate workload, model, and algorithm based on the requirements and characteristics of edge environments.
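As a rough illustration of the kind of measurement such an evaluation involves, the sketch below times repeated inferences of an arbitrary model callable, discarding warmup runs to avoid cold-start effects. The function, its parameters, and the protocol are assumptions for illustration, not the paper's actual benchmarking harness.

```python
import time
import numpy as np

def measure_latency(run_inference, input_batch, warmup=5, trials=50):
    """Mean and std of per-inference latency for a model callable.

    run_inference: any callable mapping an input batch to predictions
    (e.g., a wrapper around an on-device runtime on a Raspberry Pi or
    Nvidia TX2). Warmup runs are executed but not timed.
    """
    for _ in range(warmup):
        run_inference(input_batch)          # untimed warmup iterations
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        run_inference(input_batch)
        times.append(time.perf_counter() - start)
    return np.mean(times), np.std(times)
```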
  5. We develop a framework for reconstructing images that are sparse in an appropriate transform domain from polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and incident-energy spectrum are unknown. Assuming that the object that we wish to reconstruct consists of a single material, we obtain a parsimonious measurement-model parameterization by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace-integral form. The mass-attenuation spectrum is then expanded into basis functions using B splines of order one. We consider a Poisson noise model and establish conditions for biconvexity of the corresponding negative log-likelihood (NLL) function with respect to the density-map and mass-attenuation spectrum parameters. We derive a block-coordinate descent algorithm for constrained minimization of a penalized NLL objective function, where penalty terms ensure nonnegativity of the mass-attenuation spline coefficients and nonnegativity and gradient-map sparsity of the density-map image, imposed using a convex total-variation (TV) norm; the resulting objective function is biconvex. This algorithm alternates between a Nesterov proximal-gradient (NPG) step and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) iteration for updating the image and mass-attenuation spectrum parameters, respectively. We prove the Kurdyka-Łojasiewicz property of the objective function, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Our framework applies to other NLLs and signal-sparsity penalties, such as the lognormal NLL and the ℓ₁ norm of 2D discrete wavelet transform (DWT) image coefficients. Numerical experiments with simulated and real X-ray CT data demonstrate the performance of the proposed scheme.
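To make the "Laplace-integral form" concrete, here is one plausible rendering of the measurement equation after the change of variables from photon energy to mass attenuation; the symbol names are assumptions chosen for illustration.

```latex
% Noiseless polychromatic CT measurement as a Laplace integral of the
% mass-attenuation spectrum (symbols are illustrative assumptions).
\[
  \mathcal{I}^{\mathrm{out}}(s) \;=\; \int_{0}^{\infty} \iota(\kappa)\, e^{-\kappa s}\, \mathrm{d}\kappa,
  \qquad
  s \;=\; \int_{\ell} \alpha(x)\, \mathrm{d}x,
\]
% where \iota(\kappa) is the unknown mass-attenuation spectrum, \kappa is
% the mass attenuation, \alpha is the density map, and s is the line
% integral of the density along ray \ell. The noiseless measurement is
% thus the Laplace transform of \iota evaluated at the density line
% integral, which is what makes the single-spectrum parameterization
% parsimonious.
```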