Sparsity finds applications in diverse areas such as statistics, machine learning, and signal processing. Computations over sparse structures are less complex than their dense counterparts and require less storage. This paper proposes a heuristic method for retrieving sparse approximate solutions of optimization problems by minimizing the $\ell_p$ quasi-norm, where $0<p<1$.
Breast cancer has emerged as the most life-threatening disease among women around the world. Early detection and treatment of breast cancer are thought to reduce the need for surgery and boost the survival rate. This article investigates Magnetic Resonance Imaging (MRI) segmentation techniques for breast cancer diagnosis. Kapur's entropy-based multilevel thresholding is used to determine optimal threshold values for breast DCE-MRI lesion segmentation using Gorilla Troops Optimization (GTO). An improved variant, GTORBL, is developed by incorporating rotational opposition-based learning (RBL) into GTO and is applied to the same problem. The proposed approaches are tested on 100 T2-weighted sagittal (T2 WS) DCE-MRI slices from 20 patients and are compared with the Tunicate Swarm Algorithm (TSA), Particle Swarm Optimization (PSO), the Arithmetic Optimization Algorithm (AOA), the Slime Mould Algorithm (SMA), Multi-Verse Optimization (MVO), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF). The Dice Similarity Coefficient (DSC), sensitivity, and accuracy achieved by the proposed GTO-based approach are reported.
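Kapur's method scores a set of thresholds by the summed Shannon entropies of the grey-level classes they induce on the image histogram; the metaheuristic (GTO, PSO, etc.) then searches for the thresholds that maximize this objective. A minimal sketch of the objective only, with our own variable names, which may differ from the paper's exact formulation:

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's objective: sum of Shannon entropies of the grey-level
    classes induced by the thresholds (given as histogram bin indices).
    Illustrative sketch; not the paper's exact implementation."""
    p = hist / hist.sum()                      # normalised histogram
    edges = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # class probability mass
        if w <= 0:
            continue
        q = p[lo:hi] / w                       # within-class distribution
        q = q[q > 0]
        total += -(q * np.log(q)).sum()        # class entropy
    return total
```

An optimizer proposes candidate threshold vectors and keeps the ones with the highest entropy sum; multilevel segmentation then labels each pixel by the class its grey level falls into.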
- NSF-PAR ID: 10432932
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Scientific Reports
- Volume: 13
- Issue: 1
- ISSN: 2045-2322
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract An iterative two-block algorithm for minimizing the $\ell_p$ quasi-norm, $0<p<1$, subject to convex constraints is proposed. The proposed algorithm requires solving for the roots of a scalar polynomial, as opposed to applying a soft thresholding operator as in the case of $\ell_1$ norm minimization. The algorithm's merit relies on its ability to solve the $\ell_p$ quasi-norm minimization subject to any convex constraint set. For the specific case of constraints defined by differentiable functions with Lipschitz continuous gradient, a second, faster algorithm is proposed. Using a proximal gradient step, we mitigate the convex projection step and hence enhance the algorithm's speed while proving its convergence. We present various applications where the proposed algorithm excels, namely, sparse signal reconstruction, system identification, and matrix completion. The results demonstrate the significant gains obtained by the proposed algorithm compared to other $\ell_p$ quasi-norm based methods presented in previous literature.
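The scalar subproblem behind such methods is the proximal operator of $\lambda|t|^p$; for $p=1/2$ its stationarity condition becomes a cubic polynomial in $\sqrt{t}$, which is where root-finding replaces soft thresholding. A hedged sketch, not the paper's algorithm: it brackets the relevant stationary point by bisection, compares it against $t=0$, and shows the closed-form $\ell_1$ prox for contrast.

```python
import numpy as np

def prox_lp(y, lam, p, iters=60):
    """argmin_t 0.5*(t-y)**2 + lam*|t|**p for 0<p<1 (scalar case).
    Locates the larger root of t - |y| + lam*p*t**(p-1) = 0 by bisection,
    then compares the objective there against t = 0 (hard-threshold branch).
    Illustrative sketch only."""
    a = abs(y)
    if a == 0.0:
        return 0.0
    g = lambda t: t - a + lam * p * t ** (p - 1)   # stationarity residual
    t_min = (lam * p * (1 - p)) ** (1.0 / (2 - p)) # argmin of g on (0, inf)
    if g(t_min) >= 0:                              # no interior root: prox is 0
        return 0.0
    lo, hi = t_min, a                              # g(lo) < 0 < g(hi), g increasing here
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    t = 0.5 * (lo + hi)
    obj = lambda u: 0.5 * (u - a) ** 2 + lam * u ** p
    t = t if obj(t) < obj(0.0) else 0.0
    return np.sign(y) * t

def soft_threshold(y, lam):
    """Closed-form l1 prox, for comparison."""
    return np.sign(y) * max(abs(y) - lam, 0.0)
```

Large inputs are shrunk only slightly by the $\ell_p$ prox, while small inputs are set exactly to zero, which is the behaviour that favours sparse solutions.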
Abstract For polyhedral constrained optimization problems and a feasible point $\textbf{x}$, it is shown that the projection of the negative gradient on the tangent cone, denoted $\nabla_\varOmega f(\textbf{x})$, has an orthogonal decomposition of the form $\boldsymbol{\beta}(\textbf{x}) + \boldsymbol{\varphi}(\textbf{x})$. At a stationary point, $\nabla_\varOmega f(\textbf{x}) = \textbf{0}$, so $\Vert \nabla_\varOmega f(\textbf{x})\Vert$ reflects the distance to a stationary point. Away from a stationary point, $\Vert \boldsymbol{\beta}(\textbf{x})\Vert$ and $\Vert \boldsymbol{\varphi}(\textbf{x})\Vert$ measure different aspects of optimality, since $\boldsymbol{\beta}(\textbf{x})$ only vanishes when the KKT multipliers at $\textbf{x}$ have the correct sign, while $\boldsymbol{\varphi}(\textbf{x})$ only vanishes when $\textbf{x}$ is a stationary point in the active manifold. As an application of the theory, an active set algorithm is developed for convex quadratic programs which adapts the flow of the algorithm based on a comparison between $\Vert \boldsymbol{\beta}(\textbf{x})\Vert$ and $\Vert \boldsymbol{\varphi}(\textbf{x})\Vert$.
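For the special case $\varOmega = \{\textbf{x} \ge \textbf{0}\}$ the decomposition can be written down explicitly: active components whose multiplier $g_i$ has the wrong sign contribute to $\boldsymbol{\beta}$, free components contribute $-g_i$ to $\boldsymbol{\varphi}$, and the two parts have disjoint supports, hence are orthogonal. An illustrative sketch of this special case only, not the paper's general polyhedral construction:

```python
import numpy as np

def tangent_cone_decomposition(x, g, tol=1e-12):
    """For Omega = {x >= 0}: projection of -g onto the tangent cone at x,
    split into beta (active components with wrong-sign multiplier, i.e.
    g_i < 0 where x_i = 0) and phi (the negative gradient on the free
    components). Supports are disjoint, so beta and phi are orthogonal.
    Illustrative special case, not the general polyhedral theory."""
    active = x <= tol
    beta = np.where(active, np.maximum(-g, 0.0), 0.0)
    phi = np.where(~active, -g, 0.0)
    return beta + phi, beta, phi
```

Here $\boldsymbol{\beta}$ vanishes exactly when $g_i \ge 0$ on the active set (correct multiplier signs), and $\boldsymbol{\varphi}$ vanishes exactly when the gradient is zero on the free variables (stationarity in the active manifold), mirroring the roles described in the abstract.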
Abstract We establish rapid mixing of the random-cluster Glauber dynamics on random $\varDelta$-regular graphs for all $q\ge 1$ and $p<p_u(q,\varDelta)$, where the threshold $p_u(q,\varDelta)$ corresponds to a uniqueness/non-uniqueness phase transition for the random-cluster model on the (infinite) $\varDelta$-regular tree. It is expected that this threshold is sharp, and that for $q>2$ the Glauber dynamics on random $\varDelta$-regular graphs undergoes an exponential slowdown at $p_u(q,\varDelta)$. More precisely, we show that for every $q\ge 1$, $\varDelta \ge 3$, and $p<p_u(q,\varDelta)$, with probability $1-o(1)$ over the choice of a random $\varDelta$-regular graph on $n$ vertices, the Glauber dynamics for the random-cluster model has mixing time $\varTheta(n \log n)$. As a corollary, we deduce fast mixing of the Swendsen–Wang dynamics for the Potts model on random $\varDelta$-regular graphs for every $q\ge 2$, in the tree uniqueness region. Our proof relies on a sharp bound on the “shattering time”, i.e., the number of steps required to break up any configuration into clusters of size $O(\log n)$. This is established by analyzing a delicate and novel iterative scheme to simultaneously reveal the underlying random graph together with the clusters of the Glauber dynamics configuration on it, at a given time.
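A single heat-bath step of the random-cluster Glauber dynamics resamples one uniformly random edge conditional on the rest of the configuration: the edge is opened with probability $p$ if its endpoints are already joined by other open edges, and with probability $p/(p+q(1-p))$ otherwise. A sketch written from this standard definition (not code from the paper); the connectivity check is a plain BFS:

```python
import random
from collections import deque

def connected_without(adj_open, u, v, e):
    """BFS over open edges: are u and v joined by open edges other than e?"""
    seen, dq = {u}, deque([u])
    while dq:
        w = dq.popleft()
        if w == v:
            return True
        for x, f in adj_open[w]:
            if f != e and x not in seen:
                seen.add(x)
                dq.append(x)
    return False

def glauber_step(edges, state, p, q, n):
    """One heat-bath update: pick a random edge and resample its
    open/closed state conditional on the rest of the configuration."""
    e = random.randrange(len(edges))
    u, v = edges[e]
    adj = [[] for _ in range(n)]          # adjacency of currently open edges
    for i, (a, b) in enumerate(edges):
        if state[i]:
            adj[a].append((b, i))
            adj[b].append((a, i))
    joined = connected_without(adj, u, v, e)
    prob_open = p if joined else p / (p + q * (1.0 - p))
    state[e] = random.random() < prob_open
    return state
```

For $q=1$ the conditional probability is $p$ in both cases, recovering independent bond percolation; for $q>1$ edges inside existing clusters are favoured, which is what makes the shattering analysis in the abstract nontrivial.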
Abstract We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F=\exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}}\approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the “white-noise approximation” is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
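The white-noise model can be checked numerically: mixing a Porter-Thomas-like ideal distribution with the uniform one and computing the linear cross-entropy benchmark recovers the mixing weight $F$, since the uniform component contributes zero to the benchmark. A toy sketch with made-up sizes, not a simulation of an actual circuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 14                          # qubits, so D = 2**n outcomes
D = 2 ** n

# Porter-Thomas-like ideal output distribution: exponential weights, normalised
p_ideal = rng.exponential(size=D)
p_ideal /= p_ideal.sum()
p_unif = np.full(D, 1.0 / D)

F = 0.3                         # fidelity surviving the noise
p_noisy = F * p_ideal + (1 - F) * p_unif   # the white-noise model

# Linear cross-entropy benchmark: XEB = D * E_{x ~ p_noisy}[p_ideal(x)] - 1,
# here computed exactly from the two distributions.
xeb = D * np.dot(p_noisy, p_ideal) - 1
# For the mixture, xeb = F * (D * sum(p_ideal**2) - 1), and the bracket is
# close to 1 for Porter-Thomas statistics, so xeb is close to F.
print(xeb)
```

This illustrates why XEB is used as a fidelity proxy in the low-fidelity regime: the uniform "white noise" part cancels out of the benchmark, leaving (approximately) the coefficient $F$.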
Abstract Consider two half-spaces $H_1^+$ and $H_2^+$ in $\mathbb{R}^{d+1}$ whose bounding hyperplanes $H_1$ and $H_2$ are orthogonal and pass through the origin. The intersection $\mathbb{S}_{2,+}^d := \mathbb{S}^d \cap H_1^+ \cap H_2^+$ is a spherical convex subset of the $d$-dimensional unit sphere $\mathbb{S}^d$, which contains a great subsphere of dimension $d-2$ and is called a spherical wedge. Choose $n$ independent random points uniformly at random on $\mathbb{S}_{2,+}^d$ and consider the expected facet number of the spherical convex hull of these points. It is shown that, up to terms of lower order, this expectation grows like a constant multiple of $\log n$. A similar behaviour is obtained for the expected facet number of a homogeneous Poisson point process on $\mathbb{S}_{2,+}^d$. The result is compared to the corresponding behaviour of classical Euclidean random polytopes and of spherical random polytopes on a half-sphere.
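Sampling the model itself is straightforward: normalised Gaussian vectors are uniform on $\mathbb{S}^d$, and folding the first two coordinates to be nonnegative maps every point into the wedge without biasing the sign-symmetric distribution. A sampling sketch of the kind one might use for Monte Carlo experiments; it is our own construction, not taken from the paper:

```python
import numpy as np

def sample_wedge(n, d, rng):
    """n i.i.d. uniform points on the wedge S^d ∩ {x1 >= 0} ∩ {x2 >= 0}
    in R^{d+1}. Normalised Gaussians are uniform on S^d; taking absolute
    values of the first two coordinates folds the sphere onto the wedge,
    which preserves uniformity because the spherical measure is invariant
    under coordinate sign flips. Illustrative sketch only."""
    x = rng.standard_normal((n, d + 1))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on S^d
    x[:, :2] = np.abs(x[:, :2])                     # fold into H1+ ∩ H2+
    return x
```

With such samples one could, for instance, estimate facet numbers of spherical convex hulls empirically and compare the growth in $n$ against the $\log n$ asymptotics stated above.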