(Early Access) Effective tissue clutter filtering is critical for non-contrast ultrasound imaging of slow blood flow in small vessels. Independent component analysis (ICA) has been considered by other groups for ultrasound clutter filtering in the past and was shown to be superior to principal component analysis (PCA)-based methods. However, it has not been considered specifically for slow flow applications or revisited since the onset of other slow flow-focused advancements in beamforming and tissue filtering, namely angled plane wave beamforming and full spatiotemporal singular value decomposition (SVD) (i.e., PCA-based) tissue filtering. In this work, we aim to develop a full spatiotemporal ICA-based tissue filtering technique facilitated by plane wave applications and compare it to SVD filtering. We compare ICA and SVD filtering in terms of optimal image quality in simulations and phantoms, as well as in terms of optimal correlation to the ground-truth blood signal in simulations. Additionally, we propose an adaptive blood independent component sorting and selection method. We show that optimal and adaptive ICA can consistently separate blood from tissue better than PCA-based methods using simulations and phantoms. Additionally, we demonstrate initial in vivo feasibility in ultrasound data of a liver tumor.
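The full spatiotemporal SVD (PCA-based) tissue filter that the ICA approach is compared against can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame layout, the function name, and the assumption that tissue occupies exactly the `n_tissue` largest singular values are all simplifications for demonstration.

```python
import numpy as np

def svd_clutter_filter(frames, n_tissue):
    """Minimal spatiotemporal SVD (PCA-based) tissue filter sketch.

    frames   : (nz, nx, nt) slow-time stack of beamformed frames.
    n_tissue : number of low-order singular components assumed to
               capture tissue clutter (these are removed).
    Returns the filtered (blood) stack with tissue components zeroed.
    """
    nz, nx, nt = frames.shape
    # Casorati matrix: each column is one vectorized frame.
    casorati = frames.reshape(nz * nx, nt)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    # Tissue dominates the largest singular values; zero them out.
    s_blood = s.copy()
    s_blood[:n_tissue] = 0.0
    blood = (u * s_blood) @ vt
    return blood.reshape(nz, nx, nt)
```

An ICA-based variant replaces the SVD factorization with an ICA decomposition of the same Casorati matrix and selects blood components by a sorting criterion rather than by singular-value order.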
CW_ICA: an efficient dimensionality determination method for independent component analysis
Independent component analysis (ICA) is a widely used blind source separation method for signal pre-processing. The determination of the number of independent components (ICs) is crucial for achieving optimal performance, as an incorrect choice can result in either under-decomposition or over-decomposition. In this study, we propose a robust method to automatically determine the optimal number of ICs, named the column-wise independent component analysis (CW_ICA). CW_ICA divides the mixed signals into two blocks and applies ICA separately to each block. A quantitative measure, derived from the rank-based correlation matrix computed from the ICs of the two blocks, is utilized to determine the optimal number of ICs. The proposed method is validated and compared with the existing determination methods using simulation and scalp EEG data. The results demonstrate that CW_ICA is a reliable and robust approach for determining the optimal number of ICs. It offers computational efficiency and can be seamlessly integrated with different ICA methods.
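The block-splitting idea can be sketched as below. This is one possible reading of the procedure, not the paper's implementation: the choice to split along the sample axis, the use of mixing-matrix columns for the rank correlation, and the 0.9 reproducibility threshold are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import FastICA

def cw_ica_n_components(X, max_ics, threshold=0.9, seed=0):
    """Estimate the number of ICs via cross-block reproducibility.

    X : (n_channels, n_samples) mixed signals.
    Splits the samples into two blocks, runs ICA on each block, and
    counts components whose mixing-matrix columns have a rank
    correlation above `threshold` with some column from the other
    block. Split axis and threshold are illustrative choices.
    """
    half = X.shape[1] // 2
    blocks = (X[:, :half], X[:, half:])
    mixings = []
    for blk in blocks:
        ica = FastICA(n_components=max_ics, random_state=seed,
                      max_iter=1000)
        ica.fit(blk.T)                   # samples x channels
        mixings.append(ica.mixing_)      # (n_channels, max_ics)
    a, b = mixings
    n_reproducible = 0
    for i in range(max_ics):
        # Sign and order of ICs are arbitrary, so match by the best
        # absolute rank correlation over all columns of the other block.
        best = max(abs(spearmanr(a[:, i], b[:, j])[0])
                   for j in range(max_ics))
        if best > threshold:
            n_reproducible += 1
    return n_reproducible
```

Components that reappear in both halves of the data are taken as genuine sources; components that fail to replicate are attributed to noise or over-decomposition.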
- Award ID(s): 2153492
- PAR ID: 10612454
- Publisher / Repository: Nature Portfolio
- Date Published:
- Journal Name: Scientific Reports
- Volume: 14
- Issue: 1
- ISSN: 2045-2322
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Data‐driven methods have been widely used in functional magnetic resonance imaging (fMRI) data analysis. They extract latent factors, generally, through the use of a simple generative model. Independent component analysis (ICA) and dictionary learning (DL) are two popular data‐driven methods that are based on two different forms of diversity—statistical properties of the data—statistical independence for ICA and sparsity for DL. Despite their popularity, the comparative advantage of emphasizing one property over another in the decomposition of fMRI data is not well understood. Such a comparison is made harder due to the differences in the modeling assumptions between ICA and DL, as well as within different ICA algorithms, where each algorithm exploits a different form of diversity. In this paper, we propose the use of objective global measures, such as time course frequency power ratio, network connection summary, and graph theoretical metrics, to gain insight into the role that different types of diversity have on the analysis of fMRI data. Four ICA algorithms that account for different types of diversity and one DL algorithm are studied. We apply these algorithms to real fMRI data collected from patients with schizophrenia and healthy controls. Our results suggest that no one particular method has the best performance using all metrics, implying that the optimal method will change depending on the goal of the analysis. However, we note that in none of the scenarios we test does the highly popular Infomax provide the best performance, demonstrating the cost of exploiting a limited form of diversity.
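The first of the global measures mentioned, a time course frequency power ratio, could be computed along these lines. This is a generic sketch, not the paper's metric: the 0.1 Hz cutoff is a commonly used (but here assumed) bound for BOLD-range fluctuations.

```python
import numpy as np

def low_freq_power_ratio(tc, tr, cutoff_hz=0.1):
    """Fraction of a component time course's spectral power below
    `cutoff_hz`.

    tc : 1-D component time course.
    tr : repetition time in seconds (sampling interval).
    The 0.1 Hz default is an assumed BOLD-range cutoff.
    """
    tc = np.asarray(tc, dtype=float)
    tc = tc - tc.mean()                       # drop the DC term
    power = np.abs(np.fft.rfft(tc)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(tc.size, d=tr)
    total = power.sum()
    return power[freqs <= cutoff_hz].sum() / total if total > 0 else 0.0
```

A component whose power is concentrated below the cutoff is more plausibly a hemodynamic signal; a ratio near zero flags high-frequency noise or artifact components.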
-
The idea of summarizing the information contained in a large number of variables by a small number of “factors” or “principal components” has been broadly adopted in statistics. This article introduces a generalization of the widely used principal component analysis (PCA) to nonlinear settings, thus providing a new tool for dimension reduction and exploratory data analysis or representation. The distinguishing features of the method include (i) the ability to always deliver truly independent (instead of merely uncorrelated) factors; (ii) the use of optimal transport theory and Brenier maps to obtain a robust and efficient computational algorithm; (iii) the use of a new multivariate additive entropy decomposition to determine the most informative principal nonlinear components, and (iv) formally nesting PCA as a special case for linear Gaussian factor models. We illustrate the method’s effectiveness in an application to excess bond returns prediction from a large number of macro factors. Supplementary materials for this article are available online.
-
Causal discovery witnessed significant progress over the past decades. In particular, many recent causal discovery methods make use of independent, non-Gaussian noise to achieve identifiability of the causal models. Existence of hidden direct common causes, or confounders, generally makes causal discovery more difficult; whenever they are present, the corresponding causal discovery algorithms can be seen as extensions of overcomplete independent component analysis (OICA). However, existing OICA algorithms usually make strong parametric assumptions on the distribution of independent components, which may be violated on real data, leading to sub-optimal or even wrong solutions. In addition, existing OICA algorithms rely on the Expectation Maximization (EM) procedure that requires computationally expensive inference of the posterior distribution of independent components. To tackle these problems, we present a Likelihood-Free Overcomplete ICA algorithm (LFOICA) that estimates the mixing matrix directly by back-propagation without any explicit assumptions on the density function of independent components. Thanks to its computational efficiency, the proposed method makes a number of causal discovery procedures much more practically feasible. For illustrative purposes, we demonstrate the computational efficiency and efficacy of our method in two causal discovery tasks on both synthetic and real data.
-
Chen, D (Ed.)
One of the persistent challenges in multispectral image analysis is the interference caused by dense cloud cover and its resulting shadows, which can significantly obscure surface features. This becomes especially problematic when attempting to monitor surface changes over time using satellite imagery, such as from Landsat-8. In this study, rather than simply masking visual obstructions, we aimed to investigate the role and influence of clouds within the spectral data itself. To achieve this, we employed Independent Component Analysis (ICA), a statistical method capable of decomposing mixed signals into independent source components. By applying ICA to selected Landsat-8 bands and analyzing each component individually, we assessed the extent to which cloud signatures are entangled with surface data. This process revealed that clouds contribute to multiple ICA components simultaneously, indicating their broad spectral influence. With this influence on multiple wavebands, we managed to configure a set of components that could perfectly delineate the extent and location of clouds. Moreover, because Landsat-8 lacks cloud-penetrating wavebands, such as those in the microwave range (e.g., SAR), the surface information beneath dense cloud cover is not captured at all, making it physically impossible for ICA to recover what is not sensed in the first place. Despite these limitations, ICA proved effective in isolating and delineating cloud structures, allowing us to selectively suppress them in reconstructed images. Additionally, the technique successfully highlighted features such as water bodies, vegetation, and color-based land cover differences. These findings suggest that while ICA is a powerful tool for signal separation and cloud-related artifact suppression, its performance is ultimately constrained by the spectral and spatial properties of the input data.
Future improvements could be realized by integrating data from complementary sensors—especially those operating in cloud-penetrating wavelengths—or by using higher spectral resolution imagery with narrower bands.
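The decompose-then-suppress workflow described above can be sketched as follows. This is a generic illustration, not the study's pipeline: the function name is hypothetical, and which component indices correspond to cloud must be chosen by visual inspection, since ICA's output order is arbitrary.

```python
import numpy as np
from sklearn.decomposition import FastICA

def suppress_components(bands, drop, seed=0):
    """Decompose a co-registered band stack with ICA and reconstruct
    it with selected components zeroed (e.g., ones dominated by cloud).

    bands : (n_bands, h, w) stack of co-registered image bands.
    drop  : indices of independent components to suppress.
    """
    n_bands, h, w = bands.shape
    X = bands.reshape(n_bands, -1).T              # pixels x bands
    ica = FastICA(n_components=n_bands, random_state=seed,
                  max_iter=1000)
    S = ica.fit_transform(X)                      # pixels x components
    S[:, list(drop)] = 0.0                        # suppress chosen ICs
    X_rec = ica.inverse_transform(S)              # pixels x bands
    return X_rec.T.reshape(n_bands, h, w)
```

With `drop` empty, the reconstruction returns the original stack (up to numerical precision); zeroing a cloud-dominated component removes its contribution from every band simultaneously, which is consistent with the observation that cloud signatures spread across multiple components and wavebands.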