Alterations in vascular networks, including angiogenesis and capillary regression, play key roles in disease, wound healing, and development. The spatial structures of blood vessels can be captured through imaging, but effective characterization of network architecture requires both metrics for quantification and software to carry out the analysis in a high‐throughput and unbiased fashion. We present Rapid Editable Analysis of Vessel Elements Routine (REAVER), an open‐source tool that researchers can use to analyze high‐resolution 2D fluorescent images of blood vessel networks, and assess its performance compared to alternative image analysis programs. Using a dataset of manually analyzed images from a variety of murine tissues as a ground‐truth, REAVER exhibited high accuracy and precision for all vessel architecture metrics quantified, including vessel length density, vessel area fraction, mean vessel diameter, and branchpoint count, along with the highest pixel‐by‐pixel accuracy for the segmentation of the blood vessel network. In instances where REAVER's automated segmentation is inaccurate, we show that combining manual curation with automated analysis improves the accuracy of vessel architecture metrics. REAVER can be used to quantify differences in blood vessel architectures, making it useful in experiments designed to evaluate the effects of different external perturbations (eg, drugs or disease states).
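The abstract's four architecture metrics can all be derived from a binary segmentation of the vessel network and its one-pixel-wide skeleton. The sketch below is an illustrative reconstruction, not REAVER's actual code: `vessel_metrics` is a hypothetical name, and the naive "three or more skeleton neighbors" branchpoint rule is a common simplification.

```python
# Illustrative sketch (not REAVER's implementation): vessel-architecture
# metrics from a binary segmentation `mask` and its centerline `skeleton`
# (assumed precomputed, e.g. by morphological thinning). Both are nested
# lists of 0/1 values with the same shape.

def vessel_metrics(mask, skeleton, um_per_px=1.0):
    h, w = len(mask), len(mask[0])
    area_px = sum(sum(row) for row in mask)
    skel_px = sum(sum(row) for row in skeleton)
    tissue_area = h * w * um_per_px ** 2

    # Vessel area fraction: segmented vessel pixels over total pixels.
    area_fraction = area_px / (h * w)
    # Vessel length density: centerline length per unit tissue area.
    length_density = skel_px * um_per_px / tissue_area
    # Mean vessel diameter: vessel area divided by centerline length.
    mean_diameter = (area_px / skel_px) * um_per_px if skel_px else 0.0

    # Branchpoints: skeleton pixels with >= 3 skeleton neighbors
    # (a simplification; real tools prune spurious junctions).
    branchpoints = 0
    for y in range(h):
        for x in range(w):
            if not skeleton[y][x]:
                continue
            nbrs = sum(skeleton[j][i]
                       for j in range(max(0, y - 1), min(h, y + 2))
                       for i in range(max(0, x - 1), min(w, x + 2))
                       if (i, j) != (x, y))
            if nbrs >= 3:
                branchpoints += 1
    return area_fraction, length_density, mean_diameter, branchpoints
```

For a single straight one-pixel-wide vessel crossing a 5x5 field, this yields an area fraction of 0.2, a matching length density, a mean diameter of one pixel, and no branchpoints.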
Award ID(s): 2048991
PAR ID: 10456980
Publisher / Repository: Wiley-Blackwell
Journal Name: Microcirculation
Volume: 27
Issue: 5
ISSN: 1073-9688
Sponsoring Org: National Science Foundation

More Like this
-
Purpose: Parkinson’s Disease (PD) is the second most common neurodegenerative disease and is defined by the decay of dopaminergic cells in the substantia nigra. Under the current diagnostic standard, PD is identified only once 80% of dopaminergic cells have decayed. The degradation of these cells has been shown to cause thinning of the retinal walls and retinal microvasculature. This work develops machine learning techniques to provide PD diagnosis using non-invasive fundus eye images. Materials and Methods: Two age- and gender-matched datasets were constructed using data from the UK Biobank (UKB) and data collected at the University of Florida (UF). The first dataset consists of 476 fundus eye images, 238 CN and 238 PD, sourced entirely from the UKB database. The second dataset, UF-UKB, consists of 100 images (28 CN and 72 PD) collected at UF, plus 44 CN images from UKB. A second set of datasets, UKB-Green and UF-UKB-Green, was created using the green color channels to improve vessel segmentation. Vessel segmentation was performed using a U-Net segmentation network. The vessel maps served as inputs to SVM classifying networks. Saliency maps were created to assess areas of interest for the networks. Results: The top performing SVM networks for the UKB and UKB-Green datasets were the sigmoid SVM networks, which achieved accuracies of 0.698 and 0.719, respectively. Meanwhile, the top performing networks for the UF-UKB and UF-UKB-Green datasets were the linear SVM networks, which achieved accuracies of 0.821 and 0.857, respectively. The saliency maps indicate that the different networks focused on different vessel structures, with the most successful networks focusing more on smaller vessels. Conclusion: The results indicate that machine learning networks can classify PD based on retinal vasculature, with the key features being smaller blood vessels. The proposed methods further support the idea that changes in brain physiology can be observed in the eye.
Clinical relevance: Machine learning networks can be applied to clinically available data and still provide accurate predictions. The work illustrates the feasibility of utilizing eye images as a potential method for diagnosing PD, as opposed to the current method of using motor symptoms.
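The green-channel preprocessing mentioned above (the UKB-Green and UF-UKB-Green datasets) amounts to discarding the red and blue components of each fundus photograph, since the green channel typically carries the strongest vessel contrast. A minimal sketch, with an illustrative image representation of nested lists of (R, G, B) tuples:

```python
# Hedged sketch of the green-channel extraction step; the function name and
# image representation (nested lists of (R, G, B) tuples) are assumptions,
# not the study's actual pipeline code.

def green_channel(rgb_image):
    """Return a single-channel image keeping only the green component."""
    return [[pixel[1] for pixel in row] for row in rgb_image]
```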
-
Objective and Impact Statement. We present a fully automated hematological analysis framework based on single-channel (single-wavelength), label-free deep-ultraviolet (UV) microscopy that serves as a fast, cost-effective alternative to conventional hematology analyzers. Introduction. Hematological analysis is essential for the diagnosis and monitoring of several diseases but requires complex systems operated by trained personnel, costly chemical reagents, and lengthy protocols. Label-free techniques eliminate the need for staining or additional preprocessing and can lead to faster analysis and a simpler workflow. In this work, we leverage the unique capabilities of deep-UV microscopy as a label-free, molecular imaging technique to develop a deep learning-based pipeline that enables virtual staining, segmentation, classification, and counting of white blood cells (WBCs) in single-channel images of peripheral blood smears. Methods. We train independent deep networks to virtually stain and segment grayscale images of smears. The segmented images are then used to train a classifier to yield a quantitative five-part WBC differential. Results. Our virtual staining scheme accurately recapitulates the appearance of cells under conventional Giemsa staining, the gold standard in hematology. The trained cellular and nuclear segmentation networks achieve high accuracy, and the classifier can achieve a quantitative five-part differential on unseen test data. Conclusion. This proposed automated hematology analysis framework could greatly simplify and improve current complete blood count and blood smear analysis and lead to the development of a simple, fast, and low-cost, point-of-care hematology analyzer.
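Once each segmented cell has a predicted class, the five-part differential reported above reduces to a percentage breakdown of the five WBC types. A minimal sketch under the assumption that the classifier emits one string label per cell (the function name and label strings are illustrative):

```python
from collections import Counter

# Illustrative sketch (not the paper's code): turning per-cell classifier
# labels into a five-part WBC differential, i.e. the percentage of each
# white-blood-cell type among all classified cells.

WBC_TYPES = ["neutrophil", "lymphocyte", "monocyte", "eosinophil", "basophil"]

def five_part_differential(predicted_labels):
    counts = Counter(predicted_labels)
    total = sum(counts[t] for t in WBC_TYPES)
    # Percentage of each type among all recognized WBCs.
    return {t: 100.0 * counts[t] / total for t in WBC_TYPES}
```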
-
Abstract Sickle cell disease (SCD) is a major public health priority throughout much of the world, affecting millions of people. In many regions, particularly those in resource-limited settings, SCD is not consistently diagnosed. In Africa, where the majority of SCD patients reside, more than 50% of the 0.2–0.3 million children born with SCD each year will die from it; many of these deaths are in fact preventable with correct diagnosis and treatment. Here, we present a deep learning framework which can perform automatic screening of sickle cells in blood smears using a smartphone microscope. This framework uses two distinct, complementary deep neural networks. The first neural network enhances and standardizes the blood smear images captured by the smartphone microscope, spatially and spectrally matching the image quality of a laboratory-grade benchtop microscope. The second network acts on the output of the first image enhancement neural network and is used to perform the semantic segmentation between healthy and sickle cells within a blood smear. These segmented images are then used to rapidly determine the SCD diagnosis per patient. We blindly tested this mobile sickle cell detection method using blood smears from 96 unique patients (including 32 SCD patients) that were imaged by our smartphone microscope, and achieved ~98% accuracy, with an area-under-the-curve of 0.998. With its high accuracy, this mobile and cost-effective method has the potential to be used as a screening tool for SCD and other blood cell disorders in resource-limited settings.
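The final per-patient step described above, going from segmented healthy/sickle cells to a screening decision, can be sketched as a simple threshold on the sickle-cell fraction. The 2% default below is an illustrative assumption, not the paper's calibrated decision rule:

```python
# Hypothetical sketch of the screening decision: flag a patient when the
# fraction of segmented cells classified as sickle exceeds a threshold.
# The threshold value is an assumption for illustration only.

def screen_patient(n_sickle, n_total, threshold=0.02):
    """Return True if the blood smear should be flagged for SCD follow-up."""
    if n_total == 0:
        raise ValueError("no cells segmented")
    return (n_sickle / n_total) >= threshold
```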
-
Abstract One‐dimensional (1D) cardiovascular models offer a non‐invasive method to answer medical questions, including predictions of wave‐reflection, shear stress, functional flow reserve, vascular resistance and compliance. This model type can predict patient‐specific outcomes by solving 1D fluid dynamics equations in geometric networks extracted from medical images. However, the inherent uncertainty in in vivo imaging introduces variability in network size and vessel dimensions, affecting haemodynamic predictions. Understanding the influence of variation in image‐derived properties is essential to assess the fidelity of model predictions. Numerous programs exist to render three‐dimensional surfaces and construct vessel centrelines. Still, there is no exact way to generate vascular trees from the centrelines while accounting for uncertainty in data. This study introduces an innovative framework employing statistical change point analysis to generate labelled trees that encode vessel dimensions and their associated uncertainty from medical images. To test this framework, we explore the impact of uncertainty in 1D haemodynamic predictions in a systemic and pulmonary arterial network. Simulations explore haemodynamic variations resulting from changes in vessel dimensions and segmentation; the latter is achieved by analysing multiple segmentations of the same images. Results demonstrate the importance of accurately defining vessel radii and lengths when generating high‐fidelity patient‐specific haemodynamics models.
Key points:
- This study introduces novel algorithms for generating labelled directed trees from medical images, focusing on accurate junction node placement and radius extraction using change points to provide haemodynamic predictions with uncertainty within expected measurement error.
- Geometric features, such as vessel dimension (length and radius) and network size, significantly impact pressure and flow predictions in both pulmonary and aortic arterial networks.
- Standardizing networks to a consistent number of vessels is crucial for meaningful comparisons and decreases haemodynamic uncertainty.
- Change points are valuable for understanding structural transitions in vascular data, providing an automated and efficient way to detect shifts in vessel characteristics and ensure reliable extraction of representative vessel radii.
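The change-point idea in the key points above can be illustrated with a toy single-change-point detector on a radius profile measured along a centreline: choose the split that minimises the summed squared error of a two-piece constant fit. This is a minimal sketch of the concept; the study's actual framework is more general.

```python
# Toy change-point detection on a vessel radius profile (list of floats
# sampled along a centreline). Finds the single split index minimising the
# squared error of a piecewise-constant (two-segment) fit. Illustrative
# only; real pipelines handle multiple change points and noise models.

def sse(xs):
    """Sum of squared errors of a segment about its mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def one_change_point(radii):
    """Return the index where the radius profile best splits in two."""
    best_k, best_cost = None, float("inf")
    for k in range(1, len(radii)):
        cost = sse(radii[:k]) + sse(radii[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

On a profile that steps from a parent radius of 2.0 down to a daughter radius of 1.0, the detector recovers the transition index exactly.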
-
This research introduces an advanced approach to automate the segmentation and quantification of nuclei in fluorescent images through deep learning techniques. Overcoming inherent challenges such as variations in pixel intensities, noisy boundaries, and overlapping edges, our devised pipeline integrates the U-Net architecture with state-of-the-art CNN models, such as EfficientNet. This fusion maintains the efficiency of U-Net while harnessing the superior capabilities of EfficientNet. Crucially, we exclusively utilize high-quality confocal images generated in-house for model training, purposefully avoiding the pitfalls associated with publicly available synthetic data of lower quality. Our training dataset encompasses over 3000 nuclei boundaries, which are meticulously annotated manually to ensure precision and accuracy in the learning process. Additionally, post-processing is implemented to refine segmentation results, providing morphological quantification for each segmented nucleus. Through comprehensive evaluation, our model achieves notable performance metrics, attaining an F1-score of 87% and an Intersection over Union (IoU) value of 80%. Furthermore, its robustness is demonstrated across diverse datasets sourced from various origins, indicative of its broad applicability in automating nucleus extraction and quantification from fluorescent images. This innovative methodology holds significant promise for advancing research efforts across multiple domains by facilitating a deeper understanding of underlying biological processes through automated analysis of fluorescent imagery.
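The F1-score and IoU reported above are standard overlap metrics between a predicted and a ground-truth binary mask. A minimal sketch, assuming masks are given as flat 0/1 lists (the function name is illustrative, not from the paper):

```python
# Hedged sketch of the two evaluation metrics: Intersection over Union and
# F1 (equivalently the Dice coefficient) for binary segmentation masks
# represented as flat 0/1 lists of equal length.

def iou_and_f1(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)  # Dice coefficient
    return iou, f1
```

Note that F1/Dice is always at least as large as IoU for the same pair of masks, which is why the paper's 87% F1 and 80% IoU are mutually consistent.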