Abstract: Alterations in vascular networks, including angiogenesis and capillary regression, play key roles in disease, wound healing, and development. The spatial structures of blood vessels can be captured through imaging, but effective characterization of network architecture requires both metrics for quantification and software to carry out the analysis in a high‐throughput and unbiased fashion. We present Rapid Editable Analysis of Vessel Elements Routine (REAVER), an open‐source tool that researchers can use to analyze high‐resolution 2D fluorescent images of blood vessel networks, and assess its performance compared to alternative image analysis programs. Using a dataset of manually analyzed images from a variety of murine tissues as ground truth, REAVER exhibited high accuracy and precision for all vessel architecture metrics quantified, including vessel length density, vessel area fraction, mean vessel diameter, and branchpoint count, along with the highest pixel‐by‐pixel accuracy for the segmentation of the blood vessel network. In instances where REAVER's automated segmentation is inaccurate, we show that combining manual curation with automated analysis improves the accuracy of vessel architecture metrics. REAVER can be used to quantify differences in blood vessel architectures, making it useful in experiments designed to evaluate the effects of different external perturbations (e.g., drugs or disease states).
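To make the architecture metrics concrete, the minimal sketch below shows how vessel area fraction, length density, mean diameter, and branchpoint count could be estimated from an already-segmented binary vessel mask, assuming Python with NumPy, SciPy, and scikit-image. The function name and the pixel-size parameter are illustrative; this is not REAVER's own implementation.

```python
# Hedged sketch: vessel architecture metrics from a binary mask.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_metrics(mask: np.ndarray, um_per_px: float = 1.0) -> dict:
    """mask: 2D boolean array where True marks vessel pixels."""
    mask = mask.astype(bool)
    area_fraction = mask.mean()                     # vessel area fraction

    skel = skeletonize(mask)                        # 1-px-wide centerlines
    length_um = skel.sum() * um_per_px              # approximate total vessel length
    length_density = length_um / (mask.size * um_per_px ** 2)   # length per tissue area

    # Mean diameter: twice the distance from the centerline to the nearest background pixel.
    dist = ndimage.distance_transform_edt(mask) * um_per_px
    mean_diameter = 2.0 * dist[skel].mean() if skel.any() else 0.0

    # Branchpoints: skeleton pixels whose 3x3 neighborhood holds the center plus >= 3 neighbors.
    neighbors = ndimage.convolve(skel.astype(int), np.ones((3, 3)), mode="constant")
    branchpoints = int(np.sum(skel & (neighbors >= 4)))

    return dict(area_fraction=float(area_fraction),
                length_density=float(length_density),
                mean_diameter_um=float(mean_diameter),
                branchpoint_count=branchpoints)
```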
Quantification of variegated Drosophila ommatidia with high-resolution image analysis and machine learning
A longstanding challenge in biology is accurately analyzing images acquired using microscopy. Recently, machine learning (ML) approaches have facilitated detailed quantification of images that were refractory to traditional computational methods. Here, we detail a method for measuring pigments in the complex-mosaic adult Drosophila eye using high-resolution photographs and the pixel classifier ilastik [1]. We compare our results to analyses focused on pigment biochemistry and subjective interpretation, demonstrating general overlap, while highlighting the inverse relationship between accuracy and high-throughput capability of each approach. Notably, no coding experience is necessary for image analysis and pigment quantification. When considering time, resolution, and accuracy, our view is that ML-based image analysis is the preferred method.
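As a rough illustration of the pixel-classification idea (not the ilastik workflow itself, which is interactive and uses a richer filter-based feature set), the sketch below trains a random forest on a few hand-labeled pixels and reports the fraction of the image called pigment. It assumes Python with NumPy and scikit-learn, and every name in it is hypothetical.

```python
# Hedged sketch: a scriptable stand-in for pixel classification and pigment quantification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pigment_fraction(image: np.ndarray, labels: np.ndarray) -> float:
    """image: HxWx3 RGB floats; labels: HxW ints, 0=unlabeled, 1=pigment, 2=background."""
    X = image.reshape(-1, 3)
    y = labels.reshape(-1)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[y > 0], y[y > 0])            # train only on the sparsely labeled pixels
    pred = clf.predict(X).reshape(labels.shape)
    return float((pred == 1).mean())       # fraction of pixels classified as pigment

# Toy example standing in for a high-resolution eye photograph:
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
lbl = np.zeros((64, 64), dtype=int)
lbl[:5, :5], lbl[-5:, -5:] = 1, 2          # a few hand-labeled strokes per class
print(pigment_fraction(img, lbl))
```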
- Award ID(s): 2145195
- PAR ID: 10583196
- Publisher / Repository: Oxford Academic
- Date Published:
- Journal Name: Biology Methods and Protocols
- Volume: 10
- Issue: 1
- ISSN: 2396-8923
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
With the rise of tiny IoT devices powered by machine learning (ML), many researchers have directed their focus toward compressing models to fit on tiny edge devices. Recent works have achieved remarkable success in compressing ML models for object detection and image classification on microcontrollers with small memory, e.g., 512 kB SRAM. However, there remain many challenges prohibiting the deployment of ML systems that require high-resolution images. Due to fundamental limits in memory capacity for tiny IoT devices, it may be physically impossible to store large images without external hardware. To this end, we propose a high-resolution image scaling system for edge ML, called HiRISE, which is equipped with selective region-of-interest (ROI) capability leveraging analog in-sensor image scaling. Our methodology not only significantly reduces the peak memory requirements, but also achieves up to 17.7× reduction in data transfer and energy consumption.
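To make the memory argument tangible, here is a toy digital sketch of cropping a region of interest and block-averaging it down before it leaves the sensor, assuming Python with NumPy and made-up frame and ROI sizes. HiRISE performs this scaling in the analog domain inside the sensor, so this is only a back-of-the-envelope illustration of the data reduction.

```python
# Hedged sketch: selective ROI scaling shrinks what must be stored or transferred.
import numpy as np

def roi_and_scale(frame: np.ndarray, roi: tuple, factor: int) -> np.ndarray:
    """Crop (y0, y1, x0, x1) and downscale by averaging factor x factor tiles."""
    y0, y1, x0, x1 = roi
    crop = frame[y0:y1, x0:x1]
    h, w = crop.shape[0] // factor, crop.shape[1] // factor
    crop = crop[: h * factor, : w * factor]
    return crop.reshape(h, factor, w, factor).mean(axis=(1, 3))

frame = np.random.randint(0, 256, (1920, 2560), dtype=np.uint16)   # high-resolution capture
small = roi_and_scale(frame, roi=(400, 912, 600, 1112), factor=4)  # 512x512 ROI -> 128x128
print(frame.nbytes, "->", small.nbytes, "bytes")                   # roughly 9.8 MB -> 128 kB
```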
Messinger, David W.; Velez-Reyes, Miguel (Eds.): Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with a high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high spatial resolution multispectral image with a lower spatial resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multiple-step process that transitions from low to high spatial resolution using an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with increased spatial resolution from 8 m to 1 m. The obtained fused images using the two models are evaluated in terms of classification accuracy on several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and CNN classification. The classification results show better performance in both overall and average accuracy for the images generated with the multi-scale LSTM fusion over the CNN fusion.
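The two figures of merit named here, overall accuracy and average (per-class) accuracy, are easy to conflate. The short sketch below, assuming Python with NumPy and scikit-learn and a made-up label vector, shows how each would be computed from a confusion matrix; the fusion models and datasets themselves are not reproduced.

```python
# Hedged sketch: overall vs. average (per-class) classification accuracy.
import numpy as np
from sklearn.metrics import confusion_matrix

def overall_and_average_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    overall = np.trace(cm) / cm.sum()           # correctly labeled pixels / all pixels
    per_class = np.diag(cm) / cm.sum(axis=1)    # recall of each class
    return overall, per_class.mean()            # average accuracy = mean per-class recall

y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 2, 2, 2, 0, 2]
print(overall_and_average_accuracy(y_true, y_pred))   # (0.8, ~0.82)
```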
Optical coherence tomography (OCT) imaging enables high-resolution visualization of sub-surface tissue microstructures. However, OCT image analysis using deep learning is hampered by limited diverse training data to meet performance requirements and by high inference latency for real-time applications. To address these challenges, we developed Octascope, a lightweight, domain-specific convolutional neural network (CNN)-based model designed for OCT image analysis. Octascope was pre-trained using a curriculum learning approach, which involves sequential training, first on natural images (ImageNet), then on OCT images from retinal, abdominal, and renal tissues, to progressively acquire transferable knowledge. This multi-domain pre-training enables Octascope to generalize across varied tissue types. In two downstream tasks, Octascope demonstrated notable improvements in predictive accuracy compared to alternative approaches. In the epidural tissue detection task, our method surpassed single-task learning with fine-tuning by 9.13% and OCT-specific transfer learning by 5.95% in accuracy. Octascope outperformed VGG16 and ResNet50 by 5.36% and 6.66%, respectively, in a retinal diagnosis task. In comparison to a Transformer-based OCT foundation model, RETFound, Octascope delivered 2 to 4.4 times faster inference speed with slightly better predictive accuracy in both downstream tasks. Octascope represents a significant advancement for OCT image analysis by providing an effective balance between computational efficiency and diagnostic accuracy for real-time clinical applications.
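The curriculum pre-training idea, fine-tuning a single backbone over an ordered sequence of domains so that weights carry over from stage to stage, can be sketched generically as below. It assumes Python with PyTorch, a toy model, and random tensors standing in for the ImageNet and OCT datasets; none of it reflects Octascope's actual architecture or training recipe.

```python
# Hedged sketch: curriculum-style sequential pre-training across ordered domains.
import torch
from torch import nn

backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
opt = torch.optim.Adam(backbone.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Ordered curriculum: each stage is (name, images, labels); toy data only.
stages = [(name, torch.randn(16, 1, 32, 32), torch.randint(0, 2, (16,)))
          for name in ("natural images", "retinal OCT", "abdominal OCT", "renal OCT")]

for name, x, y in stages:                 # weights carry over between stages
    for _ in range(5):
        opt.zero_grad()
        loss = loss_fn(backbone(x), y)
        loss.backward()
        opt.step()
    print(f"{name}: loss {loss.item():.3f}")
```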
Over the last century, direct human modification has been a major driver of coastal wetland degradation, resulting in widespread losses of wetland vegetation and a transition to open water. High-resolution satellite imagery is widely available for monitoring changes in present-day wetlands; however, understanding the rates of wetland vegetation loss over the last century depends on the use of historical panchromatic aerial photographs. In this study, we compared manual image thresholding and an automated machine learning (ML) method in detecting wetland vegetation and open water from historical panchromatic photographs in the Florida Everglades, a subtropical wetland landscape. We compared the same classes delineated in the historical photographs to 2012 multispectral satellite imagery and assessed the accuracy of detecting vegetation loss over a 72-year timescale (1940 to 2012) for a range of minimum mapping units (MMUs). Overall, classification accuracies were >95% across the historical photographs and satellite imagery, regardless of the classification method and MMUs. We detected a 2.3–2.7 ha increase in open water pixels across all change maps (overall accuracies >95%). Our analysis demonstrated that ML classification methods can be used to delineate wetland vegetation from open water in low-quality, panchromatic aerial photographs and that a combination of images with different resolutions is compatible with change detection. The study also highlights how evaluating a range of MMUs can identify the effect of scale on detection accuracy and change class estimates as well as in determining the most relevant scale of analysis for the process of interest.
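As a simplified companion to the thresholding-versus-ML comparison, the sketch below applies an Otsu threshold to two panchromatic images and tallies the gain in open water in hectares, assuming Python with NumPy and scikit-image and synthetic images in place of the aerial photographs. The study's ML classification and minimum-mapping-unit analysis are not reproduced; this only illustrates the change-area bookkeeping.

```python
# Hedged sketch: threshold-based open-water mapping and a two-date change estimate.
import numpy as np
from skimage.filters import threshold_otsu

def open_water_mask(panchromatic: np.ndarray) -> np.ndarray:
    """Pixels darker than an Otsu threshold are treated as open water."""
    return panchromatic < threshold_otsu(panchromatic)

def open_water_change_ha(img_t0, img_t1, m_per_px: float) -> float:
    gained = open_water_mask(img_t1) & ~open_water_mask(img_t0)
    return gained.sum() * (m_per_px ** 2) / 10_000.0   # square meters -> hectares

rng = np.random.default_rng(1)
t0 = rng.normal(120, 20, (500, 500))       # stand-in for a 1940 photograph
t1 = t0.copy()
t1[200:300, 200:300] -= 60                 # a patch darkens (becomes open water)
print(round(open_water_change_ha(t0, t1, m_per_px=1.0), 2), "ha gained")
```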