
This content will become publicly available on January 1, 2026

Title: Quantification of variegated Drosophila ommatidia with high-resolution image analysis and machine learning
A longstanding challenge in biology is accurately analyzing images acquired using microscopy. Recently, machine learning (ML) approaches have facilitated detailed quantification of images that were refractory to traditional computational methods. Here, we detail a method for measuring pigments in the complex-mosaic adult Drosophila eye using high-resolution photographs and the pixel classifier ilastik [1]. We compare our results to analyses based on pigment biochemistry and on subjective interpretation, demonstrating general overlap while highlighting the inverse relationship between the accuracy and the high-throughput capability of each approach. Notably, no coding experience is necessary for image analysis and pigment quantification. When considering time, resolution, and accuracy, our view is that ML-based image analysis is the preferred method.
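The pixel-classifier workflow the abstract describes (ilastik) trains a random forest on per-pixel filter responses from sparse user annotations. A minimal sketch of the same idea, assuming scikit-learn and SciPy and using a synthetic stand-in image rather than an actual eye photograph; this is not the authors' pipeline, only an illustration of the approach:

```python
# Sketch of ilastik-style pixel classification: a random forest trained on
# per-pixel filter responses, then used to quantify pigmented area.
# The image here is synthetic stand-in data, not a real eye photograph.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic grayscale "eye" photo: a dark pigmented patch on a bright background.
img = np.full((64, 64), 0.8) + rng.normal(0, 0.05, (64, 64))
img[16:48, 16:48] -= 0.5  # pigmented region

# Per-pixel features: raw intensity plus Gaussian smoothing at several scales
# (a simplified stand-in for ilastik's filter bank).
feats = np.stack([img] + [gaussian_filter(img, s) for s in (1, 2, 4)], axis=-1)
X = feats.reshape(-1, feats.shape[-1])

# Sparse "brush" labels, as a user would paint them: 1 = pigment,
# 2 = background, 0 = unlabeled.
labels = np.zeros((64, 64), dtype=int)
labels[30:34, 30:34] = 1
labels[2:6, 2:6] = 2
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[y > 0], y[y > 0])  # train only on the labeled pixels

# Classify every pixel, then quantify pigment as an area fraction.
pred = clf.predict(X).reshape(img.shape)
pigment_fraction = (pred == 1).mean()
print(f"pigmented area fraction: {pigment_fraction:.2f}")
```

The key property, matching the abstract's point about accessibility, is that the user supplies only a few labeled strokes; the classifier generalizes them to every pixel in the image.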
Award ID(s):
2145195
PAR ID:
10583196
Author(s) / Creator(s):
; ;
Publisher / Repository:
Oxford Academic
Date Published:
Journal Name:
Biology Methods and Protocols
Volume:
10
Issue:
1
ISSN:
2396-8923
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Alterations in vascular networks, including angiogenesis and capillary regression, play key roles in disease, wound healing, and development. The spatial structures of blood vessels can be captured through imaging, but effective characterization of network architecture requires both metrics for quantification and software to carry out the analysis in a high-throughput and unbiased fashion. We present the Rapid Editable Analysis of Vessel Elements Routine (REAVER), an open-source tool that researchers can use to analyze high-resolution 2D fluorescent images of blood vessel networks, and assess its performance compared to alternative image analysis programs. Using a dataset of manually analyzed images from a variety of murine tissues as ground truth, REAVER exhibited high accuracy and precision for all vessel architecture metrics quantified, including vessel length density, vessel area fraction, mean vessel diameter, and branchpoint count, along with the highest pixel-by-pixel accuracy for segmentation of the blood vessel network. In instances where REAVER's automated segmentation is inaccurate, we show that combining manual curation with automated analysis improves the accuracy of vessel architecture metrics. REAVER can be used to quantify differences in blood vessel architectures, making it useful in experiments designed to evaluate the effects of different external perturbations (e.g., drugs or disease states).
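Two of the vessel architecture metrics named above (vessel area fraction and vessel length density) can be computed directly from a binary segmentation mask; length is conventionally measured along a one-pixel-wide skeleton. A simplified sketch assuming scikit-image, not REAVER's actual implementation:

```python
# Vessel area fraction and vessel length density from a binary mask.
# Simplified illustration of the metrics, not REAVER's implementation.
import numpy as np
from skimage.morphology import skeletonize

# Toy binary mask: one horizontal and one vertical "vessel", 5 px wide.
mask = np.zeros((100, 100), dtype=bool)
mask[48:53, 10:90] = True
mask[10:90, 48:53] = True

um_per_px = 1.0  # assumed pixel size in micrometers

# Area fraction: vessel pixels over total pixels.
area_fraction = mask.mean()

# Length from the skeleton pixel count (diagonal steps slightly undercount);
# density = total length / image area.
skel = skeletonize(mask)
length_um = skel.sum() * um_per_px
image_area = mask.size * um_per_px**2
length_density = length_um / image_area

print(f"vessel area fraction: {area_fraction:.4f}")
print(f"vessel length density: {length_density:.4f} um/um^2")
```

Mean vessel diameter and branchpoint count follow from the same two inputs: diameter from the distance transform sampled along the skeleton, and branchpoints from skeleton pixels with three or more neighbors.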
  2. With the rise of tiny IoT devices powered by machine learning (ML), many researchers have directed their focus toward compressing models to fit on tiny edge devices. Recent works have achieved remarkable success in compressing ML models for object detection and image classification on microcontrollers with small memory, e.g., 512kB SRAM. However, there remain many challenges prohibiting the deployment of ML systems that require high-resolution images. Due to fundamental limits in memory capacity for tiny IoT devices, it may be physically impossible to store large images without external hardware. To this end, we propose a high-resolution image scaling system for edge ML, called HiRISE, which is equipped with selective region-of-interest (ROI) capability leveraging analog in-sensor image scaling. Our methodology not only significantly reduces the peak memory requirements, but also achieves up to 17.7× reduction in data transfer and energy consumption. 
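The selective region-of-interest idea described above can be sketched in software: downsample the full frame aggressively but keep a chosen ROI at native resolution, then compare the resulting data volume to the raw frame. This is purely illustrative (HiRISE performs the scaling in analog, in-sensor, and the ROI location here is hypothetical):

```python
# Software sketch of selective-ROI image scaling: coarse global view plus
# a full-resolution patch, versus transferring the whole raw frame.
import numpy as np

frame = np.zeros((480, 640), dtype=np.uint8)  # raw high-resolution frame
roi = (slice(100, 200), slice(200, 300))      # hypothetical ROI location

scale = 8
background = frame[::scale, ::scale]  # coarse global view (1/64 the pixels)
roi_patch = frame[roi]                # ROI kept at native resolution

raw_bytes = frame.size                        # whole frame: 307,200 bytes
sent_bytes = background.size + roi_patch.size # scaled view + ROI patch
print(f"data transfer reduction: {raw_bytes / sent_bytes:.1f}x")
```

The reduction factor depends on the scale factor and ROI size; the point is that only the ROI pays the full-resolution memory and transfer cost.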
  3. Messinger, David W.; Velez-Reyes, Miguel (Ed.)
    Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a hyperspectral image with high spatial and spectral resolution. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion in multiple steps that transition from low to high spatial resolution, using an intermediate step that reduces spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN). We present fused images from four multi-source datasets with different spatial and spectral resolutions. Both models increase spatial resolution from 8 m to 1 m. The fused images from the two models are evaluated in terms of classification accuracy with several classifiers: minimum distance, support vector machines, class-dependent sparse representation, and CNN classification. The classification results show better overall and average accuracy for the images generated with the multi-scale LSTM fusion than with the CNN fusion.
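The simplest of the evaluation classifiers named above is the minimum-distance classifier, which assigns each pixel to the class whose mean spectrum is nearest. A self-contained sketch with synthetic spectra (not the paper's datasets or its exact evaluation protocol):

```python
# Minimum-distance classification of hyperspectral pixels: each test
# spectrum is assigned to the class with the nearest mean spectrum.
# Synthetic two-class data for illustration only.
import numpy as np

rng = np.random.default_rng(2)
bands, n = 8, 200

# Two synthetic classes with distinct mean spectra.
mean_a = np.linspace(0.2, 0.8, bands)
mean_b = np.linspace(0.8, 0.2, bands)
train_a = mean_a + rng.normal(0, 0.05, (n, bands))
train_b = mean_b + rng.normal(0, 0.05, (n, bands))

centroids = np.stack([train_a.mean(0), train_b.mean(0)])  # class means

test = np.vstack([mean_a + rng.normal(0, 0.05, (50, bands)),
                  mean_b + rng.normal(0, 0.05, (50, bands))])
truth = np.array([0] * 50 + [1] * 50)

# Euclidean distance from every test pixel to every class mean.
d = np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=-1)
pred = d.argmin(axis=1)
accuracy = (pred == truth).mean()
print(f"overall accuracy: {accuracy:.2f}")
```

Because fusion sharpens spatial detail while preserving spectra, even this spectral-only baseline benefits: cleaner per-pixel spectra in the fused product translate directly into higher classification accuracy.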
  4. Over the last century, direct human modification has been a major driver of coastal wetland degradation, resulting in widespread losses of wetland vegetation and a transition to open water. High-resolution satellite imagery is widely available for monitoring changes in present-day wetlands; however, understanding the rates of wetland vegetation loss over the last century depends on the use of historical panchromatic aerial photographs. In this study, we compared manual image thresholding and an automated machine learning (ML) method in detecting wetland vegetation and open water from historical panchromatic photographs in the Florida Everglades, a subtropical wetland landscape. We compared the same classes delineated in the historical photographs to 2012 multispectral satellite imagery and assessed the accuracy of detecting vegetation loss over a 72-year timescale (1940 to 2012) for a range of minimum mapping units (MMUs). Overall, classification accuracies were >95% across the historical photographs and satellite imagery, regardless of the classification method and MMU. We detected a 2.3–2.7 ha increase in open-water pixels across all change maps (overall accuracies >95%). Our analysis demonstrated that ML classification methods can delineate wetland vegetation from open water in low-quality panchromatic aerial photographs and that a combination of images with different resolutions is compatible with change detection. The study also highlights how evaluating a range of MMUs can identify the effect of scale on detection accuracy and change-class estimates, as well as determine the most relevant scale of analysis for the process of interest.
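The thresholding baseline compared above separates dark open water from brighter vegetation in a single panchromatic band; Otsu's method is a common automatic choice for picking the cutoff. A sketch with synthetic data, assuming scikit-image (the study's exact thresholding procedure is not specified here):

```python
# Image thresholding for a panchromatic scene: Otsu's method picks the
# intensity cutoff separating dark open water from brighter vegetation.
# Synthetic stand-in image, not Everglades data.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)

# Bright vegetated background with a dark open-water patch.
img = rng.normal(0.7, 0.05, (128, 128))
img[40:80, 40:80] = rng.normal(0.25, 0.05, (40, 40))

t = threshold_otsu(img)       # automatic intensity threshold
open_water = img < t          # binary open-water map
print(f"open-water fraction: {open_water.mean():.3f}")
```

An ML pixel classifier replaces the single global cutoff with a decision learned from labeled examples, which is what lets it cope with the uneven exposure and film grain of historical photographs.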
  5. Mid-infrared Spectroscopic Imaging (MIRSI) provides spatially-resolved molecular specificity by measuring wavelength-dependent mid-infrared absorbance. Infrared microscopes use large numerical aperture objectives to obtain high-resolution images of heterogeneous samples. However, the optical resolution is fundamentally diffraction-limited, and therefore wavelength-dependent. This significantly limits resolution in infrared microscopy, which relies on long wavelengths (2.5 μm to 12.5 μm) for molecular specificity. The resolution is particularly restrictive in biomedical and materials applications, where molecular information is encoded in the fingerprint region (6 μm to 12 μm), limiting the maximum resolving power to between 3 μm and 6 μm. We present an unsupervised curvelet-based image fusion method that overcomes limitations in spatial resolution by augmenting infrared images with label-free visible microscopy. We demonstrate the effectiveness of this approach by fusing images of breast and ovarian tumor biopsies acquired using both infrared and dark-field microscopy. The proposed fusion algorithm generates a hyperspectral dataset that has both high spatial resolution and good molecular contrast. We validate this technique using multiple standard approaches and through comparisons to super-resolved experimentally measured photothermal spectroscopic images. We also propose a novel comparison method based on tissue classification accuracy. 
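The 3 μm to 6 μm resolving-power range quoted above follows from the wavelength-dependent diffraction limit, d = λ / (2·NA). A small worked check, assuming NA ≈ 1 (the abstract says "large numerical aperture" but does not state a value, so this is an assumption chosen to reproduce the quoted range):

```python
# Abbe diffraction limit d = lambda / (2 * NA), evaluated at the edges of
# the fingerprint region (6-12 um). NA = 1.0 is an assumed value.
def abbe_limit_um(wavelength_um, na=1.0):
    return wavelength_um / (2 * na)

for lam in (6.0, 12.0):
    print(f"lambda = {lam:4.1f} um -> resolution limit {abbe_limit_um(lam):.1f} um")
```

This is why fusing with visible-light microscopy helps: visible wavelengths are roughly an order of magnitude shorter, so the fused product can carry spatial detail the infrared channel cannot resolve on its own.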