Title: StyPath: Style-Transfer Data Augmentation for Robust Histology Image Classification
The classification of Antibody Mediated Rejection (AMR) in kidney transplants remains challenging even for experienced nephropathologists, partly because histological tissue-stain analysis is often characterized by low inter-observer agreement and poor reproducibility. One implicated cause of inter-observer disagreement is the variability of tissue-stain quality between (and within) pathology labs, coupled with the gradual fading of archival sections. Variations in stain colors and intensities can make tissue evaluation difficult for pathologists, ultimately affecting their ability to describe relevant morphological features. Accurately predicting AMR status from kidney histology images is crucial for improving patient treatment and care. We propose a novel pipeline for building robust deep neural networks for AMR classification based on StyPath, a histological data augmentation technique that leverages a lightweight style-transfer algorithm to reduce sample-specific bias. Each image was generated in 1.84 ± 0.03 s using a single GTX TITAN V GPU and PyTorch, making the technique faster than other popular histological data augmentation methods. We evaluated our model using a Monte Carlo (MC) estimate of Bayesian performance and generated an epistemic measure of uncertainty to compare the baseline and StyPath-augmented models. We also generated Grad-CAM representations of the results, which were assessed by an experienced nephropathologist; we used this qualitative analysis to elucidate the assumptions made by each model. Our results imply that our style-transfer augmentation technique improves histological classification performance (reducing error from 14.8% to 11.5%) and generalization ability.
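The abstract describes the style-transfer step only as "lightweight" and does not specify the algorithm. As a hedged illustration of the general idea (transferring a target slide's stain statistics onto a content image), here is an adaptive-instance-normalisation (AdaIN) style sketch in NumPy; the function name and the choice of AdaIN are assumptions, not the authors' implementation:

```python
import numpy as np

def adain_transfer(content, style, eps=1e-5):
    """Hypothetical AdaIN-style transfer sketch: shift the content image's
    per-channel mean/std to match those of the style (stain) image.
    StyPath's actual lightweight algorithm is not specified in the abstract."""
    out = np.empty_like(content, dtype=float)
    for c in range(content.shape[-1]):  # iterate over colour channels
        c_mu, c_sd = content[..., c].mean(), content[..., c].std()
        s_mu, s_sd = style[..., c].mean(), style[..., c].std()
        out[..., c] = (content[..., c] - c_mu) / (c_sd + eps) * s_sd + s_mu
    return out
```

Augmenting a training set this way exposes the classifier to many stain appearances of the same tissue, which is the stated mechanism for reducing sample-specific bias.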
Award ID(s):
1910973
NSF-PAR ID:
10282856
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on Medical Image Computing and Computer Assisted Intervention
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract

Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labour-intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a “digital staining matrix”, which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones’ silver stain, and Masson’s trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
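The two-source input described above can be sketched as a simple tensor concatenation. The helper names, the one-hot/fractional encoding, and the channel-last layout below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def build_network_input(autofluor, stain_matrix):
    """Concatenate label-free autofluorescence channels with a per-pixel
    digital staining matrix (one map per target stain) along the channel
    axis, forming the two-source input the network is conditioned on."""
    assert autofluor.shape[:2] == stain_matrix.shape[:2], "spatial sizes must match"
    return np.concatenate([autofluor, stain_matrix], axis=-1)

def blended_stain_matrix(shape, weights):
    """Fill a staining matrix with fractional stain weights (e.g. 0.5 H&E +
    0.5 trichrome) to digitally blend existing stains into a new one."""
    h, w = shape
    m = np.empty((h, w, len(weights)))
    m[:] = np.asarray(weights)
    return m
```

Setting a different weight vector per region of the matrix is what yields the user-defined micro-structured stain map.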

     
Multiplex immunofluorescence (MxIF) is an advanced molecular imaging technique that can simultaneously provide biologists with multiple (i.e., more than 20) molecular markers on a single histological tissue section. Unfortunately, due to imaging restrictions, the more routinely used hematoxylin and eosin (H&E) stain is typically unavailable with MxIF on the same tissue section. As biological H&E staining is not feasible, previous efforts have been made to obtain H&E whole slide images (WSIs) from MxIF via deep-learning-empowered virtual staining. However, the tiling effect is a long-standing problem in high-resolution WSI-wise synthesis, and MxIF-to-H&E synthesis is no exception. Limited by computational resources, cross-stain image synthesis is typically performed at the patch level; discontinuous intensities can therefore be visually identified along the patch boundaries when the individual patches are assembled back into a WSI. In this work, we propose a deep-learning-based unpaired high-resolution image synthesis method to obtain virtual H&E WSIs from MxIF WSIs (each with 27 markers/stains) with reduced tiling effects. Briefly, we first extend the CycleGAN framework by adding simultaneous nuclei and mucin segmentation supervision as spatial constraints. Then, we introduce a random sliding window shifting strategy during the optimized inference stage to alleviate the tiling effects. The validation results show that our spatially constrained synthesis method achieves a 56% performance gain on the downstream cell segmentation task. The proposed inference method reduces the tiling effects while using 50% fewer computational resources, without compromising performance. The proposed random sliding window inference method is a plug-and-play module that can be generalized and used for other high-resolution WSI image synthesis applications. The source code for our proposed model is available at https://github.com/MASILab/RandomWalkSlidingWindow.git
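The random sliding-window shifting idea can be sketched as follows: each inference pass tiles the WSI on a randomly offset grid, so patch seams fall in different places, and averaging the passes suppresses the tiling effect. This is a hedged NumPy illustration of the principle, not the released implementation (see the linked repository for the real code):

```python
import numpy as np

def shifted_window_inference(wsi, infer_patch, patch=64, passes=4, seed=0):
    """Average several tiled inference passes, each on a randomly shifted
    grid, so patch-boundary discontinuities do not align across passes."""
    rng = np.random.default_rng(seed)
    H, W = wsi.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for _ in range(passes):
        oy, ox = rng.integers(0, patch, size=2)  # random grid offset per pass
        for y in range(-int(oy), H, patch):
            for x in range(-int(ox), W, patch):
                y0, x0 = max(y, 0), max(x, 0)
                y1, x1 = min(y + patch, H), min(x + patch, W)
                acc[y0:y1, x0:x1] += infer_patch(wsi[y0:y1, x0:x1])
                cnt[y0:y1, x0:x1] += 1
    return acc / cnt  # every pixel is covered exactly once per pass
```

Because each pass still tiles the image exactly once, the extra cost is linear in the number of passes rather than quadratic in overlap, which is consistent with the resource savings the abstract reports.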
  3. Abstract

Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples after histological staining. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with haematoxylin and eosin, Jones’ stain, and Masson’s trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample-preparation costs and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
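The abstract names a GAN trained on paired QPI/brightfield images but not its exact objective. A common choice for paired image-to-image training (a pix2pix-style loss, an assumption here rather than PhaseStain's documented objective) combines a non-saturating adversarial term with an L1 fidelity term:

```python
import numpy as np

def generator_loss(fake_logit, fake_img, target_img, lam=100.0):
    """Pix2pix-style paired generator loss (illustrative only): the
    adversarial term pushes the discriminator's logit on the generated
    image up, while the L1 term keeps the output close to the stained
    ground-truth brightfield image."""
    adversarial = np.log1p(np.exp(-fake_logit))  # softplus(-x) = -log(sigmoid(x))
    fidelity = np.mean(np.abs(fake_img - target_img))
    return adversarial + lam * fidelity
```

The large weight `lam` on the L1 term is typical for paired virtual staining, where per-pixel fidelity to the chemically stained target matters more than adversarial realism alone.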

     
  4. Abstract

Photography with small unmanned aircraft systems (sUAS) offers opportunities for researchers to better understand habitat selection in wildlife, especially for species that select habitat from an aerial perspective (e.g., many bird species). The growing number of commercial sUAS being flown by recreational users represents a potentially valuable source of data for documenting and studying wildlife habitat. We used a commercially available quadcopter sUAS with a visible-spectrum camera to classify habitat for American Kestrels (Falco sparverius; Aves), as well as to evaluate aspects of image processing and postprocessing relevant to a simple habitat analysis using citizen-science photography. We investigated inter-observer repeatability of habitat classification, effectiveness of cross-image classification and Gaussian filtering, and sensitivity to classification resolution. We photographed vegetation around nests from both 25 m and 50 m above takeoff elevation, and analyzed images via maximum-likelihood supervised classification. Our results indicate that commercial off-the-shelf sUAS photography can distinguish between grass, herbaceous, woody, bare-ground, and human-modified cover classes with good (kappa > 0.6) or strong (kappa > 0.8) accuracy using a 0.25 m² minimum patch size for aggregation. There was inter-subject variability in designating training samples, but high repeatability of supervised classification accuracy. Gaussian filtering reduced classification accuracy, while coarser classification resolution outperformed finer resolution due to “speckling noise.” Image self-classification significantly outperformed cross-image classification. Mean classification accuracy metrics (kappa values) across different photo heights differed little, but, importantly, the rank order of images differed noticeably.
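The kappa thresholds quoted above refer to Cohen's kappa, a chance-corrected agreement statistic. A minimal computation from a classification confusion matrix (the example numbers below are hypothetical, not from the study) looks like:

```python
import numpy as np

def cohens_kappa(confusion):
    """Chance-corrected agreement: (observed - expected) / (1 - expected),
    where expected agreement comes from the marginal class frequencies."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n                     # diagonal = agreement
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1 - p_expected)
```

For the hypothetical two-class matrix [[40, 10], [5, 45]] this yields kappa = 0.7, which would count as "good" on the scale used above.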

     
Pancreatic ductal adenocarcinoma (PDAC) presents a critical global health challenge, and early detection is crucial for improving the 5-year survival rate. Recent medical imaging and computational algorithm advances offer potential solutions for early diagnosis. Deep learning, particularly in the form of convolutional neural networks (CNNs), has demonstrated success in medical image analysis tasks, including classification and segmentation. However, the limited availability of clinical data for training purposes continues to represent a significant obstacle. Data augmentation, generative adversarial networks (GANs), and cross-validation are potential techniques to address this limitation and improve model performance, but effective solutions are still rare for 3D PDAC, where the contrast is especially poor owing to the high heterogeneity in both tumor and background tissues. In this study, we developed a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue, which can generate the inter-slice connection data that existing 2D CT image synthesis models lack. The transition to 3D models allowed the preservation of contextual information from adjacent slices, improving efficiency and accuracy, especially for the poorly contrasted, challenging case of PDAC. PDAC’s characteristics, such as an iso-attenuating or hypodense appearance and a lack of well-defined margins, make tumor shape and texture difficult to learn. To overcome these challenges and improve the performance of 3D GAN models, our innovation was to develop a 3D U-Net architecture for the generator, improving shape and texture learning for PDAC tumors and pancreatic tissue. We conducted thorough examination and validation of the developed 3D GAN model across multiple datasets to ascertain its efficacy and applicability in clinical contexts.
Our approach offers a promising path for tackling the urgent requirement for creative and synergistic methods to combat PDAC. The development of this GAN-based model has the potential to alleviate data scarcity issues, elevate the quality of synthesized data, and thereby facilitate the progression of deep learning models that enhance the accuracy and early detection of PDAC tumors, which could profoundly impact patient outcomes. Furthermore, the model has the potential to be adapted to other types of solid tumors, making significant contributions to the field of medical imaging in terms of image processing models.
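The key architectural idea above, a U-Net-style encoder/decoder with skip connections operating on full 3D volumes so adjacent-slice context survives the bottleneck, can be sketched at its smallest. This toy uses average pooling and nearest-neighbour upsampling in place of learned 3D convolutions and is not the 3DGAUnet generator itself:

```python
import numpy as np

def unet3d_skip_sketch(vol):
    """Toy one-level 3D U-Net data path: downsample across depth, height
    and width, upsample back, then stack the skip connection channel-wise
    so inter-slice (depth) context is carried past the bottleneck."""
    d, h, w = vol.shape
    assert d % 2 == h % 2 == w % 2 == 0, "toy sketch assumes even dimensions"
    # Encoder: 2x average pooling along all three axes (depth included)
    enc = vol.reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))
    # Decoder: nearest-neighbour upsampling back to full resolution
    dec = enc.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)
    # Skip connection: concatenate input and decoded features as channels
    return np.stack([vol, dec], axis=-1)
```

Pooling over the depth axis is what distinguishes this from a 2D slice-wise pipeline: the bottleneck mixes information from adjacent CT slices, which is the contextual advantage the abstract attributes to the 3D design.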

     