The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit staining quality comparable to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in the life sciences and biomedical workflows.
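The conditional GAN described above is trained on co-registered image pairs: a label-free autofluorescence input and its chemically stained ground truth. The sketch below illustrates a typical generator objective for such image-to-image translation (adversarial term plus a pixel-wise L1 term, as in pix2pix-style training); the toy generator, discriminator, and loss weighting here are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one co-registered training pair described in the abstract:
# a label-free autofluorescence patch and its chemically stained IHC target.
autofluorescence = rng.random((64, 64, 1))   # label-free input (1 channel)
ihc_ground_truth = rng.random((64, 64, 3))   # bright-field HER2 IHC target (RGB)

def toy_generator(x, weights):
    """Hypothetical 1x1 'convolution': maps 1 input channel to 3 RGB channels."""
    return x @ weights  # (64, 64, 1) @ (1, 3) -> (64, 64, 3)

def toy_discriminator_score(img):
    """Hypothetical discriminator output in (0, 1); a real model is a CNN."""
    return 1.0 / (1.0 + np.exp(-img.mean()))

weights = rng.random((1, 3))
virtually_stained = toy_generator(autofluorescence, weights)

# Conditional-GAN generator objective: fool the discriminator while staying
# pixel-wise close to the chemically stained target.
adv_loss = -np.log(toy_discriminator_score(virtually_stained))
l1_loss = np.abs(virtually_stained - ihc_ground_truth).mean()
lam = 100.0  # pix2pix-style L1 weight (an assumption, not from the paper)
generator_loss = adv_loss + lam * l1_loss
```

In practice the generator and discriminator are deep convolutional networks and the loss typically includes additional terms (e.g., perceptual or total-variation losses), but the structure of the objective is the same.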
Virtual Staining of Defocused Autofluorescence Images of Unlabeled Tissue Using Deep Neural Networks
Deep learning-based virtual staining was developed to introduce image contrast to label-free tissue sections, digitally matching the histological staining, which is time-consuming, labor-intensive, and destructive to tissue. Standard virtual staining requires high autofocusing precision during the whole slide imaging of label-free tissue, which consumes a significant portion of the total imaging time and can lead to tissue photodamage. Here, we introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving equivalent performance to virtual staining of in-focus label-free images, also saving significant imaging time by lowering the microscope’s autofocusing precision. This framework incorporates a virtual autofocusing neural network to digitally refocus the defocused images and then transforms the refocused images into virtually stained images using a successive network. These cascaded networks form a collaborative inference scheme: the virtual staining model regularizes the virtual autofocusing network through a style loss during the training. To demonstrate the efficacy of this framework, we trained and blindly tested these networks using human lung tissue. Using 4× fewer focus points with 2× lower focusing precision, we successfully transformed the coarsely-focused autofluorescence images into high-quality virtually stained H&E images, matching the standard virtual staining framework that used finely-focused autofluorescence input images. Without sacrificing the staining quality, this framework decreases the total image acquisition time needed for virtual staining of a label-free whole-slide image (WSI) by ~32%, together with a ~89% decrease in the autofocusing time, and has the potential to eliminate the laborious and costly histochemical staining process in pathology.
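The two reported savings in this abstract (a ~89% reduction in autofocusing time and a ~32% reduction in total acquisition time) are mutually consistent if autofocusing accounts for roughly a third of the total whole-slide imaging time. The arithmetic below makes that relationship explicit; the autofocus share of total time is an assumed value chosen to reconcile the two reported figures, not a number stated in the abstract.

```python
# Reported figures from the abstract.
autofocus_time_saving = 0.89   # ~89% less autofocusing time
reported_total_saving = 0.32   # ~32% less total acquisition time

# Assumption: fraction of total WSI acquisition time spent autofocusing.
autofocus_fraction = 0.36

# If autofocusing is the only stage that speeds up, the total saving is the
# autofocus saving scaled by the autofocus share of the total time.
total_time_saving = autofocus_fraction * autofocus_time_saving  # ~0.32
```

Under this assumption, cutting autofocusing by 89% yields roughly the reported ~32% reduction in overall acquisition time, since the remaining imaging stages are unchanged.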
- Award ID(s):
- 2141157
- PAR ID:
- 10386106
- Date Published:
- Journal Name:
- Intelligent Computing
- Volume:
- 2022
- ISSN:
- 2771-5892
- Page Range / eLocation ID:
- 1 to 13
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a “digital staining matrix”, which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones’ silver stain, and Masson’s trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
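The "digital staining matrix" described above can be thought of as a per-pixel weight map over the available stains, which the network uses to decide which stain (or blend of stains) to render at each location. The numpy sketch below illustrates the blending idea downstream of the network; the stain colors and per-stain outputs here are hypothetical placeholders, since the real network generates the stained appearance jointly from the autofluorescence input rather than blending fixed colors.

```python
import numpy as np

h, w = 4, 4

# Hypothetical per-pixel RGB outputs for each stain (placeholders; the actual
# network produces these jointly from the autofluorescence input).
stain_outputs = [
    np.full((h, w, 3), [0.9, 0.6, 0.7]),  # "H&E"-like pink
    np.full((h, w, 3), [0.3, 0.3, 0.3]),  # "Jones silver"-like dark gray
    np.full((h, w, 3), [0.4, 0.5, 0.8]),  # "Masson trichrome"-like blue
]

# Digital staining matrix: one weight per stain per pixel, summing to 1.
# Left half requests H&E; right half requests a 50/50 Jones/Masson blend.
M = np.zeros((h, w, 3))
M[:, : w // 2, 0] = 1.0
M[:, w // 2 :, 1] = 0.5
M[:, w // 2 :, 2] = 0.5
assert np.allclose(M.sum(axis=-1), 1.0)  # weights form a valid blend

# Blend the per-stain outputs according to the user-defined map.
blended = sum(M[..., i : i + 1] * img for i, img in enumerate(stain_outputs))
```

Editing `M` region by region is what lets a user place different stains in different micro-structures of the same section, or synthesize blends that have no histochemical equivalent.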
-
Ferraro, Pietro; Grilli, Simonetta; Psaltis, Demetri (Ed.)
Deep learning techniques create new opportunities to revolutionize tissue staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, accurate and environmentally friendly alternatives to standard chemical staining methods. These deep learning-based virtual staining techniques can successfully generate different types of histological stains, including immunohistochemical stains, from label-free microscopic images of unstained samples by using, e.g., autofluorescence microscopy, quantitative phase imaging (QPI) and reflectance confocal microscopy. Similar approaches were also demonstrated for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this presentation, I will provide an overview of our recent work on the use of deep neural networks for label-free tissue staining, also covering their biomedical applications.
-
We present a method to generate multiple virtual stains on an image of label-free tissue using a single deep neural network, which is fed with the autofluorescence images of the unlabeled tissue alongside a user-defined digital-staining matrix. Users can indicate which stain to apply on each pixel by editing the digital-staining matrix and blend multiple virtual stains, creating entirely new stain combinations.
-
Reflectance confocal microscopy (RCM) is a noninvasive optical imaging modality that allows for cellular-resolution, in vivo imaging of skin without performing a traditional skin biopsy. RCM image interpretation currently requires specialized training, as the grayscale output images are difficult to correlate with tissue pathology. Here, we use a deep learning-based framework that uses a convolutional neural network to transform grayscale output images into virtually stained hematoxylin and eosin (H&E)-like images, allowing for the visualization of various skin layers, including the epidermis, dermal-epidermal junction, and superficial dermis. To train the deep-learning framework, a stack of a minimum of 7 time-lapsed, successive RCM images of excised tissue was obtained from epidermis to dermis, 1.52 microns apart to a depth of 60.96 microns, using the Vivascope 3000. The tissue was embedded in agarose, and a curette was used to create a tunnel through which drops of 50% acetic acid were applied to stain cell nuclei. These acetic acid-stained images were used as “ground truth” to train a deep convolutional neural network using a conditional generative adversarial network (GAN)-based machine learning algorithm to digitally convert the grayscale images into H&E-like digital images. We then retrained the already trained machine learning algorithm with new samples to include squamous neoplasms. Through further training and refinement of the algorithm, high-resolution, histological-quality images can be obtained to aid in earlier diagnosis and treatment of cutaneous neoplasms. The overall goal of obtaining biopsy-free virtual histology images with this technology is to provide real-time outputs of virtually stained H&E skin lesions, thus decreasing the need for invasive diagnostic procedures and enabling greater uptake of the technology by the medical community.
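For intuition about what "converting grayscale RCM into H&E-like color" means, a simple analytic baseline is a Beer-Lambert-style pseudo-coloring, in which grayscale intensity attenuates an assumed hematoxylin-like absorption color. The GAN in the abstract learns a far richer, data-driven mapping; the sketch below is purely illustrative, and the absorption color and attenuation strength are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
rcm = rng.random((32, 32))  # normalized grayscale RCM patch (stand-in data)

# Assumed hematoxylin-like RGB absorption coefficients and attenuation
# strength; both are illustrative choices, not values from the paper.
hematoxylin_rgb = np.array([0.65, 0.70, 0.29])
k = 2.5

# Beer-Lambert pseudo-coloring: brighter (denser) grayscale signal absorbs
# more light, producing a darker purple, as nuclei do in real H&E.
pseudo_he = np.exp(-k * rcm[..., None] * hematoxylin_rgb)  # values in (0, 1]
```

A learned GAN mapping improves on this kind of fixed colormap by rendering structure-dependent appearance (membranes, dermal collagen, artifacts) rather than a single global transfer function.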