

Search for: All records

Award ID contains: 2141157


  1. Abstract: Histological staining is the gold standard for tissue examination in clinical pathology and life-science research; it visualizes tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and inaccessible in resource-limited settings. Deep learning techniques have created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, have been extensively explored by multiple research groups and shown to successfully generate various types of histological stains from label-free microscopic images of unstained samples; similar approaches have also been used to transform images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
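Most of the virtual-staining methods surveyed here train an image-to-image translation network with a conditional-GAN objective. As a minimal sketch of that typical objective only (plain NumPy, not any specific paper's implementation; the function names, the L1 pixel term, and the weight `lam=100.0` are generic pix2pix-style assumptions):

```python
import numpy as np

def generator_loss(disc_out_on_fake, fake_stain, real_stain, lam=100.0):
    """Schematic pix2pix-style generator objective:
    an adversarial term (push the discriminator's output on the
    generated stain toward "real") plus lam * per-pixel L1 distance
    between the virtually stained output and the chemically stained target."""
    eps = 1e-12
    adversarial = -np.mean(np.log(disc_out_on_fake + eps))
    pixel_l1 = np.mean(np.abs(fake_stain - real_stain))
    return adversarial + lam * pixel_l1

# Toy 4x4 "images": a perfect prediction incurs zero L1 penalty,
# so its loss reduces to the adversarial term alone.
target = np.ones((4, 4))
loss_perfect = generator_loss(np.array([0.9]), target, target)
loss_off = generator_loss(np.array([0.9]), target * 0.5, target)
assert loss_off > loss_perfect
```

The L1 term anchors the output to the paired ground-truth stain, while the adversarial term sharpens texture that a pure pixel loss would blur; this division of labor is why conditional GANs dominate in the works reviewed below.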
  2. Defining the presence of residual tumor and margins may enhance tissue sparing in dermatologic surgery, but no device currently serves this role. Reflectance Confocal Microscopy (RCM) provides non-invasive cellular-level resolution of the skin, but the FDA-approved RCM device is rigid and requires a tissue cap, making tissue mapping difficult. We previously applied "virtual histology", a deep-learning algorithm, to RCM images to generate biopsy-free histology; however, whether virtual histology can be applied to images obtained with a portable, handheld RCM device to scan for residual tumor and margins is unknown. We hypothesize that combining a handheld device with virtual histology could provide accurate tumor assessment and that these virtual histology images would correlate with traditional histology. The study was conducted as a prospective, consecutive, non-randomized trial at a VA Medical Center dermatologic surgery clinic. Patients over 18 years old with confirmed BCC, SCC, or SCCis were included. Successive in-vivo confocal images from the epidermis and dermis were obtained with the handheld device and processed through a conditional generative adversarial network-based algorithm to create H&E pseudo-stained virtual histology. The algorithm produced similar virtual histology for in-vivo RCM images from the handheld and standard devices, demonstrating successful application to the handheld device. Virtual histology applied to handheld RCM images capturing residual tumor, precancerous lesions (actinic keratosis), and scar tissue correlated with Mohs frozen section histology from excised tissue. The combination of machine learning-based virtual histology with handheld RCM images may provide histology-quality data in real time for tumor evaluation to assist the surgeon, improving clinical efficiency by decreasing unnecessary surgeries/layers and improving cosmesis through better margin assessment.
  3. Reflectance confocal microscopy (RCM) is a noninvasive optical imaging technique that uses a laser to capture cellular-level resolution images based on the differing refractive indices of tissue elements. RCM image interpretation is challenging and requires training to interpret and correlate the grayscale output images, which lack nuclear features, with tissue pathology. Here, we utilize a deep learning-based framework that uses a convolutional neural network to transform grayscale images into virtually-stained hematoxylin and eosin (H&E)-like images, enabling the visualization of various skin layers. To train the deep-learning framework, a series of at least 7 time-lapsed, successive "stacks" of RCM images of excised tissue, with planes spaced 1.52 μm apart to a depth of 60.96 μm, was obtained using the Vivascope 1500. The tissue samples were stained with a 50% acetic acid solution to enhance cell nuclei. These images served as the "ground truth" to train a deep convolutional neural network with a conditional generative adversarial network (GAN)-based machine learning algorithm to digitally convert the images into H&E-stained digital images. The machine learning algorithm was initially trained and subsequently retrained with new samples, specifically focusing on squamous neoplasms. The trained algorithm was applied to skin lesions that had a clinical differential diagnosis of squamous neoplasms, including squamous cell carcinoma, actinic keratosis, seborrheic keratosis, and basal cell carcinoma. Through continuous training and refinement, the algorithm was able to produce high-resolution, histological-quality images of different squamous neoplasms. This algorithm may be used in the future to facilitate earlier diagnosis of cutaneous neoplasms and enable greater uptake of noninvasive imaging technology within the medical community.
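For concreteness, the stack geometry stated above (planes 1.52 μm apart, down to 60.96 μm) implies roughly 41 axial planes per stack. A quick check of that arithmetic (the variable names are illustrative, not from the paper):

```python
import numpy as np

spacing_um = 1.52    # axial step between successive RCM planes
depth_um = 60.96     # total imaging depth reported for the Vivascope stacks

# Plane positions from the tissue surface down to the maximum depth,
# inclusive of the surface plane at 0 um (tiny epsilon guards the
# floating-point endpoint of arange).
plane_depths = np.arange(0.0, depth_um + 1e-9, spacing_um)
n_planes = len(plane_depths)
print(n_planes)  # 41 planes per stack
```

The plane count matters in practice: each training "stack" contributes this many grayscale/acetic-acid image pairs, so a minimum of 7 stacks already yields a few hundred registered training pairs per specimen.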
  4. The ability to accurately define tumor margins may enhance tissue sparing and increase efficiency in the dermatologic surgery process, but no device exists that serves this role. Reflectance Confocal Microscopy (RCM) provides non-invasive cellular resolution of the skin. The only clinically-approved RCM device is bulky, non-portable, and requires a tissue cap, which makes mapping of the underlying tissue impossible. We recently combined "virtual histology", a machine learning algorithm, with RCM images from this standard RCM device to generate biopsy-free histology and overcome these limitations. Whether virtual histology can be used with a portable, handheld RCM device to scan for residual tumor and tumor margins is currently unknown. We hypothesize that combining a handheld RCM device with virtual histology could provide accurate tumor margin assessment. We determined whether our established virtual histology algorithm could be applied to images from a portable RCM device and whether these pseudo-stained virtual histology images correlated with histology from skin specimens. The study was conducted as a prospective, consecutive, non-randomized trial at a Veterans Affairs Medical Center dermatologic surgery clinic. All patients greater than 18 years of age with previously biopsied BCC, SCC, or SCCis were included. Successive confocal images from the epidermis to the dermis were obtained at 1.5 μm intervals with the handheld RCM device to detect residual skin cancer. The handheld, in-vivo RCM images were processed through a conditional generative adversarial network-based machine learning algorithm to digitally convert the images into H&E pseudo-stained virtual histology images. Virtual histology of in-vivo RCM images from unbiopsied skin captured with the portable RCM device was similar to that obtained with the standard RCM device, and virtual histology applied to portable RCM images correctly correlated with frozen section histology.
Residual tumors detected with virtual histology generated from the portable RCM images accurately corresponded with residual tumors shown in the frozen surgical tissue specimens. Residual tumor was also not detected when excised tissue was clear of tumor following the surgical procedure. Thus, the combination of virtual histology with portable RCM may provide accurate histology-quality data for evaluation of residual skin cancer prior to surgery. Combining machine learning-based virtual histology with handheld RCM images demonstrates promise in providing insights into tumor characteristics and has the potential to assist the surgeon and better guide practice decisions to more efficiently serve patients, leading to decreased layers and shorter appointment times. Future work is needed to provide real-time virtual histology, convert horizontal/confocal sections into vertical or 3D sections, and perform clinical studies to map tumors in tissue.
  5. Reflectance confocal microscopy (RCM) is a noninvasive optical imaging modality that produces cellular-level resolution, in vivo images of skin without a traditional skin biopsy. RCM image interpretation currently requires specialized training to interpret the grayscale output images, which are difficult to correlate with tissue pathology. Here, we use a deep learning-based framework that uses a convolutional neural network to transform grayscale output images into virtually-stained hematoxylin and eosin (H&E)-like images, allowing for the visualization of various skin layers, including the epidermis, dermal-epidermal junction, and superficial dermis. To train the deep-learning framework, a stack of at least 7 time-lapsed, successive RCM images of excised tissue was obtained from epidermis to dermis, 1.52 μm apart to a depth of 60.96 μm, using the Vivascope 3000. The tissue was embedded in agarose, and a curette was used to create a tunnel through which drops of 50% acetic acid were applied to stain cell nuclei. These acetic acid-stained images were used as "ground truth" to train a deep convolutional neural network using a conditional generative adversarial network (GAN)-based machine learning algorithm to digitally convert the images into H&E-stained digital images. We used the already trained machine learning algorithm and retrained it with new samples to include squamous neoplasms. Through further training and refinement of the algorithm, high-resolution, histological-quality images can be obtained to aid in earlier diagnosis and treatment of cutaneous neoplasms. The overall goal of obtaining biopsy-free virtual histology images with this technology is to provide real-time outputs of virtually-stained H&E skin lesions, thus decreasing the need for invasive diagnostic procedures and enabling greater uptake of the technology by the medical community.
  6. Ferraro, Pietro; Grilli, Simonetta; Psaltis, Demetri (Ed.)
    Deep learning techniques create new opportunities to revolutionize tissue staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, accurate and environmentally friendly alternatives to standard chemical staining methods. These deep learning-based virtual staining techniques can successfully generate different types of histological stains, including immunohistochemical stains, from label-free microscopic images of unstained samples by using, e.g., autofluorescence microscopy, quantitative phase imaging (QPI) and reflectance confocal microscopy. Similar approaches were also demonstrated for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this presentation, I will provide an overview of our recent work on the use of deep neural networks for label-free tissue staining, also covering their biomedical applications. 
  7. Shaked, Natan T.; Hayden, Oliver (Ed.)
    We report label-free, in vivo virtual histology of skin using reflectance confocal microscopy (RCM). We trained a deep neural network to transform in vivo RCM images of unstained skin into virtually stained H&E-like microscopic images with nuclear contrast. This framework successfully generalized to diverse skin conditions, e.g., normal skin, basal cell carcinoma, and melanocytic nevi, as well as distinct skin layers, including the epidermis, dermal-epidermal junction, and superficial dermis layers. This label-free in vivo skin virtual histology framework can be transformative for faster and more accurate diagnosis of malignant skin neoplasms, with the potential to significantly reduce unnecessary skin biopsies. 
  8. Deep learning-based virtual staining was developed to introduce image contrast to label-free tissue sections, digitally matching the histological staining, which is time-consuming, labor-intensive, and destructive to tissue. Standard virtual staining requires high autofocusing precision during the whole slide imaging of label-free tissue, which consumes a significant portion of the total imaging time and can lead to tissue photodamage. Here, we introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving equivalent performance to virtual staining of in-focus label-free images, also saving significant imaging time by lowering the microscope’s autofocusing precision. This framework incorporates a virtual autofocusing neural network to digitally refocus the defocused images and then transforms the refocused images into virtually stained images using a successive network. These cascaded networks form a collaborative inference scheme: the virtual staining model regularizes the virtual autofocusing network through a style loss during the training. To demonstrate the efficacy of this framework, we trained and blindly tested these networks using human lung tissue. Using 4× fewer focus points with 2× lower focusing precision, we successfully transformed the coarsely-focused autofluorescence images into high-quality virtually stained H&E images, matching the standard virtual staining framework that used finely-focused autofluorescence input images. Without sacrificing the staining quality, this framework decreases the total image acquisition time needed for virtual staining of a label-free whole-slide image (WSI) by ~32%, together with a ~89% decrease in the autofocusing time, and has the potential to eliminate the laborious and costly histochemical staining process in pathology. 
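The style loss mentioned above, through which the staining model regularizes the autofocusing network during training, is commonly implemented by matching Gram matrices of feature maps. A minimal NumPy sketch of that general idea (the feature shapes and this Gram-matrix formulation are standard style-transfer conventions, not the paper's exact recipe):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a (C, H, W) feature map,
    normalized by the number of spatial positions."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(feat_refocused, feat_infocus):
    """Mean squared difference between Gram matrices: penalizes the
    digitally refocused image's feature statistics drifting away from
    those of a genuinely in-focus image."""
    g_re = gram_matrix(feat_refocused)
    g_in = gram_matrix(feat_infocus)
    return np.mean((g_re - g_in) ** 2)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
assert style_loss(feat, feat) == 0.0       # identical statistics: no penalty
assert style_loss(feat, 2.0 * feat) > 0.0  # mismatched statistics are penalized
```

Because Gram matrices discard spatial arrangement and keep only feature co-occurrence statistics, such a loss pushes the refocused output to "look in focus" to the downstream staining network without requiring pixel-perfect alignment.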
  9. Volpe, Giovanni; Pereira, Joana B.; Brunner, Daniel; Ozcan, Aydogan (Ed.)
    Reflectance confocal microscopy (RCM) can provide in vivo images of the skin with cellular-level resolution; however, RCM images are grayscale, lack nuclear features and have a low correlation with histology. We present a deep learning-based virtual staining method to perform non-invasive virtual histology of the skin based on in vivo, label-free RCM images. This virtual histology framework revealed successful inference for various skin conditions, such as basal cell carcinoma, also covering distinct skin layers, including epidermis and dermal-epidermal junction. This method can pave the way for faster and more accurate diagnosis of malignant skin neoplasms while reducing unnecessary biopsies. 