

Title: Adaptable physics-based super-resolution for electron backscatter diffraction maps
Abstract

In computer vision, single-image super-resolution (SISR) has been extensively explored using convolutional neural networks (CNNs) on optical images, but images outside this domain, such as those from scientific experiments, are not well investigated. Experimental data are often gathered using non-optical methods, which changes the relevant metrics for image quality. One such example is electron backscatter diffraction (EBSD), a materials characterization technique that maps crystal arrangement in solid materials and provides insight into processing, structure, and property relationships. We present a broadly adaptable approach for applying state-of-the-art SISR networks to generate super-resolved EBSD orientation maps. This approach includes quaternion-based orientation recognition, loss functions that consider rotational effects and crystallographic symmetry, and an inference pipeline to convert network output into established visualization formats for EBSD maps. The ability to generate physically accurate, high-resolution EBSD maps with super-resolution enables high-throughput characterization and broadens the capture capabilities for three-dimensional experimental EBSD datasets.
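
To make the loss-function idea concrete, the following is a minimal sketch, assuming PyTorch, of a symmetry-aware quaternion misorientation loss; the helper names and exact formulation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code) of a
# symmetry-aware quaternion misorientation loss for orientation maps.
import torch
import torch.nn.functional as F

def quat_multiply(q, r):
    """Hamilton product of quaternion tensors (..., 4) in (w, x, y, z) order."""
    w1, x1, y1, z1 = q.unbind(-1)
    w2, x2, y2, z2 = r.unbind(-1)
    return torch.stack((
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2), dim=-1)

def misorientation_loss(q_pred, q_true, sym_ops):
    """Mean disorientation angle between predicted and reference orientations.

    q_pred, q_true: (N, 4) unit quaternions, one per pixel.
    sym_ops: (S, 4) quaternions for the crystal's proper rotation symmetry
    operators (24 for cubic); the smallest symmetry-equivalent angle is used.
    """
    q_pred = F.normalize(q_pred, dim=-1)
    # Relative rotation q_true^{-1} * q_pred (conjugation inverts a unit quaternion).
    q_conj = q_true * q_true.new_tensor([1.0, -1.0, -1.0, -1.0])
    dq = quat_multiply(q_conj, q_pred)                            # (N, 4)
    # Apply each symmetry operator; max |w| corresponds to the smallest angle.
    dq_sym = quat_multiply(sym_ops[None, :, :], dq[:, None, :])   # (N, S, 4)
    cos_half = dq_sym[..., 0].abs().clamp(max=1.0).max(dim=1).values
    return (2.0 * torch.acos(cos_half)).mean()                    # radians
```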

 
Award ID(s): 1664172
NSF-PAR ID: 10385591
Publisher / Repository: Nature Publishing Group
Journal Name: npj Computational Materials
Volume: 8
Issue: 1
ISSN: 2057-3960
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Abstract

    Background: Cryo-EM data generated by electron tomography (ET) contain images of individual protein particles in different orientations and at different tilt angles. Individual cryo-EM particles can be aligned to reconstruct a 3D density map of a protein structure. However, low contrast and high noise in particle images make it challenging to build 3D density maps at intermediate to high resolution (1–3 Å). To overcome this problem, we propose a fully automated cryo-EM 3D density-map reconstruction approach based on deep-learning particle picking.

    Results: A perfect 2D particle mask is generated fully automatically for every single particle. The approach then uses a computer-vision image-alignment algorithm (image registration) to align the particle masks automatically, and uses the difference in particle-image orientation angles to align the original particle images. Finally, it reconstructs a localized 3D density map between every pair of single-particle images that share the largest number of corresponding features; the localized 3D density maps are then averaged to produce a final 3D density map. The reconstructed maps illustrate the potential to determine molecular structures from a few samples of good particles. In addition, using the localized particle samples (with no background) to generate the localized 3D density maps can improve resolution evaluation in experimental cryo-EM maps. Tested on two widely used datasets, Auto3DCryoMap is able to reconstruct good 3D density maps using only a few thousand protein particle images, far fewer than the hundreds of thousands of particles required by existing methods.

    Conclusions: We design a fully automated approach for cryo-EM 3D density-map reconstruction (Auto3DCryoMap). Instead of increasing the signal-to-noise ratio through 2D class averaging, our approach uses 2D particle masks to produce locally aligned particle images, and it is able to accurately align structural particle shapes. It constructs a decent 3D density map from only a few thousand aligned particle images, whereas existing tools require hundreds of thousands. Finally, by using the pre-processed particle images, Auto3DCryoMap reconstructs a better 3D density map than when using the original particle images.
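
    As a rough illustration of the mask-based alignment step described above, the sketch below registers two particle masks by phase cross-correlation and applies the estimated shift to the original noisy image, assuming scikit-image and SciPy; it is a simplified stand-in (translation only), not the Auto3DCryoMap code.

```python
# Illustrative sketch (not Auto3DCryoMap itself) of mask-based particle alignment:
# estimate the translation between two particle masks by phase cross-correlation,
# then apply the same shift to the original noisy particle image. Rotation
# alignment, which the full pipeline also performs, is omitted for brevity.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_particle(reference_mask, moving_mask, moving_image):
    """Register moving_mask onto reference_mask and shift moving_image accordingly."""
    # upsample_factor > 1 gives sub-pixel precision on the estimated translation.
    offset, error, _ = phase_cross_correlation(
        reference_mask.astype(float), moving_mask.astype(float), upsample_factor=10)
    aligned = nd_shift(moving_image, shift=offset, order=1, mode="constant")
    return aligned, offset, error
```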
  2. Electron Backscatter Diffraction (EBSD) is a widely used approach for characterising the microstructure of various materials. However, it is difficult to accurately distinguish similar (body-centred cubic and body-centred tetragonal, with small tetragonality) phases in steels using standard EBSD software. One method to tackle the problem of phase distinction is to measure the tetragonality of the phases, which can be done using simulated patterns and cross-correlation techniques to detect distortion away from a perfectly cubic crystal lattice. However, small errors in the determination of microscope geometry (the so-called pattern or projection centre) can cause significant errors in tetragonality measurement and lead to erroneous results. This paper utilises a new approach for accurate pattern centre determination via a strain minimisation routine across a large number of grains in dual-phase steels. Tetragonality maps are then produced and used to identify phase and estimate local carbon content. The technique is implemented using both kinematically simulated and dynamically simulated patterns to determine their relative accuracy. Tetragonality maps, and subsequent phase maps, based on dynamically simulated patterns in a point-by-point and grain-average comparison are found to consistently produce more precise and accurate results, with close to 90% accuracy for grain phase identification, when compared with an image-quality identification method. The error in tetragonality measurements appears to be of the order of 1%, thus producing a commensurate ∼0.2% error in carbon content estimation. Such an error makes the technique unsuitable for estimating the total carbon content of most commercial steels, which often have carbon levels below 0.1%. However, even in the dual-phase (DP) steel used in this study (0.1 wt.% carbon), it can be used to map carbon in regions of higher accumulation (such as in martensite with non-homogeneous carbon content).

    Lay Description

    Electron Backscatter Diffraction (EBSD) is a widely used approach for characterising the microstructure of various materials. However, it is difficult to accurately distinguish similar (BCC and BCT) phases in steels using standard EBSD software because of the small difference in crystal structure. One method to tackle the problem of phase distinction is to measure the tetragonality, or apparent ‘strain’ in the crystal lattice, of the phases. This can be done by comparing experimental EBSD patterns with simulated patterns via cross-correlation techniques, to detect distortion away from a perfectly cubic crystal lattice. However, small errors in the determination of microscope geometry (the so-called pattern or projection centre) can cause significant errors in tetragonality measurement and lead to erroneous results. This paper utilises a new approach for accurate pattern centre determination via a strain minimisation routine across a large number of grains in dual-phase steels. Tetragonality maps are then produced and used to identify phase and estimate local carbon content. The technique is implemented using both simple kinematically simulated and more complex dynamically simulated patterns to determine their relative accuracy. Tetragonality maps, and subsequent phase maps, based on dynamically simulated patterns in a point-by-point and grain-average comparison are found to consistently produce more precise and accurate results, with close to 90% accuracy for grain phase identification, when compared with an image-quality identification method. The error in tetragonality measurements appears to be of the order of 1%, producing a commensurate error in carbon content estimation. Such an error makes estimates of total carbon content unsuitable for low-carbon steels, although maps of local carbon content may still be revealing.

    Application of the method developed in this paper will lead to better understanding of the complex microstructures of steels, and the potential to design microstructures that deliver higher strength and ductility for common applications, such as vehicle components.
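
    As a concrete illustration of the final mapping step, the sketch below converts a measured tetragonality (c/a − 1) into an estimated local carbon content. The 0.045 per wt.% C coefficient is the commonly quoted empirical relation for martensite (c/a ≈ 1 + 0.045 × wt.% C), which is consistent with the ∼0.2% carbon error per 1% tetragonality error quoted above, but it is an assumption rather than a value taken from this paper.

```python
# Illustrative sketch: estimate local carbon content from a tetragonality map.
# The coefficient is the commonly quoted empirical relation for martensite
# (c/a = 1 + 0.045 * wt.% C); it is an assumption, not a value from the paper.
import numpy as np

CA_PER_WT_PCT_C = 0.045  # change in c/a per wt.% carbon (empirical, assumed)

def carbon_from_tetragonality(c_over_a):
    """Estimate wt.% carbon from an array of measured c/a ratios."""
    tetragonality = np.asarray(c_over_a, dtype=float) - 1.0
    return np.clip(tetragonality / CA_PER_WT_PCT_C, 0.0, None)

# Example: a 0.9% tetragonality corresponds to roughly 0.2 wt.% carbon.
print(carbon_from_tetragonality(1.009))  # ~0.2
```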

     
  3.
    Introduction: Vaso-occlusive crises (VOCs) are a leading cause of morbidity and early mortality in individuals with sickle cell disease (SCD). These crises are triggered by sickle red blood cell (sRBC) aggregation in blood vessels and are influenced by factors such as enhanced sRBC and white blood cell (WBC) adhesion to inflamed endothelium. Advances in microfluidic biomarker assays (i.e., SCD Biochip systems) have led to clinical studies of blood cell adhesion onto endothelial proteins, including fibronectin, laminin, P-selectin, and ICAM-1, functionalized in microchannels. These microfluidic assays mimic physiological aspects of the human microvasculature and help characterize the biomechanical properties of adhered sRBCs under flow. However, analysis of the microfluidic biomarker assay data has so far relied on manual cell counting and exhaustive visual morphological characterization of cells by trained personnel. Integrating deep learning algorithms with microscopic imaging of adhesion-protein-functionalized microfluidic channels can accelerate and standardize accurate classification of blood cells in microfluidic biomarker assays. Here we present a deep learning approach, built into a general-purpose analytical tool, covering a wide range of conditions: channels functionalized with different proteins (laminin or P-selectin), with varying degrees of adhesion by both sRBCs and WBCs, and in both normoxic and hypoxic environments.

    Methods: Our neural networks were trained on a repository of manually labeled SCD Biochip microfluidic biomarker assay whole-channel images. Each channel contained adhered cells from clinical whole blood under a constant shear stress of 0.1 Pa, mimicking physiological levels in post-capillary venules. The machine learning (ML) framework consists of two phases: Phase I segments pixels belonging to blood cells adhered to the microfluidic channel surface, while Phase II associates pixel clusters with specific cell types (sRBCs or WBCs). Phase I is implemented through an ensemble of seven generative fully convolutional neural networks, and Phase II is an ensemble of five neural networks based on a ResNet50 backbone. Each pixel cluster is given a probability of belonging to one of three classes: adhered sRBC, adhered WBC, or non-adhered / other.

    Results and Discussion: We applied our trained ML framework to 107 novel whole-channel images not used during training and compared the results against counts from human experts. As seen in Fig. 1A, there was excellent agreement in counts across all protein and cell types investigated: sRBCs adhered to laminin, sRBCs adhered to P-selectin, and WBCs adhered to P-selectin. Not only was the approach able to handle surfaces functionalized with different proteins, but it also performed well for high-cell-density images (up to 5000 cells per image) in both normoxic and hypoxic conditions (Fig. 1B). The average uncertainty for the ML counts, obtained from accuracy metrics on the test dataset, was 3%. This is a significant improvement on the 20% average uncertainty of the human counts, estimated from the variance in repeated manual analyses of the images. Moreover, manual classification of each image may take up to 2 hours, versus about 6 minutes per image for the ML analysis. Thus, ML provides greater consistency in the classification at a fraction of the processing time. To assess which features the network used to distinguish adhered cells, we generated class activation maps (Fig. 1C-E). These heat maps indicate the regions of focus for the algorithm in making each classification decision. Intriguingly, the highlighted features were similar to those used by human experts: the dimple in partially sickled RBCs, the sharp endpoints of highly sickled RBCs, and the uniform curvature of WBCs. Overall, the robust performance of the ML approach in our study sets the stage for generalizing it to other endothelial proteins and experimental conditions, a first step toward a universal microfluidic ML framework targeting blood disorders. Such a framework would not only integrate advanced biophysical characterization into fast, point-of-care diagnostic devices, but also provide a standardized and reliable way of monitoring patients undergoing targeted therapies and curative interventions, including stem cell and gene-based therapies for SCD.

    Disclosures: Gurkan: Dx Now Inc.: Patents & Royalties; Xatek Inc.: Patents & Royalties; BioChip Labs: Patents & Royalties; Hemex Health, Inc.: Consultancy, Current Employment, Patents & Royalties, Research Funding.
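
    As a rough illustration of the Phase II classification step described above, the sketch below builds a single ResNet50-based classifier with a three-class head using PyTorch and torchvision; it is an assumed stand-in for one ensemble member, not the study's actual code.

```python
# Illustrative sketch (not the study's code) of one Phase-II ensemble member:
# a ResNet50 backbone whose final layer is replaced by a 3-class head
# (adhered sRBC, adhered WBC, non-adhered / other).
import torch
import torch.nn as nn
from torchvision import models

def make_phase2_classifier(num_classes: int = 3) -> nn.Module:
    backbone = models.resnet50(weights=None)  # pretrained weights could also be used
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = make_phase2_classifier()
logits = model(torch.randn(4, 3, 224, 224))   # 4 candidate pixel-cluster crops
probs = torch.softmax(logits, dim=1)          # per-crop class probabilities
```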
  4. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images, but the resulting images are typically very large, making them difficult to manage, transfer across a computer network, or store on limited computer storage. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show strong enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network results are over 30 dB and 0.93, respectively. This performance is superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN is used to perform image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights from the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning. The jointly trained model's results show further improvement and are promising. We anticipate these custom CNNs can help compensate for the inaccessibility of advanced microscopes or whole-slide imaging (WSI) systems by enhancing high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
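
    For reference, the two image-quality metrics quoted above (peak signal-to-noise ratio and structural similarity) can be computed as in the sketch below, assuming scikit-image; this is illustrative only, not the authors' evaluation code.

```python
# Illustrative sketch of the quoted image-quality metrics, using scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(ground_truth, super_resolved):
    """Return (PSNR in dB, SSIM) for an RGB image pair scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(ground_truth, super_resolved, data_range=1.0)
    ssim = structural_similarity(ground_truth, super_resolved,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim
```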
  5. Mid-infrared Spectroscopic Imaging (MIRSI) provides spatially-resolved molecular specificity by measuring wavelength-dependent mid-infrared absorbance. Infrared microscopes use large numerical aperture objectives to obtain high-resolution images of heterogeneous samples. However, the optical resolution is fundamentally diffraction-limited, and therefore wavelength-dependent. This significantly limits resolution in infrared microscopy, which relies on long wavelengths (2.5 μm to 12.5 μm) for molecular specificity. The resolution is particularly restrictive in biomedical and materials applications, where molecular information is encoded in the fingerprint region (6 μm to 12 μm), limiting the maximum resolving power to between 3 μm and 6 μm. We present an unsupervised curvelet-based image fusion method that overcomes limitations in spatial resolution by augmenting infrared images with label-free visible microscopy. We demonstrate the effectiveness of this approach by fusing images of breast and ovarian tumor biopsies acquired using both infrared and dark-field microscopy. The proposed fusion algorithm generates a hyperspectral dataset that has both high spatial resolution and good molecular contrast. We validate this technique using multiple standard approaches and through comparisons to super-resolved experimentally measured photothermal spectroscopic images. We also propose a novel comparison method based on tissue classification accuracy. 
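
    To make the wavelength dependence of the diffraction limit explicit, the sketch below evaluates the Abbe criterion d = λ / (2·NA); the numerical aperture of 1.0 is an idealized assumption, not a value from the paper, and with it the fingerprint-region wavelengths of 6–12 μm give the roughly 3–6 μm resolution limits quoted above.

```python
# Illustrative sketch of the wavelength-dependent diffraction limit (Abbe
# criterion, d = lambda / (2 * NA)). NA = 1.0 is an idealized assumption.
def abbe_limit_um(wavelength_um: float, numerical_aperture: float = 1.0) -> float:
    return wavelength_um / (2.0 * numerical_aperture)

# Fingerprint-region wavelengths give micron-scale resolution limits.
for lam in (6.0, 9.0, 12.0):
    print(f"lambda = {lam:4.1f} um -> d ~ {abbe_limit_um(lam):.1f} um")
```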