Title: Adaptable physics-based super-resolution for electron backscatter diffraction maps
Abstract: In computer vision, single-image super-resolution (SISR) has been extensively explored using convolutional neural networks (CNNs) on optical images, but images outside this domain, such as those from scientific experiments, are not well investigated. Experimental data are often gathered using non-optical methods, which alters the metrics for image quality. One such example is electron backscatter diffraction (EBSD), a materials characterization technique that maps crystal arrangement in solid materials and provides insight into processing, structure, and property relationships. We present a broadly adaptable approach for applying state-of-the-art SISR networks to generate super-resolved EBSD orientation maps. This approach includes quaternion-based orientation recognition, loss functions that account for rotational effects and crystallographic symmetry, and an inference pipeline that converts network output into established visualization formats for EBSD maps. The ability to generate physically accurate, high-resolution EBSD maps with super-resolution enables high-throughput characterization and broadens the capture capabilities for three-dimensional experimental EBSD datasets.
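The quaternion-based, symmetry-aware loss described in the abstract can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: for brevity it minimizes over only the four-fold rotations about the z axis rather than the full 24-element cubic symmetry group, and the loss form (one minus the largest absolute quaternion dot product over symmetry equivalents) is one common choice, not necessarily theirs.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of two unit quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

# four-fold rotations about z: a small SUBSET of cubic symmetry,
# used here only to keep the example short
SYM = [np.array([np.cos(k*np.pi/4), 0.0, 0.0, np.sin(k*np.pi/4)])
       for k in range(4)]

def sym_quat_loss(q_pred, q_true):
    # 1 - max_s |<q_pred, s * q_true>|: zero whenever the prediction is
    # any symmetry-equivalent of the true orientation (q and -q coincide)
    dots = [abs(np.dot(q_pred, quat_mul(s, q_true))) for s in SYM]
    return 1.0 - min(max(dots), 1.0)
```

A network trained with a plain Euclidean loss on Euler angles would penalize symmetry-equivalent orientations; a loss of this shape does not, which is the point of the symmetry-aware design.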
Award ID(s):
1664172
PAR ID:
10385591
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
npj Computational Materials
Volume:
8
Issue:
1
ISSN:
2057-3960
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Supervised deep-learning models have enabled super-resolution imaging in several microscopic imaging modalities, increasing the spatial lateral bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount and quality of training data, requiring the experimental acquisition of large, paired databases to generate an accurate, generalized model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets. This paper introduces a cycleGAN framework specifically designed to increase the lateral resolution limit in confocal microscopy by training a cycleGAN model on low- and high-resolution unpaired confocal images of human glioblastoma cells. Training and testing performance of the cycleGAN model has been assessed with metrics such as background standard deviation, peak signal-to-noise ratio, and a customized frequency-content measure. Our cycleGAN model has been evaluated in terms of image fidelity and resolution improvement using a paired dataset, showing performance superior to other reported methods. This work highlights the efficacy and promise of cycleGAN models in tackling super-resolution microscopic imaging without paired training, paving the way for turning home-built low-resolution microscopic systems into low-cost super-resolution instruments by means of unsupervised deep learning.
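The defining constraint that lets a cycleGAN train on unpaired data is the cycle-consistency loss: an image mapped to the other domain and back should return to itself. A minimal numerical sketch, using toy invertible functions as hypothetical stand-ins for the paper's trained generator networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    # L1 cycle loss ||F(G(x)) - x||_1: after mapping low -> high
    # resolution with G and back with F, the image should be unchanged
    return np.mean(np.abs(F(G(x)) - x))

# toy "generators" standing in for the trained networks (assumptions)
G = lambda img: img * 2.0   # hypothetical low -> high mapping
F = lambda img: img / 2.0   # hypothetical high -> low mapping
```

In the real framework this term is added to the adversarial losses of both generator/discriminator pairs, anchoring the translation so content is preserved even though no paired images exist.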
  2.
    Background: Cryo-EM data generated by electron tomography (ET) contain images of individual protein particles in different orientations and at tilted angles. Individual cryo-EM particles can be aligned to reconstruct a 3D density map of a protein structure. However, low contrast and high noise in particle images make it challenging to build 3D density maps at intermediate to high resolution (1–3 Å). To overcome this problem, we propose a fully automated cryo-EM 3D density map reconstruction approach based on deep-learning particle picking. Results: First, a perfect 2D particle mask is automatically generated for every single particle. The approach then uses a computer vision image alignment algorithm (image registration) to automatically align the particle masks, and calculates the difference of the particle image orientation angles to align the original particle images. Finally, it reconstructs a localized 3D density map between every two single-particle images that share the largest number of corresponding features; the localized 3D density maps are then averaged to reconstruct a final 3D density map. The reconstructed 3D density maps illustrate the potential to determine molecular structures from a few samples of good particles. In addition, using localized particle samples (with no background) to generate the localized 3D density maps can improve resolution evaluation for experimental cryo-EM maps. Tested on two widely used datasets, Auto3DCryoMap reconstructs good 3D density maps using only a few thousand protein particle images, far fewer than the hundreds of thousands of particles required by existing methods. Conclusions: We designed a fully automated approach for cryo-EM 3D density map reconstruction (Auto3DCryoMap). Instead of increasing the signal-to-noise ratio through 2D class averaging, our approach uses 2D particle masks to produce locally aligned particle images. Auto3DCryoMap accurately aligns structural particle shapes and constructs a decent 3D density map from only a few thousand aligned particle images, while existing tools require hundreds of thousands. Finally, by using the pre-processed particle images, Auto3DCryoMap reconstructs a better 3D density map than with the original particle images.
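The image-registration step described above can be illustrated with classic phase correlation, which recovers a translational offset between two images from the normalized cross-power spectrum. This is a generic sketch of the technique, not the Auto3DCryoMap code (which also handles rotation and works on masks).

```python
import numpy as np

def phase_correlation_shift(a, b):
    # estimate the integer (dy, dx) translation of image b relative to a:
    # the inverse FFT of the normalized cross-power spectrum peaks at the shift
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12          # keep phase only, avoid divide-by-zero
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Because only the spectral phase is kept, the estimate is robust to the kind of low-contrast intensity variation common in cryo-EM particle images.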
  3. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images, but the resulting image files are typically enormous, so they are not conveniently managed, transferred across a computer network, or stored on limited computer storage. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) both for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show strong enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network results are over 30 dB and 0.93, respectively, superior to results obtained with both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, with an average intersection over union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights from the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising. We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole-slide imaging (WSI) systems by acquiring high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
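The 30 dB figure quoted above is a peak signal-to-noise ratio, computed from the mean squared error between a reference image and its reconstruction. A minimal sketch of the standard definition (the 255 default assumes 8-bit images):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: every halving of the RMS error adds about 6 dB, so the jump from bicubic interpolation to a learned super-resolution model shows up directly in this metric.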
  4. Spectroscopic single-molecule localization microscopy (sSMLM) simultaneously provides spatial localization and spectral information for individual single-molecule emissions, offering multicolor super-resolution imaging of multiple molecules in a single sample at nanoscopic resolution. However, this technique is limited by the need to acquire a large number of frames to reconstruct a super-resolution image. In addition, multicolor sSMLM imaging suffers from spectral cross-talk when multiple dyes with relatively broad spectral bands produce cross-color contamination. Here, we present a computational strategy to accelerate multicolor sSMLM imaging. Our method uses deep convolutional neural networks to reconstruct high-density multicolor super-resolution images from low-density, contaminated multicolor images rendered from sSMLM datasets with far fewer frames, without compromising spatial resolution. High-quality super-resolution images are reconstructed using up to 8-fold fewer frames than usually needed. Our technique thus generates multicolor super-resolution images in a much shorter time, without any changes to the existing sSMLM hardware. Two-color and three-color sSMLM experiments demonstrate superior reconstructions of tubulin/mitochondria, peroxisome/mitochondria, and tubulin/mitochondria/peroxisome in fixed COS-7 and U2-OS cells with a significant reduction in acquisition time.
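The "low-density images rendered from sSMLM datasets" above refers to the conventional SMLM rendering step: localization coordinates accumulated over many frames are binned onto a grid much finer than the camera pixels. A generic sketch of that rendering (field of view and pixel size here are hypothetical, not from the paper):

```python
import numpy as np

def render_localizations(xs, ys, extent=1.0, px=0.02):
    # bin (x, y) localization coordinates into a fine-pixel 2D histogram;
    # more frames -> more localizations -> a denser, smoother image
    bins = int(np.ceil(extent / px))
    img, _, _ = np.histogram2d(ys, xs, bins=bins,
                               range=[[0.0, extent], [0.0, extent]])
    return img
```

The paper's contribution is on top of this step: a CNN maps a sparse rendering built from few frames to the dense rendering that would normally require many more.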
  5. Kikuchi pattern dataset from wrought and as-built additively manufactured superalloys
    Abstract: This dataset provides high-resolution Kikuchi diffraction patterns and associated orientation mapping data collected from both wrought and as-built additively manufactured (AM) Inconel 718 superalloys. The dataset includes raw electron backscatter diffraction (EBSD) patterns stored as .tif images and organized through .up2 metadata files, along with processed orientation data in .ang format. These measurements were acquired using a high-sensitivity EBSD detector over large scan areas, enabling detailed spatial resolution of microstructural features such as grain orientations, subgrain boundaries, and processing-induced texture. The dataset supports a range of applications, including machine learning for pattern recognition and the development of robust indexing algorithms. By including both wrought and AM material states, this dataset offers valuable insight into the influence of manufacturing route on crystallographic texture and cellular dislocation structure in Inconel 718, a critical alloy for high-temperature structural applications.
    Materials and Sample Preparation: Three nickel-based superalloys were used in this study: a wrought recrystallized Inconel 718 (30 minutes at 1050 °C followed by 8 hours at 720 °C) with chemical composition (wt.%) Ni – 0.56% Al – 17.31% Fe – 0.14% Co – 17.97% Cr – 5.4% Nb + Ta – 1.00% Ti – 0.023% C – 0.0062% N; a 3D-printed Inconel 718 produced by directed energy deposition (DED, as-built); and a dynamically recrystallized Waspalloy (heat-treated) characterized by a necklace microstructure. The 3D-printed material was produced using a Formalloy L2 Directed Energy Deposition (DED) unit with a 650 W Nuburu 450 nm blue laser capable of a 400 μm laser spot size. Argon was used as the shielding and carrier gas, and the specimen remained in its as-built condition. Its chemical composition in wt.% is Ni – 0.45% Al – 18.77% Fe – 0.07% Co – 18.88% Cr – 5.08% Nb – 0.96% Ti – 0.036% C – 0.02% Cu – 0.04% Mn – 0.08% Si – 3.04% Mo. All samples were machined by EDM into flat dogbone samples with a gauge section of 1 × 3 mm². All samples were mechanically polished using abrasive papers, followed by diamond suspension down to 3 μm, and finished with a 50 nm colloidal silica suspension.
    Electron Backscatter Diffraction: EBSD measurements were performed on a Thermo Fisher Scios 2 Dual Beam FIB-SEM with an EDAX OIM-Hikari detector at an accelerating voltage of 20 kV, a current of 6.4 nA, an exposure time of 8.5 ms per diffraction pattern, a 12 mm working distance, and a 70° tilt. In total, 3 maps of 1000 × 900 μm were collected with a 1 μm step size, and 4 additional maps were collected at a 0.1 μm step size. These EBSD maps were saved to .ang files and processed using the MTEX toolbox [1]. For each map, the SEM signal, confidence index (CI), and image quality (IQ) are provided as .tif files. The orientation maps are transformed using the inverse pole figure MTEX coloring [2] (given as IPF_mtex.jpg) and provided for the X (horizontal), Y (vertical), and Z (normal) directions. Additionally, all Kikuchi patterns were saved with no binning as 16-bit images in the .up2 format. Based on the diffraction patterns, sharpness maps indicating the diffuseness of the Kikuchi bands [3] have been constructed using the EMSphInx software [4] and are provided as .tif files. Details on the pattern center are provided in the .ang file.
    Kikuchi Pattern Preprocessing: The Kikuchi patterns were originally acquired at 1 × 1 binning with a resolution of 480 × 480 pixels. For data processing, two versions are provided: the initial 1 × 1 binning and a 4 × 4 binning (a reduced resolution of 120 × 120 pixels). Additional .up2 files, referred to as "preprocessed", are provided in which the background was subtracted and pattern grey values were rescaled to fill the complete 16-bit range (0 to 65535). Due to the large size of the raw, unbinned data, they are not hosted on Dryad but can be made available upon request to the authors.
    Files Provided: The nomenclature of the provided files is described below, and a detailed explanation is available in the accompanying ReadMe.txt file, formatted according to Dryad recommendations. The labels 718RX, AM718, and Waspalloy correspond to the wrought recrystallized Inconel 718, the as-built additively manufactured Inconel 718 (produced by DED), and a partially recrystallized Waspalloy, respectively. The term 1um refers to maps collected at a spatial resolution of 1 μm, while 0.1um_1 and 0.1um_2 denote two separate maps acquired at 0.1 μm resolution. Files labeled sharpness contain sharpness maps, as defined in [3], computed using the EMSphInx software [4]. Files labeled CI, IQ, and SEM represent the confidence index, image quality, and associated SEM maps obtained using MTEX [1] and are provided as .tif files. Similarly, IPF_X, IPF_Y, and IPF_Z refer to inverse pole figure maps along the X (horizontal), Y (vertical), and Z (normal) directions and are provided as .jpg files. The file IPF_mtex gives the associated inverse pole figure MTEX coloring [1, 2]. 480x480 and 120x120 indicate the diffraction pattern resolutions with the initial binning and with the 4 × 4 binning, respectively; these patterns are stored as .up2 files. Files denoted 120x120_preprocessed contain the corresponding preprocessed patterns at 120 × 120 resolution; the preprocessing procedure is detailed in the section "Kikuchi Pattern Preprocessing."
    File Formats: The .up2 file is a proprietary data format used by EDAX/TSL systems to store Kikuchi pattern images and associated metadata from EBSD experiments. Each .up2 file contains the high-resolution diffraction patterns acquired at each scan point, typically stored in a compressed or indexed form for efficient access. These files are commonly used when raw Kikuchi patterns are required for post-processing, including pattern remapping, machine learning applications, or simulation-based indexing. In addition to image data, .up2 files include key acquisition parameters such as beam voltage, working distance, detector settings, image resolution, and stage coordinates, enabling full traceability of each pattern to its spatial location in the sample. The .ang file is a widely used text-based format for storing processed EBSD data. Generated by EDAX/TSL OIM software, it contains orientation mapping results after successful indexing of the Kikuchi patterns. Each row in an .ang file corresponds to a single scan point and includes key information such as spatial coordinates (X, Y), Euler angles (Phi1, PHI, Phi2) defining the crystallographic orientation, image quality (IQ), confidence index (CI), phase ID, and other optional metrics (e.g., grain ID or local misorientation). The file begins with a header that describes metadata, including step size, scan grid type (square or hexagonal), phase information, and scanning parameters. .ang files are commonly used for downstream analyses such as grain reconstruction, texture analysis, and misorientation mapping, and are often imported into visualization tools like the MTEX toolbox [1] or DREAM.3D for further processing. The .tif (Tagged Image File Format) is a high-fidelity raster image format widely used in scientific imaging due to its ability to store uncompressed or losslessly compressed image data. In the context of EBSD datasets, .tif files typically store individual Kikuchi diffraction patterns collected during a scan. When used within a .up2 dataset, each pattern is saved as a separate .tif file, preserving the original grayscale intensity distribution necessary for accurate post-processing tasks such as reindexing, pattern matching, or machine learning-based classification. These images often have high bit depth (e.g., 12-bit or 16-bit grayscale) to retain subtle contrast variations in the diffraction bands, which are critical for crystallographic orientation determination. The file naming and organization are indexed and referenced by the accompanying .up2 metadata file to maintain spatial correlation with the scan grid. The .jpg (or .jpeg, Joint Photographic Experts Group) format is a commonly used compressed image format designed to store photographic and continuous-tone images efficiently. Because .jpg uses lossy compression, some image detail is discarded to significantly reduce file size; this makes it suitable for visual display and documentation, but less ideal for quantitative image analysis, where preserving the original pixel intensity values is critical.
    References:
    [1] Bachmann, F., Hielscher, R. & Schaeben, H. Texture analysis with MTEX: free and open source software toolbox. Solid State Phenomena 160, 63–68 (2010).
    [2] Nolze, G. & Hielscher, R. Orientations: perfectly colored. J. Appl. Crystallogr. 49, 1786–1802 (2016).
    [3] Wang, F. et al. Dislocation cells in additively manufactured metallic alloys characterized by electron backscatter diffraction pattern sharpness. Mater. Charact. 197, 112673 (2023).
    [4] EMsoft-org. EMSphInx: spherical indexing software for diffraction patterns. Public beta release; GPL-2.0 license.
    Acknowledgments: M.C., H.W., K.V., and J.C.S. are grateful for financial support from the Defense Advanced Research Projects Agency (DARPA, HR001124C0394). C.B., D.A., and J.C.S. acknowledge the NSF (award #2338346) for financial support. This work was carried out in the Materials Research Laboratory Central Research Facilities, University of Illinois. Carpenter Technology is acknowledged for providing the 718 and Waspalloy material. Tresa Pollock, McLean Echlin, and James Lamb are acknowledged for their support on the EBSD sharpness calculations.
    Dataset DOI: 10.5061/dryad.zcrjdfnr9
    Folder architecture:
    718RX:
    * 1um: 718RX_1um.ang, 718RX_1um_sharpness.tif, 718RX_1um_CI.tif, 718RX_1um_IQ.tif, 718RX_1um_SEM.tif, IPF_mtex.jpg, 718RX_1um_IPF_X.jpg, 718RX_1um_IPF_Y.jpg, 718RX_1um_IPF_Z.jpg, 718RX_1um_480x480.up2, 718RX_1um_120x120.up2, 718RX_1um_120x120_preprocessed.up2
    * 0.1um_1: 718RX_0.1um_1.ang, 718RX_0.1um_1_sharpness.tif, 718RX_0.1um_1_CI.tif, 718RX_0.1um_1_IQ.tif, 718RX_0.1um_1_SEM.tif, IPF_mtex.jpg, 718RX_0.1um_1_IPF_X.jpg, 718RX_0.1um_1_IPF_Y.jpg, 718RX_0.1um_1_IPF_Z.jpg, 718RX_0.1um_1_480x480.up2, 718RX_0.1um_1_120x120.up2, 718RX_0.1um_1_120x120_preprocessed.up2
    * 0.1um_2: 718RX_0.1um_2.ang, 718RX_0.1um_2_sharpness.tif, 718RX_0.1um_2_CI.tif, 718RX_0.1um_2_IQ.tif, 718RX_0.1um_2_SEM.tif, IPF_mtex.jpg, 718RX_0.1um_2_IPF_X.jpg, 718RX_0.1um_2_IPF_Y.jpg, 718RX_0.1um_2_IPF_Z.jpg, 718RX_0.1um_2_480x480.up2, 718RX_0.1um_2_120x120.up2, 718RX_0.1um_2_120x120_preprocessed.up2
    AM718:
    * 1um: AM718_1um.ang, AM718_1um_sharpness.tif, AM718_1um_CI.tif, AM718_1um_IQ.tif, AM718_1um_SEM.tif, IPF_mtex.jpg, AM718_1um_IPF_X.jpg, AM718_1um_IPF_Y.jpg, AM718_1um_IPF_Z.jpg, AM718_1um_480x480.up2, AM718_1um_120x120.up2, AM718_1um_120x120_preprocessed.up2
    * 0.1um_1: AM718_0.1um_1.ang, AM718_0.1um_1_sharpness.tif, AM718_0.1um_1_CI.tif, AM718_0.1um_1_IQ.tif, AM718_0.1um_1_SEM.tif, IPF_mtex.jpg, AM718_0.1um_1_IPF_X.jpg, AM718_0.1um_1_IPF_Y.jpg, AM718_0.1um_1_IPF_Z.jpg, AM718_0.1um_1_480x480.up2, AM718_0.1um_1_120x120.up2, AM718_0.1um_1_120x120_preprocessed.up2
    * 0.1um_2: AM718_0.1um_2.ang, AM718_0.1um_2_sharpness.tif, AM718_0.1um_2_CI.tif, AM718_0.1um_2_IQ.tif, AM718_0.1um_2_SEM.tif, IPF_mtex.jpg, AM718_0.1um_2_IPF_X.jpg, AM718_0.1um_2_IPF_Y.jpg, AM718_0.1um_2_IPF_Z.jpg, AM718_0.1um_2_480x480.up2, AM718_0.1um_2_120x120.up2, AM718_0.1um_2_120x120_preprocessed.up2
    Waspalloy:
    * 1um: Waspalloy_1um.ang, Waspalloy_1um_sharpness.tif, Waspalloy_1um_CI.tif, Waspalloy_1um_IQ.tif, Waspalloy_1um_SEM.tif, IPF_mtex.jpg, Waspalloy_1um_IPF_X.jpg, Waspalloy_1um_IPF_Y.jpg, Waspalloy_1um_IPF_Z.jpg, Waspalloy_1um_480x480.up2, Waspalloy_1um_120x120.up2, Waspalloy_1um_120x120_preprocessed.up2
    Code/software: See the Methods description above for recommendations on how to open the files.
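The "preprocessed" patterns described above (background subtracted, grey values rescaled to the full 16-bit range of 0 to 65535) can be approximated as follows. The dataset description does not specify the background model, so the per-pattern mean used here is an assumption; only the rescaling step is taken directly from the text.

```python
import numpy as np

def preprocess_pattern(pattern, background=None):
    # subtract a background estimate (per-pattern mean if none is given,
    # a simplification) and stretch grey values to fill [0, 65535]
    p = pattern.astype(np.float64)
    p -= p.mean() if background is None else background
    p -= p.min()
    peak = p.max()
    if peak > 0:
        p *= 65535.0 / peak
    return np.rint(p).astype(np.uint16)
```

In practice the background for EBSD patterns is usually a smooth flat-field image averaged over many scan points; passing such an image as `background` matches that workflow more closely than the per-pattern mean.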
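Since each data row of an .ang file holds Euler angles, coordinates, IQ, CI, and phase ID, a minimal reader is straightforward. The column order shown is the common TSL convention described above, but it is an assumption here and should be verified against each file's header before use.

```python
import numpy as np

def read_ang(source):
    # minimal .ang reader: skip '#' header lines, then load whitespace-
    # separated columns; `source` may be a file path or an iterable of lines
    data = np.atleast_2d(np.loadtxt(source, comments='#'))
    names = ['phi1', 'PHI', 'phi2', 'x', 'y', 'IQ', 'CI', 'phase']
    return {n: data[:, i] for i, n in enumerate(names)}
```

For full-featured parsing (hexagonal grids, multi-phase headers), importing the file with the MTEX toolbox or DREAM.3D, as the dataset description recommends, is the safer route.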