Title: Single image multi-scale enhancement for rock Micro-CT super-resolution using residual U-Net
Micro-CT, also known as X-ray micro-computed tomography, has emerged as the primary instrument for studying pore-scale properties of geological materials. Several studies have used deep learning to achieve super-resolution reconstruction in order to balance the trade-off between the resolution of CT images and the field of view. Nevertheless, most existing methods work only with single-scale CT scans, ignoring the possibility of using multi-scale image features for image reconstruction. In this study, we propose a super-resolution approach for rock micro-CT image reconstruction via multi-scale fusion using a residual U-Net (MS-ResUnet). The residual U-Net provides an encoder-decoder structure. Each encoder layer uses several residual sequential blocks and improved residual blocks. The decoder is composed of convolutional ReLU residual blocks and residual chained pooling blocks. During encoding and decoding, information passed between neighboring multi-resolution images is fused, resulting in richer rock-characteristic information. Qualitative and quantitative comparisons on sandstone, carbonate, and coal CT images demonstrate that our proposed algorithm surpasses existing approaches. Our model accurately reconstructs the intricate pore details of carbonate and sandstone, as well as clearly visible cracks in coal.
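As a rough illustration of the encoder-decoder structure described above, the following is a minimal PyTorch sketch of a residual U-Net for super-resolution. The block layout, channel widths, and class names are illustrative assumptions, not the paper's exact MS-ResUnet configuration, and the fusion of neighboring multi-resolution images is reduced here to a single skip connection.

```python
# Minimal sketch of a residual U-Net for CT super-resolution (PyTorch).
# Names, depths, and widths are assumptions for illustration only.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class ResUNetSR(nn.Module):
    """One-level residual U-Net operating on an upsampled low-res slice."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1),
                                  ResidualBlock(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
                                  ResidualBlock(base * 2))
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(ResidualBlock(base),
                                 nn.Conv2d(base, in_ch, 3, padding=1))

    def forward(self, x):
        e1 = self.enc1(x)        # full-resolution features
        e2 = self.enc2(e1)       # half-resolution features
        d = self.up(e2) + e1     # decoder fuses the two scales (skip connection)
        return self.dec(d) + x   # global residual: the net predicts detail only

lr = torch.randn(1, 1, 128, 128)   # e.g., a bicubically upsampled low-res CT slice
print(ResUNetSR()(lr).shape)       # -> torch.Size([1, 1, 128, 128])
```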
Award ID(s):
1946231
PAR ID:
10510488
Author(s) / Creator(s):
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Applied Computing and Geosciences
Volume:
22
Issue:
C
ISSN:
2590-1974
Page Range / eLocation ID:
100165
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Pore-scale modeling is essential in understanding and predicting the flow and transport properties of rocks. Generally, pore-scale modeling depends on imaging technologies such as micro-computed tomography (micro-CT), which provides visual insight into the pore microstructures of rocks at a representative scale. However, this technique is limited in its ability to provide high-resolution images showing the pore throats connecting pore bodies. Pore-scale simulations of the flow and transport properties of rocks are generally done on a single 3D pore-microstructure image, so the simulated properties are representative only of the simulated pore-scale rock volume. We address these technological and computational limitations here by using a stochastic pore-scale simulation approach, which consists of constructing hundreds of 3D pore microstructures with the same pore size distribution and overall porosity but different pore connectivity. The construction of the 3D pore microstructures incorporates Mercury Injection Capillary Pressure (MICP) data to account for the pore-throat size distribution and micro-CT images to account for the pore-body size distribution. The approach requires only a small micro-CT image volume (7–19 mm³) to reveal the key pore-microstructural features that control the flow and transport properties of highly heterogeneous rocks at the core scale. Four carbonate rock samples were used to test the proposed approach. Permeability calculations from the introduced approach were validated by comparison against laboratory-measured core permeability and against permeability estimated using five well-known core-scale empirical model equations. The results show that accounting for the stochastic connectivity of pores yields a probabilistic distribution of flow properties that can be used to upscale pore-scale simulated flow properties to the core scale. The introduced stochastic pore-scale simulation approach is most beneficial when there is a high degree of heterogeneity in the pore size distribution, as is the case for permeability and hydraulic tortuosity, which are key controls on flow and transport processes in rocks.
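As an editorial aside, the ensemble idea behind this stochastic approach can be sketched in a few lines of NumPy: many realizations share the same size distributions but randomized connectivity, and permeability is reported as a distribution rather than a single value. The lognormal parameters and the r⁴ conductance proxy below are assumptions for illustration only, not the paper's calibrated pore-network model.

```python
# Toy sketch of stochastic pore-scale ensembles: fixed size distributions,
# randomized connectivity, permeability reported as a distribution.
import numpy as np

rng = np.random.default_rng(0)

def realization_permeability(rng):
    """One stochastic realization with random pore connectivity."""
    throats = rng.lognormal(mean=0.5, sigma=0.7, size=1200)  # throat radii (MICP-informed in the paper)
    z = rng.integers(2, 8, size=500)                         # pore coordination numbers (connectivity)
    # Toy proxy: Hagen-Poiseuille-style r^4 conductance, scaled by mean connectivity.
    return (throats ** 4).mean() * z.mean() / 6.0

perms = np.array([realization_permeability(rng) for _ in range(200)])
print(f"median={np.median(perms):.3g}  "
      f"P10={np.percentile(perms, 10):.3g}  P90={np.percentile(perms, 90):.3g}")
```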
  2. Deep learning (DL) has been increasingly explored for low-dose CT image denoising, and DL products have been submitted to the FDA for premarket clearance. While DL has the potential to improve image quality over the filtered back projection (FBP) method and to produce images quickly, the generalizability of DL approaches is a major concern because a DL network's performance can depend heavily on its training data. In this work we take a residual encoder-decoder convolutional neural network (REDCNN)-based CT denoising method as an example. We investigate the effect of the scan parameters associated with the training data on the performance of this DL-based CT denoising method and identify the scan parameters that may significantly impact its generalizability. This abstract examines three parameters in particular: reconstruction kernel, dose level, and slice thickness. Our preliminary results indicate that the DL network may not generalize well between FBP reconstruction kernels but is insensitive to slice thickness for slice-wise denoising. The results also suggest that training with mixed dose levels improves denoising performance.
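For orientation, a minimal PyTorch sketch of a REDCNN-style denoiser is shown below. Layer counts and widths are illustrative assumptions; the original REDCNN uses five convolution and five deconvolution layers with symmetric skip connections.

```python
# Sketch of a REDCNN-style residual encoder-decoder denoiser (PyTorch).
# Three conv / three deconv layers here; the real network is deeper.
import torch
import torch.nn as nn

class REDCNNSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Conv2d(1, ch, 5)
        self.conv2 = nn.Conv2d(ch, ch, 5)
        self.conv3 = nn.Conv2d(ch, ch, 5)
        self.dec3 = nn.ConvTranspose2d(ch, ch, 5)
        self.dec2 = nn.ConvTranspose2d(ch, ch, 5)
        self.dec1 = nn.ConvTranspose2d(ch, 1, 5)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        e1 = self.relu(self.conv1(x))
        e2 = self.relu(self.conv2(e1))
        e3 = self.relu(self.conv3(e2))
        d3 = self.relu(self.dec3(e3) + e2)   # symmetric skip from the encoder
        d2 = self.relu(self.dec2(d3) + e1)
        return self.relu(self.dec1(d2) + x)  # residual connection to the noisy input

x = torch.randn(1, 1, 64, 64)                # a low-dose CT patch
print(REDCNNSketch()(x).shape)               # -> torch.Size([1, 1, 64, 64])
```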
  3. In accelerated MRI reconstruction, the anatomy of a patient is recovered from a set of under-sampled and noisy measurements. Deep learning approaches have proven successful in solving this ill-posed inverse problem and are capable of producing very high-quality reconstructions. However, current architectures rely heavily on convolutions, which are content-independent and have difficulty modeling long-range dependencies in images. Recently, Transformers, the workhorse of contemporary natural language processing, have emerged as powerful building blocks for a multitude of vision tasks. These models split input images into non-overlapping patches, embed the patches into lower-dimensional tokens, and utilize a self-attention mechanism that does not suffer from the aforementioned weaknesses of convolutional architectures. However, Transformers incur extremely high compute and memory costs when (1) the input image resolution is high and (2) the image must be split into a large number of patches to preserve fine detail, both of which are typical in low-level vision problems such as MRI reconstruction and have a compounding effect. To tackle these challenges, we propose HUMUS-Net, a hybrid architecture that combines the beneficial implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled, multi-scale network. HUMUS-Net extracts high-resolution features via convolutional blocks and refines low-resolution features via a novel Transformer-based multi-scale feature extractor. Features from both levels are then synthesized into a high-resolution output reconstruction. Our network establishes a new state of the art on the largest publicly available MRI dataset, fastMRI. We further demonstrate the performance of HUMUS-Net on two other popular MRI datasets and perform fine-grained ablation studies to validate our design.
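The quadratic cost that motivates this hybrid design is easy to see in a short PyTorch sketch of ViT-style patch tokenization followed by self-attention; the image size, patch size, and embedding width below are illustrative assumptions, not HUMUS-Net's actual configuration.

```python
# Patch tokenization + self-attention: cost grows as O(N^2) in token count N.
import torch
import torch.nn as nn

patch, dim = 16, 64
img = torch.randn(1, 1, 320, 320)                 # fastMRI-scale slice
embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
tokens = embed(img).flatten(2).transpose(1, 2)    # (1, N, dim), N = (320/16)^2 = 400
attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
out, _ = attn(tokens, tokens, tokens)             # self-attention over all token pairs
print(tokens.shape, out.shape)                    # halving the patch size quadruples N
```

Halving the patch size to preserve finer detail quadruples N and thus multiplies the attention cost by sixteen, which is the compounding effect the abstract describes.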
  4. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images, but such images are typically very large, making them inconvenient to manage, transfer across a computer network, or store on limited storage systems. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) both for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E)-stained breast cancer histopathological images, using a combination of generator and discriminator networks: a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), intended to facilitate cancer diagnosis in low-resource settings. The results show strong enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network's results are over 30 dB and 0.93, respectively, superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights from the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising. We anticipate these custom CNNs can help overcome the inaccessibility of advanced microscopes or whole-slide imaging (WSI) systems by deriving high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
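For reference, the figures of merit quoted above (PSNR in dB and the Dice similarity coefficient) can be computed as in the NumPy sketch below; the synthetic inputs are assumptions used only to exercise the functions.

```python
# PSNR and Dice coefficient as commonly defined; inputs assumed in [0, 1].
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB (higher is better; >30 dB in the abstract)."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                    # stand-in ground-truth image
noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0, 1)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
print(f"Dice: {dice(ref > 0.5, noisy > 0.5):.3f}")
```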
  5. Optical coherence tomography (OCT) has stimulated a wide range of medical image-based diagnosis and treatment in fields such as cardiology and ophthalmology. Such applications can be further facilitated by deep learning-based super-resolution technology, which improves the capability to resolve morphological structures. However, existing deep learning-based methods focus only on spatial distribution and disregard frequency fidelity in image reconstruction, leading to a frequency bias. To overcome this limitation, we propose a frequency-aware super-resolution framework that integrates three critical frequency-based modules (i.e., frequency transformation, frequency skip connection, and frequency alignment) and a frequency-based loss function into a conditional generative adversarial network (cGAN). We conducted a large-scale quantitative study on an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks. In addition, we confirmed the generalizability of our framework by applying it to fish corneal images and rat retinal images, demonstrating its capability to super-resolve morphological details in eye imaging.
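A frequency-based loss of the kind this framework integrates can be sketched as an FFT-magnitude penalty added to a spatial term; the function name and weighting below are illustrative assumptions, not the paper's exact loss.

```python
# Sketch of a frequency-domain loss: L1 between 2D FFT magnitudes of the
# super-resolved and ground-truth images, combined with a spatial L1 term.
import torch

def frequency_loss(sr, hr, weight=0.1):
    """Spatial L1 plus weighted L1 over FFT magnitudes (weight is assumed)."""
    sr_f = torch.fft.fft2(sr).abs()
    hr_f = torch.fft.fft2(hr).abs()
    spatial = torch.mean(torch.abs(sr - hr))
    spectral = torch.mean(torch.abs(sr_f - hr_f))
    return spatial + weight * spectral

sr = torch.rand(1, 1, 128, 128, requires_grad=True)  # generator output stand-in
hr = torch.rand(1, 1, 128, 128)                      # ground-truth stand-in
loss = frequency_loss(sr, hr)
loss.backward()                   # differentiable, so usable inside cGAN training
print(float(loss))
```

Because the penalty is applied to the spectrum itself, errors in high-frequency detail are penalized directly rather than averaged away in pixel space, which is the frequency bias the abstract targets.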