Traditionally, acquiring high-resolution images requires a high-performance microscope with a large numerical aperture. However, such images are typically very large, making them difficult to manage, transfer across a computer network, or store on systems with limited capacity. Image compression is therefore commonly used to reduce file size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) both for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E)-stained breast cancer histopathological images. Our approach combines generator and discriminator networks in a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt) to facilitate cancer diagnosis in low-resource settings. The results show strong enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network's outputs exceed 30 dB and 0.93, respectively, outperforming both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, achieving an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-Net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising.
We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
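The abstract reports image quality via peak signal-to-noise ratio (PSNR, in dB) and structural similarity (SSIM). As a rough illustration of what those two numbers measure, here is a minimal NumPy sketch; note the SSIM below uses global image statistics for brevity, whereas the standard metric averages the same formula over local windows:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=255.0):
    """Simplified single-window SSIM (the full metric averages local windows)."""
    x = ref.astype(np.float64)
    y = img.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

A PSNR above 30 dB, as reported here, corresponds to a mean squared error below about 65 on an 8-bit intensity scale.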
CNN-Modified Encoders in U-Net for Nuclei Segmentation and Quantification of Fluorescent Images
This research introduces an advanced approach to automate the segmentation and quantification of nuclei in fluorescent images through deep learning techniques. Overcoming inherent challenges such as variations in pixel intensities, noisy boundaries, and overlapping edges, our devised pipeline integrates the U-Net architecture with state-of-the-art CNN models, such as EfficientNet. This fusion maintains the efficiency of U-Net while harnessing the superior capabilities of EfficientNet. Crucially, we exclusively utilize high-quality confocal images generated in-house for model training, purposefully avoiding the pitfalls associated with publicly available synthetic data of lower quality. Our training dataset encompasses over 3000 nuclei boundaries, which are meticulously annotated manually to ensure precision and accuracy in the learning process. Additionally, post-processing is implemented to refine segmentation results, providing morphological quantification for each segmented nucleus. Through comprehensive evaluation, our model achieves notable performance metrics, attaining an F1-score of 87% and an Intersection over Union (IoU) value of 80%. Furthermore, its robustness is demonstrated across diverse datasets sourced from various origins, indicative of its broad applicability in automating nucleus extraction and quantification from fluorescent images. This innovative methodology holds significant promise for advancing research efforts across multiple domains by facilitating a deeper understanding of underlying biological processes through automated analysis of fluorescent imagery.
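The reported F1-score and Intersection over Union (IoU) are standard overlap metrics for binary segmentation masks; for binary masks the F1-score coincides with the Dice coefficient. A minimal NumPy sketch of both:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two boolean segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

def dice(mask_a, mask_b):
    """Dice coefficient; for binary masks this equals the F1-score."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2 * inter / total if total else 1.0
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why an 80% IoU and an 87% F1-score, as reported above, are consistent with each other.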
- Award ID(s):
- 2344476
- PAR ID:
- 10560623
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- IEEE Access
- Volume:
- 12
- ISSN:
- 2169-3536
- Page Range / eLocation ID:
- 107089 to 107097
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Background: In the study of early cardiac development, it is essential to acquire accurate volume changes of the heart chambers. Although advanced imaging techniques, such as light-sheet fluorescent microscopy (LSFM), provide an accurate means of analyzing heart structure, rapid and robust segmentation is required to reduce laborious manual work and accurately quantify developmental cardiac mechanics. Methods: Traditional biomedical analysis involves manual segmentation of the intracardiac volume, presenting bottlenecks due to the enormous data volume at high axial resolution. Our deep-learning approach provides a robust method to segment the volume within a few minutes. Our U-Net-based segmentation adopted manually segmented intracardiac volume changes as training data and automatically processed the remaining LSFM zebrafish cardiac motion images. Results: Three cardiac cycles from 2 to 5 days postfertilization (dpf) were successfully segmented by our U-Net-based network, providing volume changes over time. To characterize the cardiac function of each chamber, the ventricle and atrium were separated by 3D erode morphology methods. Cardiac mechanical properties were thus measured rapidly, demonstrating incremental volume changes of both chambers separately. Interestingly, stroke volume (SV) remains similar in the atrium, while that of the ventricle increases gradually. Conclusion: Our U-Net-based segmentation provides a refined method to segment the intricate inner volume of the zebrafish heart during development, offering an accurate, robust, and efficient algorithm that accelerates cardiac research by bypassing the labor-intensive manual task and improving the consistency of the results.
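The chamber separation above relies on 3D erode morphology: eroding a binary volume shrinks it, so two chambers that touch at a narrow junction disconnect into separate components. A minimal pure-NumPy sketch of binary erosion with a 6-connected structuring element (an illustration of the operation only, not the paper's actual implementation):

```python
import numpy as np

def erode3d(vol, iterations=1):
    """Binary erosion of a 3D volume with a 6-connected structuring element.
    A voxel survives only if it and all six face-neighbours are foreground."""
    v = vol.astype(bool)
    for _ in range(iterations):
        p = np.pad(v, 1, constant_values=False)  # treat outside as background
        v = (p[1:-1, 1:-1, 1:-1]
             & p[:-2, 1:-1, 1:-1] & p[2:, 1:-1, 1:-1]    # z neighbours
             & p[1:-1, :-2, 1:-1] & p[1:-1, 2:, 1:-1]    # y neighbours
             & p[1:-1, 1:-1, :-2] & p[1:-1, 1:-1, 2:])   # x neighbours
    return v
```

After erosion, a connected-component labeling step would assign the two surviving cores to ventricle and atrium before measuring per-chamber volumes.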
-
We propose a novel weakly supervised method to improve the boundaries of 3D segmented nuclei by utilizing an over-segmented image. This is motivated by the observation that current state-of-the-art deep-learning methods do not produce accurate boundaries when the training data are weakly annotated. To this end, a 3D U-Net is trained to locate the centroid of each nucleus and is integrated with a simple linear iterative clustering (SLIC) supervoxel algorithm that provides better adherence to object boundaries. To track the segmented nuclei, our algorithm utilizes relative nucleus locations that reflect the processes of nucleus division and apoptosis. The proposed algorithmic pipeline achieves better segmentation performance than the state-of-the-art method in the Cell Tracking Challenge (CTC) 2019 and comparable performance to state-of-the-art methods in the IEEE ISBI CTC 2020, while utilizing very little pixel-wise annotated data. Detailed experimental results are provided, and the source code is available on GitHub.
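One plausible way to combine predicted centroids with boundary-adhering supervoxels is to assign each supervoxel to its nearest centroid, so instance boundaries follow the supervoxel edges rather than the coarse network output. This is an illustrative sketch only, not the authors' exact pipeline; the `refine_by_supervoxels` helper and the background-id-0 convention are assumptions:

```python
import numpy as np

def refine_by_supervoxels(labels, centroids):
    """Assign each supervoxel (e.g. from SLIC) to the nearest predicted
    nucleus centroid, yielding boundary-adhering instance masks.
    labels: int array of supervoxel ids (0 assumed background).
    centroids: (N, 3) array of predicted nucleus centroids."""
    out = np.zeros_like(labels)
    for sv in np.unique(labels):
        if sv == 0:  # skip background supervoxels (assumed convention)
            continue
        coords = np.argwhere(labels == sv)
        center = coords.mean(axis=0)                      # supervoxel centre
        d = np.linalg.norm(centroids - center, axis=1)    # distance to each nucleus
        out[labels == sv] = d.argmin() + 1                # 1-based nucleus index
    return out
```

A distance threshold would typically be added so that supervoxels far from every centroid remain background rather than being forced into a nucleus.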
-
This work presents a novel deep-learning architecture called BNU-Net for cardiac segmentation from short-axis MRI images. Its name derives from the Batch Normalized (BN) U-Net architecture for medical image segmentation. Convolutional neural networks (CNNs) such as U-Net have been widely used for image-segmentation tasks; they are supervised models trained to learn hierarchies of features automatically and to perform classification robustly. Our architecture consists of an encoding path for feature extraction and a decoding path that enables precise localization. We compare this approach with U-Net: while BNU-Net applies batch normalization to the output of each convolutional layer and uses the exponential linear unit (ELU) as its activation function, U-Net does not apply batch normalization and is based on rectified linear units (ReLU). The presented work (i) applies various image-preprocessing techniques, including affine transformations and elastic deformations, and (ii) segments the preprocessed images using the new deep-learning architecture. We evaluate our approach on a dataset containing 805 MRI images from 45 patients. The experimental results reveal that our approach achieves comparable or better performance than other state-of-the-art approaches in terms of the Dice coefficient and the average perpendicular distance.
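The BN + ELU combination that this abstract contrasts with plain ReLU can be sketched numerically. This is a forward-pass illustration of the three operations only (the actual model applies them to convolutional feature maps, and batch normalization additionally tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize activations over the batch axis, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for x > 0, alpha*(exp(x)-1) otherwise.
    Unlike ReLU, it passes a smooth negative signal, which keeps mean
    activations closer to zero."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def relu(x):
    """Rectified linear unit, used by the baseline U-Net."""
    return np.maximum(x, 0.0)
```

Normalizing each layer's output before the activation stabilizes the distribution the next layer sees, which is the motivation for inserting BN after every convolution in BNU-Net.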