Generative Adversarial Networks (GANs) have shown remarkable power in generating realistic images, to the extent that human eyes cannot recognize them as synthesized. State-of-the-art GAN models can generate realistic, high-quality images, which promises unprecedented opportunities for generating design concepts. Yet the preliminary experiments reported in this paper shed light on a fundamental limitation of GANs for generative design: a lack of novelty and diversity in the generated samples. This article conducts a generative design study on a large-scale sneaker dataset based on StyleGAN, a state-of-the-art GAN architecture, to advance the understanding of how well these generative models produce novel and diverse samples (i.e., sneaker images). The findings reveal that although StyleGAN can generate samples with quality and realism, the generated and style-mixed samples highly resemble the training dataset (i.e., existing sneakers). This article aims to provide future research directions and insights for the engineering design community to further realize the untapped potential of GANs for generative design.
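The abstract does not specify how novelty was measured. A common proxy is the distance from each generated sample to its nearest neighbor in the training set, computed in some feature space: low distances suggest the generator is reproducing the training data rather than producing novel designs. A minimal sketch, with hypothetical 2-D feature vectors standing in for image embeddings:

```python
import numpy as np

def novelty_scores(generated, training):
    """Distance from each generated sample to its nearest training sample.

    Low scores indicate the generator is memorizing the training set
    rather than producing novel samples.
    """
    # Pairwise Euclidean distances: (n_gen, n_train) matrix.
    diffs = generated[:, None, :] - training[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1)

# Toy example: 3 training samples, 4 generated samples, 2-D features.
train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
gen = np.array([[0.0, 0.1], [1.0, 0.0], [0.9, 0.1], [5.0, 5.0]])
print(novelty_scores(gen, train))
```

Here the second generated sample exactly coincides with a training sample (score 0.0, i.e., no novelty), while the last lies far from all training data. In practice the features would come from a pretrained embedding network rather than raw coordinates.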
Benchmark Tests of Atom Segmentation Deep Learning Models with a Consistent Dataset
Abstract: The information content of atomic-resolution scanning transmission electron microscopy (STEM) images can often be reduced to a handful of parameters describing each atomic column, chief among which is the column position. Neural networks (NNs) are high-performance, computationally efficient methods for automatically locating atomic columns in images, which has led to a profusion of NN models and associated training datasets. We have developed a benchmark dataset of simulated and experimental STEM images and used it to evaluate the performance of two sets of recent NN models for atom location in STEM images. Both models exhibit high performance for images of varying quality from several different crystal lattices. However, there are important differences in performance as a function of image quality, and both models perform poorly for images outside the training data, such as interfaces with large differences in background intensity. Both the benchmark dataset and the models are available through the Foundry service for dissemination, discovery, and reuse of machine learning models.
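The abstract does not state its matching criterion for scoring predicted column positions against ground truth. A standard evaluation, sketched below under the assumption of a fixed distance tolerance (the 1-pixel value here is hypothetical), greedily pairs each prediction with the nearest unmatched ground-truth column and reports precision and recall:

```python
import numpy as np

def match_columns(pred, truth, tol=1.0):
    """Greedily match predicted atomic-column positions to ground truth.

    A prediction is a true positive if it lies within `tol` (pixels)
    of a not-yet-matched ground-truth column.
    """
    truth_used = np.zeros(len(truth), dtype=bool)
    tp = 0
    for p in pred:
        d = np.linalg.norm(truth - p, axis=1)
        d[truth_used] = np.inf  # each ground-truth column matches once
        j = int(np.argmin(d))
        if d[j] <= tol:
            truth_used[j] = True
            tp += 1
    precision = tp / len(pred) if len(pred) else 0.0
    recall = tp / len(truth) if len(truth) else 0.0
    return precision, recall

truth = np.array([[10.0, 10.0], [20.0, 10.0], [30.0, 10.0]])
pred = np.array([[10.2, 9.9], [20.5, 10.4], [50.0, 50.0]])
print(match_columns(pred, truth))  # two hits, one spurious, one missed
```

In this toy case two of three predictions land within tolerance and one ground-truth column is missed, giving precision = recall = 2/3.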
- Award ID(s): 1931298
- PAR ID: 10465237
- Date Published:
- Journal Name: Microscopy and Microanalysis
- Volume: 29
- Issue: 2
- ISSN: 1431-9276
- Page Range / eLocation ID: 552 to 562
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This work presents a novel deep learning architecture called BNU-Net for cardiac segmentation of short-axis MRI images. Its name derives from applying Batch Normalization (BN) to the U-Net architecture for medical image segmentation. Convolutional neural networks (CNNs) such as U-Net are widely used for image classification tasks; they are supervised models trained to learn hierarchies of features automatically and to perform classification robustly. Our architecture consists of an encoding path for feature extraction and a decoding path that enables precise localization. We compare this approach with U-Net. Both BNU-Net and U-Net are cardiac segmentation approaches: BNU-Net applies batch normalization to the output of each convolutional layer and uses the exponential linear unit (ELU) as its activation function, whereas U-Net omits batch normalization and uses rectified linear units (ReLU). The presented work (i) applies various image preprocessing techniques, including affine transformations and elastic deformations, and (ii) segments the preprocessed images using the new deep learning architecture. We evaluate our approach on a dataset containing 805 MRI images from 45 patients. The experimental results reveal that our approach achieves comparable or better performance than other state-of-the-art approaches in terms of the Dice coefficient and the average perpendicular distance. Index Terms: Magnetic Resonance Imaging; Batch Normalization; Exponential Linear Units
-
For decades, atomistic modeling has played a crucial role in predicting the behavior of materials in numerous fields ranging from nanotechnology to drug discovery. The most accurate methods in this domain are rooted in first-principles quantum mechanical calculations such as density functional theory (DFT). Because these methods have remained computationally prohibitive, practitioners have traditionally focused on defining physically motivated closed-form expressions known as empirical interatomic potentials (EIPs) that approximately model the interactions between atoms in materials. In recent years, neural network (NN)-based potentials trained on quantum mechanical (DFT-labeled) data have emerged as a more accurate alternative to conventional EIPs. However, the generalizability of these models relies heavily on the amount of labeled training data, which is often still insufficient to produce models suitable for general-purpose applications. In this paper, we propose two generic strategies that take advantage of unlabeled training instances to inject domain knowledge from conventional EIPs into NNs in order to increase their generalizability. The first strategy, based on weakly supervised learning, trains an auxiliary classifier on EIPs and selects the best-performing EIP to generate energies that supplement the ground-truth DFT energies in training the NN. The second strategy, based on transfer learning, first pretrains the NN on a large set of easily obtainable EIP energies and then fine-tunes it on ground-truth DFT energies. Experimental results on three benchmark datasets demonstrate that the first strategy improves baseline NN performance by 5% to 51%, while the second improves baseline performance by up to 55%. Combining them further boosts performance.
-
Faggioli, G; Ferro, N; Galuščáková, P; Herrera, A (Ed.) In the ever-changing realm of medical image processing, ImageCLEF brought a new dimension with the Identifying GAN Fingerprint task, catering to the advancement of visual media analysis. This year, the task of detecting training-image fingerprints to control the quality of synthetic images was presented for the second time (as Task 1), and the task of detecting generative-model fingerprints was introduced for the first time (as Task 2). Both tasks aim at discerning these fingerprints from images, covering both real training images and the generative models. The dataset comprised 3D CT images of lung tuberculosis patients, with the development dataset featuring a mix of real and generated images, alongside a separate test dataset. Our team 'CSMorgan' contributed several approaches, leveraging multiformer networks (combined features extracted using BLIP2 and DINOv2), additive and mode thresholding techniques, and late-fusion methodologies, bolstered by morphological operations. In Task 1, our best performance was attained through a late-fusion-based reranking strategy, achieving an F1 score of 0.51, while the additive average thresholding approach followed closely with a score of 0.504. In Task 2, our multiformer model garnered an Adjusted Rand Index (ARI) score of 0.90, and a fine-tuned variant of the multiformer yielded a score of 0.8137. These outcomes underscore the efficacy of the multiformer-based approach in accurately discerning both real-image and generative-model fingerprints.
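The Adjusted Rand Index reported for Task 2 above measures agreement between two clusterings (here, predicted versus true groupings of images by generative model), corrected for chance. A minimal self-contained implementation of the standard formula:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index between two clusterings of the same items.

    1.0 means identical partitions (up to label renaming); values near
    0 mean chance-level agreement. Assumes both clusterings are
    non-trivial (neither all-one-cluster nor all-singletons).
    """
    n = len(labels_a)
    # Contingency counts: pairs (cluster in A, cluster in B).
    contingency = Counter(zip(labels_a, labels_b))
    sum_nij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_nij - expected) / (max_index - expected)

# Identical partitions score 1.0 even with permuted labels.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```

Note that the index is invariant to relabeling the clusters, which is why the ground-truth generative model IDs and the predicted cluster IDs need not use the same label set.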