Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate. Segmentation models trained using supervised machine learning can excel at this task, but their effectiveness is determined by the degree of overlap between the narrow distributions of image properties defined by the target dataset and by the highly specific training datasets, of which there are few. Attempts to broaden the distribution of existing eye image datasets through the inclusion of synthetic eye images have found that a model trained on synthetic images will often fail to generalize back to real-world eye images. To remedy this, we use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data, and to prune the training dataset in a manner that maximizes distribution overlap. We demonstrate that our methods yield robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
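The abstract above describes using dimensionality reduction to measure the overlap between target and synthetic eye images and to prune the synthetic training set accordingly. The following is a minimal sketch of that general idea under assumed choices (PCA on flattened images, a nearest-neighbor distance criterion, and a fixed keep fraction); it is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def prune_synthetic(target_imgs, synth_imgs, n_components=32, keep_frac=0.5):
    """Keep the synthetic images that lie closest to the target
    distribution in a PCA-reduced feature space (illustrative only)."""
    # Flatten images into feature vectors (assumes equal image shapes).
    X_target = np.asarray([im.ravel() for im in target_imgs], dtype=np.float32)
    X_synth = np.asarray([im.ravel() for im in synth_imgs], dtype=np.float32)

    # Fit the low-dimensional embedding on the target images only,
    # then project both sets into the same space.
    pca = PCA(n_components=n_components).fit(X_target)
    Z_target = pca.transform(X_target)
    Z_synth = pca.transform(X_synth)

    # Score each synthetic sample by its distance to the nearest target sample.
    nn = NearestNeighbors(n_neighbors=1).fit(Z_target)
    dist, _ = nn.kneighbors(Z_synth)
    dist = dist.ravel()

    # Keep the fraction of synthetic images with the smallest distances,
    # i.e. those overlapping the target distribution the most.
    n_keep = int(keep_frac * len(X_synth))
    return np.argsort(dist)[:n_keep]
```

The returned indices would then select which synthetic images enter training, so that the retained synthetic distribution overlaps the target distribution as much as possible.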
Deep learning for the automation of particle analysis in catalyst layers for polymer electrolyte fuel cells
The rapidly growing use of imaging infrastructure in the energy materials domain drives significant data accumulation in terms of amount and complexity. The application of routine image-processing techniques in materials research is often ad hoc, indiscriminate, and empirical, which renders the crucial task of obtaining reliable metrics for quantification obscure. Moreover, these techniques are expensive, slow, and often involve several preprocessing steps. This paper presents a novel deep learning-based approach for the high-throughput analysis of particle size distributions from transmission electron microscopy (TEM) images of carbon-supported catalysts for polymer electrolyte fuel cells. A dataset of 40 high-resolution TEM images at different magnification levels, from 10 to 100 nm scales, was annotated manually. This dataset was used to train a U-Net model, with the StarDist formulation for the loss function, for the nanoparticle segmentation task. StarDist reached a precision of 86%, a recall of 85%, and an F1-score of 85% when trained on datasets as small as thirty images. The segmentation maps outperform models reported in the literature for a similar problem, and the resulting particle size analyses agree well with manual particle size measurements, albeit at a significantly lower cost.
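The abstract above trains a U-Net with the StarDist loss formulation for nanoparticle instance segmentation. The snippet below is a minimal sketch of how such a model is commonly trained and applied with the open-source stardist package; the configuration values, normalization percentiles, data split, and variable names are assumptions rather than the paper's actual setup.

```python
import numpy as np
from csbdeep.utils import normalize
from stardist.models import Config2D, StarDist2D

def train_particle_model(images, masks, basedir="models"):
    """Train a StarDist2D model on annotated TEM images (illustrative).
    `images` are single-channel arrays, `masks` matching integer label maps."""
    X = [normalize(x, 1, 99.8) for x in images]      # percentile normalization
    n_val = max(1, len(X) // 5)                      # simple 80/20 split (assumed)
    conf = Config2D(n_rays=32, grid=(2, 2), n_channel_in=1)
    model = StarDist2D(conf, name="tem_particles", basedir=basedir)
    model.train(X[n_val:], masks[n_val:],
                validation_data=(X[:n_val], masks[:n_val]), epochs=100)
    return model

def particle_sizes(model, image):
    """Predict instances and return per-particle pixel areas."""
    labels, _ = model.predict_instances(normalize(image, 1, 99.8))
    return np.bincount(labels.ravel())[1:]           # skip background label 0
```

The per-particle pixel areas from `predict_instances` can then be converted to physical sizes using the image scale to build the particle size distribution.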
- Award ID(s): 2046060
- PAR ID: 10432672
- Date Published:
- Journal Name: Nanoscale
- Volume: 14
- Issue: 1
- ISSN: 2040-3364
- Page Range / eLocation ID: 10 to 18
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In the field of materials science, microscopy is the first and often only accessible method for structural characterization. There is a growing interest in the development of machine learning methods that can automate the analysis and interpretation of microscopy images. Training machine learning models typically requires large numbers of images with associated structural labels; however, manual labeling of images requires domain knowledge and is prone to human error and subjectivity. To overcome these limitations, we present a semi-supervised transfer learning approach that uses a small number of labeled microscopy images for training and performs as effectively as methods trained on significantly larger image datasets. Specifically, we train an image encoder with unlabeled images using self-supervised learning methods and use that encoder for transfer learning of different downstream image tasks (classification and segmentation) with a minimal number of labeled images for training. We test the transfer learning ability of two self-supervised learning methods, SimCLR and Barlow Twins, on transmission electron microscopy (TEM) images. We demonstrate in detail how this machine learning workflow applied to TEM images of protein nanowires enables automated classification of nanowire morphologies (e.g., single nanowires, nanowire bundles, phase separated) as well as segmentation tasks that can serve as groundwork for quantification of nanowire domain sizes and shape analysis. We also extend the application of the machine learning workflow to classification of nanoparticle morphologies and identification of different types of viruses from TEM images. (A hedged sketch of this transfer-learning workflow appears after this list.)
- Tomaszewski, John E.; Ward, Aaron D. (Ed.) Automatic cell quantification in microscopy images can accelerate biomedical research. There has been significant progress in the 3D segmentation of neurons in fluorescence microscopy. However, it remains a challenge in bright-field microscopy due to the low signal-to-noise ratio and signals from out-of-focus neurons. Automatic neuron counting in bright-field z-stacks is often performed on Extended Depth of Field images or on only one thick focal-plane image. However, resolving overlapping cells located at different z-depths is a challenge. The overlap can be resolved by counting every neuron in its best-focus z-plane, because of the separation of cells along the z-axis. Unbiased stereology is the state of the art for total cell number estimation, and its unbiased counting rule requires a segmentation boundary for each cell; hence, we perform counting via segmentation. We propose to achieve neuron segmentation in the optimal focal plane by posing the binary segmentation task as a multi-class, multi-label task, and to efficiently use a 2D U-Net for inter-image feature learning in a Multiple Input Multiple Output (MIMO) system (a hedged sketch of one reading of this formulation appears after this list). We demonstrate the accuracy and efficiency of the MIMO approach using a bright-field microscopy z-stack dataset prepared locally by an expert. The proposed MIMO approach is also validated on a dataset from the Cell Tracking Challenge, achieving results comparable to a compared method equipped with memory units. Our z-stack dataset is available at
- Understanding how particle size and morphology influence ion insertion dynamics is critical for a wide range of electrochemical applications, including energy storage and electrochromic smart windows. One strategy to reveal such structure-property relationships is to perform ex situ transmission electron microscopy (TEM) of nanoparticles that have been cycled on TEM grid electrodes. One drawback of this approach is that images of some particles are correlated with the electrochemical response of the entire TEM grid electrode. The lack of one-to-one electrochemical-to-structural information complicates interpretation of genuine structure-property relationships. Developing high-throughput ex situ single-particle-level analytical techniques that effectively link electrochemical behavior with structural properties could accelerate the discovery of critical structure-property relationships. Here, using Li-ion insertion in WO3 nanorods as a model system, we demonstrate a correlated optically detected electrochemistry and TEM technique that measures the electrochemical behavior of many particles simultaneously without having to make electrical contacts to single particles on the TEM grid. This correlated optical-TEM approach can link particle structure with electrochemical behavior at the single-particle level. Our measurements revealed significant heterogeneity in electrochemical activity among particles. Single-particle activity correlated with distinct local mechanical or electrical properties of the amorphous carbon film of the TEM grid, leading to active and inactive particles. The results are significant for correlated electrochemical/TEM imaging studies that aim to reveal structure-property relationships using single-particle-level imaging and ensemble-level electrochemistry.
- Segmenting autophagic bodies in yeast TEM images is a key technique for measuring changes in autophagosome size and number in order to better understand macroautophagy/autophagy. Manual segmentation of these images can be very time consuming, particularly because hundreds of images are needed for accurate measurements. Here we describe a validated Cellpose 2.0 model that can segment these images with accuracy comparable to that of human experts. This model can be used for fully automated segmentation, eliminating the need for manual body outlining, or for model-assisted segmentation, which allows human oversight but is still five times as fast as the current manual method. The model is specific to segmentation of autophagic bodies in yeast TEM images, but researchers working in other systems can use a similar process to generate their own Cellpose 2.0 models for automated segmentation. Our model and instructions for its use are presented here for the autophagy community. (A hedged sketch of loading and applying such a custom Cellpose model appears after this list.)
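The first related abstract (semi-supervised transfer learning for TEM images) pretrains an image encoder on unlabeled images and reuses it for downstream tasks with few labels. The sketch below illustrates the downstream step only, in PyTorch, by freezing an encoder and training a small linear head; the ResNet-50 backbone, checkpoint handling, class count, and hyperparameters are assumptions and not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_linear_probe(num_classes=3, encoder_weights=None):
    """Frozen encoder + trainable linear head for few-label TEM classification."""
    encoder = models.resnet50(weights=None)
    if encoder_weights is not None:
        # Load weights produced by self-supervised pretraining (e.g. SimCLR
        # or Barlow Twins) on unlabeled TEM images -- an assumed checkpoint path.
        encoder.load_state_dict(torch.load(encoder_weights), strict=False)
    encoder.fc = nn.Identity()           # expose 2048-d features
    for p in encoder.parameters():
        p.requires_grad = False          # encoder stays frozen
    head = nn.Linear(2048, num_classes)
    return nn.Sequential(encoder, head)

def train_head(model, loader, epochs=20, lr=1e-3):
    """Fine-tune only the linear head on a small labeled image set."""
    encoder, head = model[0], model[1]
    encoder.eval()                       # keep frozen BatchNorm statistics fixed
    head.train()
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:    # loader yields (B, 3, H, W) tensors
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```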
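The second related abstract poses bright-field z-stack neuron segmentation as a multi-class, multi-label task solved by a 2D U-Net in a Multiple Input Multiple Output arrangement. The sketch below shows one plausible reading of that formulation: K neighboring z-planes enter as input channels and the network emits one binary mask per plane under a sigmoid/BCE loss. The U-Net implementation (segmentation_models_pytorch), plane count, and loss choice are assumptions, not the cited MIMO architecture.

```python
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

K = 5  # number of neighboring z-planes fed in (and predicted) together (assumed)

# A 2D U-Net that takes K stacked z-planes as channels and emits K logit maps,
# one per plane, so each pixel can be labeled "in-focus neuron" in any subset
# of planes (multi-label).
unet = smp.Unet(encoder_name="resnet18", encoder_weights=None,
                in_channels=K, classes=K)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(unet.parameters(), lr=1e-4)

def train_step(zstack_batch, mask_batch):
    """zstack_batch: (B, K, H, W) grayscale planes; mask_batch: (B, K, H, W)
    binary masks of neurons in their best-focus plane."""
    opt.zero_grad()
    loss = loss_fn(unet(zstack_batch), mask_batch.float())
    loss.backward()
    opt.step()
    return loss.item()

def predict_masks(zstack):
    """Return per-plane binary masks for a (1, K, H, W) input."""
    unet.eval()
    with torch.no_grad():
        return torch.sigmoid(unet(zstack)) > 0.5
```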
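The last related abstract describes a custom Cellpose 2.0 model for autophagic bodies in yeast TEM images. The snippet below is a minimal sketch of how a custom pretrained Cellpose model is typically loaded and applied with the cellpose package; the model path, image file, and channel settings are placeholders rather than the authors' published model.

```python
import numpy as np
from cellpose import models, io

# Path to a custom-trained Cellpose 2.0 model (placeholder).
model = models.CellposeModel(gpu=False, pretrained_model="models/autophagic_bodies")

img = io.imread("yeast_tem_image.tif")   # single-channel TEM image (placeholder)

# channels=[0, 0] -> grayscale input, no separate nuclear channel.
masks, flows, styles = model.eval(img, diameter=None, channels=[0, 0])

# Per-body pixel areas from the integer instance mask (label 0 = background),
# usable for size and count statistics in autophagy measurements.
areas = np.bincount(masks.ravel())[1:]
print(f"{masks.max()} autophagic bodies segmented")
```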