Structured illumination microscopy (SIM) reconstructs optically-sectioned images of a sample from multiple spatially-patterned wide-field images, but a traditional single non-patterned wide-field image is less expensive to obtain because it does not require generation of specialized illumination patterns. In this work, we translated wide-field fluorescence microscopy images to optically-sectioned SIM images using a Pix2Pix conditional generative adversarial network (cGAN). Our model demonstrates 2D cross-modality image translation from wide-field images to optical sections, and further shows potential to recover 3D optically-sectioned volumes from wide-field image stacks. The utility of the model was tested on a variety of samples, including fluorescent beads and fresh human tissue.
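For context, the standard Pix2Pix objective (Isola et al.), which this kind of translation model optimizes, pairs a conditional adversarial term with an L1 reconstruction term; here x would be the wide-field input, y the optically-sectioned target, and G, D the generator and discriminator:

```latex
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G),
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\!\left[\lVert y - G(x) \rVert_{1}\right]
```

The L1 term penalizes low-frequency blur while the adversarial term encourages realistic high-frequency structure; the weighting λ is a tuning choice (the original Pix2Pix work used λ = 100).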
A computer vision enhanced smart phone platform for microfluidic urine glucometry
An innovative disposable microfluidic device was designed, fabricated, and mounted in a 3D-printed chassis to capture images. The images were processed by a custom detector that automatically identifies target glucose strips and extracts their colorimetric values.
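A colorimetric readout of this kind typically maps the detected strip's mean color to a concentration via a calibration chart. The sketch below is a hypothetical minimal version using nearest-neighbor lookup; the calibration colors and values are illustrative, not taken from the paper.

```python
# Hypothetical sketch: map a strip's mean RGB to a glucose value by
# nearest-neighbour lookup against a calibration chart.
# The (R, G, B) -> mg/dL entries below are illustrative only.
CALIBRATION = {
    (90, 160, 150): 0,
    (70, 150, 120): 100,
    (60, 120, 90): 250,
    (40, 80, 60): 500,
}

def estimate_glucose(rgb):
    """Return the glucose level of the closest calibration colour."""
    def dist2(a, b):
        # Squared Euclidean distance in RGB space.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CALIBRATION.items(), key=lambda kv: dist2(kv[0], rgb))[1]
```

A real pipeline would add white-balance correction and interpolate between chart entries rather than snapping to the nearest one.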
- Award ID(s):
- 1846740
- PAR ID:
- 10501867
- Publisher / Repository:
- Analyst
- Date Published:
- Journal Name:
- The Analyst
- Volume:
- 149
- Issue:
- 6
- ISSN:
- 0003-2654
- Page Range / eLocation ID:
- 1719 to 1726
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Underwater image enhancement is often perceived as detrimental to object detection. We propose a novel analysis of the interactions between enhancement and detection, elaborating on the potential of enhancement to improve detection. In particular, we evaluate object detection performance for each individual image rather than across the entire set, allowing a direct before-and-after performance comparison for each image. This approach enables the generation of unique queries to identify which enhanced images outperform or underperform their original counterparts. To accomplish this, we first produce enhanced versions of the original images using recent image enhancement models. Each enhanced set is then divided into two groups: (1) images that outperform or match the performance of the original images and (2) images that underperform. Subsequently, we create mixed original-enhanced sets by replacing underperforming enhanced images with their corresponding original images. Next, we conduct a detailed analysis by evaluating all generated groups for quality and detection-performance attributes. Finally, we perform an overlap analysis between the enhanced sets to identify cases where the enhanced images of different enhancement algorithms unanimously outperform, equally perform, or underperform the original images. Our analysis reveals that, when evaluated individually, most enhanced images achieve equal or superior performance compared to their original counterparts. The proposed per-image evaluation uncovers variations in detection performance that are hidden by whole-set evaluation: only a small percentage of enhanced images cause the overall negative impact on detection. We also find that over-enhancement may lead to deteriorated object detection performance.
Lastly, we note that enhanced images reveal hidden objects that were not annotated due to the low visibility of the original images.
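The grouping and mixed-set construction described above can be sketched directly; the function and variable names here are ours, and the per-image detection score (e.g. AP) is assumed to be precomputed.

```python
# Sketch of the per-image comparison: given per-image detection scores
# for the original set and one enhanced set, split the enhanced images
# into outperform/match vs. underperform groups and build the mixed
# original-enhanced set by falling back to originals where enhancement hurt.
def build_mixed_set(orig_scores, enh_scores):
    """orig_scores / enh_scores: {image_id: detection score}."""
    outperform, underperform, mixed = {}, {}, {}
    for img, s_orig in orig_scores.items():
        s_enh = enh_scores[img]
        if s_enh >= s_orig:          # enhanced matches or beats original
            outperform[img] = s_enh
            mixed[img] = ("enhanced", s_enh)
        else:                        # keep the original image instead
            underperform[img] = s_enh
            mixed[img] = ("original", s_orig)
    return outperform, underperform, mixed
```

Running this once per enhancement algorithm and intersecting the resulting groups gives the overlap analysis the abstract describes.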
-
Abstract We demonstrate the use of an eigenbasis that is derived from principal component analysis (PCA) applied to an ensemble of random-noise images that have a "red" power spectrum; i.e., a spectrum that decreases smoothly from large to small spatial scales. The pattern of the resulting eigenbasis allows for the reconstruction of images with a broad range of image morphologies. In particular, we show that this general eigenbasis can be used to efficiently reconstruct images that resemble possible astronomical sources for interferometric observations, even though the images in the original ensemble used to generate the PCA basis are significantly different from the astronomical images. We further show that the efficiency and fidelity of the image reconstructions depend only weakly on the particular parameters of the red-noise power spectrum used to generate the ensemble of images.
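A toy version of this pipeline can be sketched in one dimension: a random walk has a red (1/f²) power spectrum, so an ensemble of random walks stands in for the red-noise images, and power iteration extracts the leading PCA eigenvector. The 1-D setting and all names here are ours, not the paper's.

```python
# Toy 1-D stand-in for the red-noise PCA basis construction.
import random

def red_noise(n, rng):
    """Random walk: cumulative sum of white noise has a red (1/f^2) spectrum."""
    x, walk = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        walk.append(x)
    return walk

def leading_eigenvector(samples, iters=200):
    """Power iteration on the mean-subtracted sample covariance."""
    n = len(samples[0])
    mean = [sum(col) / len(samples) for col in zip(*samples)]
    centred = [[s[i] - mean[i] for i in range(n)] for s in samples]
    v = [1.0] * n
    for _ in range(iters):
        # w = C v, computed as X^T (X v) without forming C explicitly.
        proj = [sum(r[i] * v[i] for i in range(n)) for r in centred]
        w = [sum(p * r[i] for p, r in zip(proj, centred)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

rng = random.Random(0)
ensemble = [red_noise(32, rng) for _ in range(64)]
v1 = leading_eigenvector(ensemble)  # first basis vector; repeat with deflation for more
```

The full 2-D version would replace the random walks with Fourier-filtered noise images and keep many eigenimages, but the structure of the computation is the same.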
-
Image-based localization has been widely used for autonomous vehicles, robotics, augmented reality, etc., and is carried out by matching a query image taken from a cell phone or vehicle dashcam against a large set of geo-tagged reference images, such as satellite/aerial images or Google Street Views. However, the problem remains challenging due to the inconsistency between the query images and the large-scale reference datasets under varying light and weather conditions. To tackle this issue, this work proposes a novel view synthesis framework equipped with deep generative models, which can merge the unique features of the outdated reference dataset with features from images containing seasonal changes. Our design features a unique scheme to ensure that the synthesized images contain the important features from both reference and patch images, covering seasonal features and minimizing the gap for image-based localization tasks. The performance evaluation shows that the proposed framework can synthesize views under various weather and lighting conditions.
-
Abstract Automated particle segmentation and feature analysis of experimental image data are indispensable for data-driven materials science. Deep learning-based image segmentation algorithms are promising techniques toward this goal but are challenging to use due to the need for a large number of training images. In the present work, synthetic images resembling the experimental images in geometrical and visual features are applied to train a state-of-the-art Mask region-based convolutional neural network (Mask R-CNN) to segment vanadium pentoxide nanowires, a cathode material, in optical-density-based images acquired using spectromicroscopy. The results demonstrate the model's instance-segmentation power on real optical-intensity-based spectromicroscopy images of complex nanowires in overlapping networks and provide reliable statistical information. The model can further be used to segment nanowires in scanning electron microscopy images, which are fundamentally different from the training dataset known to the model. The proposed methodology can be extended to any optical-intensity-based images of variable particle morphology, material class, and beyond.
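The synthetic-training idea above can be illustrated with a minimal sketch: rasterize random "nanowire" line segments onto a small grid and record a per-wire binary mask (the instance label a Mask R-CNN would train against). All parameters and helper names here are ours, chosen for illustration.

```python
# Illustrative synthetic instance-segmentation data: random line segments
# as stand-in nanowires, each with its own pixel mask; overlapping wires
# merge in the composite image, as in the networks described above.
import random

def draw_wire(grid_size, rng):
    """Return the set of pixels covered by one random line segment."""
    x0, y0 = rng.randrange(grid_size), rng.randrange(grid_size)
    x1, y1 = rng.randrange(grid_size), rng.randrange(grid_size)
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return {(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps)) for t in range(steps + 1)}

def synthetic_sample(grid_size=64, n_wires=5, seed=0):
    """One synthetic image (as a pixel set) plus its per-instance masks."""
    rng = random.Random(seed)
    masks = [draw_wire(grid_size, rng) for _ in range(n_wires)]
    image = set().union(*masks)   # overlapping wires merge in the image
    return image, masks

image, masks = synthetic_sample()
```

A production generator would render grayscale intensity with realistic wire widths, blur, and noise to match the experimental optical-density images, but the image/mask pairing is the essential structure.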