We propose a novel cell segmentation approach that extracts Multi-exposure Maximally Stable Extremal Regions (MMSER) from phase contrast microscopy images of the same cell dish acquired at different camera exposure times. With our method, cell regions are identified as the regions that remain maximally stable across exposure times. Meanwhile, the halo artifacts surrounding cells at different stages are leveraged to classify each cell's stage. Experimental results validate that our approach achieves high-quality cell segmentation and cell stage classification.
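The stability criterion behind MMSER can be illustrated with a minimal sketch (not the authors' implementation; the threshold and the area-change tolerance are made-up parameters): foreground regions whose thresholded footprint stays nearly constant as exposure varies are kept as cell candidates.

```python
import numpy as np

def stable_region_mask(frames, threshold=0.5, max_area_change=0.2):
    """Keep foreground whose thresholded area is stable across exposures.

    frames: list of intensity images of the same scene at different
    (simulated) exposure times, values in [0, 1].
    """
    masks = [f > threshold for f in frames]
    areas = np.array([m.sum() for m in masks])
    rel_change = (areas.max() - areas.min()) / max(areas.max(), 1)
    if rel_change <= max_area_change:
        # Stable across exposures: intersect the masks.
        return np.logical_and.reduce(masks)
    return np.zeros_like(masks[0])

# Toy example: a bright "cell" whose thresholded footprint barely changes
# as a simulated exposure gain scales its intensity.
base = np.zeros((8, 8))
base[2:6, 2:6] = 1.0
frames = [np.clip(base * g, 0, 1) for g in (0.8, 1.0, 1.2)]
mask = stable_region_mask(frames)
```

A real MSER-style implementation sweeps a continuum of thresholds and tracks per-region area derivatives; this sketch only captures the stable-across-exposures idea on a single global threshold.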
Synchronized strobed phase contrast and fluorescence microscopy: the interlaced standard reimagined
We propose a simple, cost-effective method for synchronized phase contrast and fluorescence video acquisition in live samples. Counter-phased pulses of phase contrast illumination and fluorescence excitation light are synchronized with the exposure of the two fields of an interlaced camera sensor. This results in a video sequence in which each frame contains both exposure modes, each in half of its pixels. The method allows real-time acquisition and display of synchronized and spatially aligned phase contrast and fluorescence image sequences that can be separated by de-interlacing in two independent videos. The method can be implemented on any fluorescence microscope with a camera port without needing to modify the optical path.
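Separating the two exposure modes from an interlaced frame reduces to splitting the even and odd scan lines. A minimal sketch (which field carries which mode depends on the hardware synchronization, so the assignment below is an assumption):

```python
import numpy as np

def deinterlace(frame):
    """Split an interlaced frame into its two exposure modes.

    Even scan lines are assumed to hold phase contrast, odd lines
    fluorescence. Each field is duplicated line-wise to restore the
    full frame height (simple line doubling; interpolation is also common).
    """
    phase = frame[0::2, :]   # even lines: phase contrast (assumed)
    fluo = frame[1::2, :]    # odd lines: fluorescence (assumed)
    return np.repeat(phase, 2, axis=0), np.repeat(fluo, 2, axis=0)

# Toy 4x3 frame: even rows carry one mode (value 1), odd rows the other (9).
frame = np.array([[1, 1, 1],
                  [9, 9, 9],
                  [1, 1, 1],
                  [9, 9, 9]])
p, f = deinterlace(frame)
```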
- Award ID(s):
- 2146519
- PAR ID:
- 10394224
- Publisher / Repository:
- Optical Society of America
- Date Published:
- Journal Name:
- Optics Express
- Volume:
- 31
- Issue:
- 4
- ISSN:
- 1094-4087; OPEXFF
- Format(s):
- Medium: X
- Size(s):
- Article No. 5167
- Sponsoring Org:
- National Science Foundation
More Like this
-
This is the first of two articles on the Extant Life Volumetric Imaging System (ELVIS), which combines a digital holographic microscope (DHM) and a fluorescence light-field microscope (FLFM). The instrument is modular and robust enough for field use. Each mode uses its own illumination source and camera, but both microscopes share a common objective lens and sample viewing chamber, allowing correlative volumetric imaging in amplitude, quantitative phase, and fluorescence modes. A detailed schematic and parts list is presented, as well as links to open-source software packages for data acquisition and analysis that permit interested researchers to duplicate the design. Instrument performance is quantified using test targets and beads. In the second article on ELVIS, to be published in the next issue of Microscopy Today, analysis of data from field tests and images of microorganisms will be presented.
-
Abstract Due to its specificity, fluorescence microscopy has become a quintessential imaging tool in cell biology. However, photobleaching, phototoxicity, and related artifacts continue to limit fluorescence microscopy’s utility. Recently, it has been shown that artificial intelligence (AI) can transform one form of contrast into another. We present phase imaging with computational specificity (PICS), a combination of quantitative phase imaging and AI, which provides information about unlabeled live cells with high specificity. Our imaging system allows for automatic training, while inference is built into the acquisition software and runs in real-time. Applying the computed fluorescence maps back to the quantitative phase imaging (QPI) data, we measured the growth of both nuclei and cytoplasm independently, over many days, without loss of viability. Using a QPI method that suppresses multiple scattering, we measured the dry mass content of individual cell nuclei within spheroids. In its current implementation, PICS offers a versatile quantitative technique for continuous simultaneous monitoring of individual cellular components in biological applications where long-term label-free imaging is desirable.
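The dry-mass measurement mentioned above follows the standard QPI relation m = (λ / 2πγ) ∫ φ dA, where φ is the phase map in radians and γ ≈ 0.2 mL/g is the protein refractive increment. A minimal sketch (the wavelength, pixel size, and γ below are example values, not taken from this paper):

```python
import numpy as np

def dry_mass_pg(phase, pixel_area_um2, wavelength_um=0.55, gamma=0.2):
    """Integrate a QPI phase map (radians) into dry mass in picograms.

    m = (lambda / (2*pi*gamma)) * sum(phase) * pixel_area
    gamma is the refractive increment, ~0.2 mL/g, which equals 0.2 um^3/pg,
    so the prefactor has units pg/um^2 per radian.
    """
    prefactor = wavelength_um / (2 * np.pi * gamma)
    return prefactor * phase.sum() * pixel_area_um2

# Example: a uniform 1-radian phase over 4 pixels of 1 um^2 each.
mass = dry_mass_pg(np.ones((2, 2)), pixel_area_um2=1.0)
```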
-
Abstract The rapid development of scientific CMOS (sCMOS) technology has greatly advanced optical microscopy for biomedical research with superior sensitivity, resolution, field-of-view, and frame rates. However, in sCMOS sensors, the parallel charge-voltage conversion and the differing responsivity at each pixel induce extra readout and pattern noise compared to charge-coupled device (CCD) and electron-multiplying CCD (EM-CCD) sensors. This can produce artifacts, deteriorate imaging capability, and hinder quantification of fluorescent signals, thereby compromising strategies to reduce photo-damage to live samples. Here, we propose a content-adaptive algorithm for the automatic correction of sCMOS-related noise (ACsN) in fluorescence microscopy. ACsN combines camera physics and layered sparse filtering to significantly reduce the most relevant noise sources in an sCMOS sensor while preserving the fine details of the signal. The method improves camera performance, enabling fast, low-light, quantitative optical microscopy with video-rate denoising for a broad range of imaging conditions and modalities.
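The "camera physics" part of such a pipeline typically starts with per-pixel calibration: each sCMOS pixel has its own offset (dark level) and gain (responsivity), measured once and then inverted per frame. A minimal sketch of that calibration step only (the sparse-filtering stage of ACsN is not reproduced here; all numbers are made up):

```python
import numpy as np

def correct_fixed_pattern(raw, offset, gain):
    """Remove per-pixel offset and responsivity variation.

    offset: dark-frame mean per pixel (ADU); gain: per-pixel responsivity
    (ADU per photoelectron). Returns an estimate of the true signal.
    """
    return (raw.astype(float) - offset) / gain

# Simulated 2x2 sensor: each pixel has its own offset and gain.
offset = np.array([[100.0, 102.0], [98.0, 101.0]])
gain = np.array([[2.0, 2.1], [1.9, 2.0]])
signal = np.full((2, 2), 10.0)           # true photoelectron count
raw = signal * gain + offset             # simulated noisy-free readout
est = correct_fixed_pattern(raw, offset, gain)
```

Without this step, the fixed-pattern differences between pixels masquerade as structure in the image, which is the artifact the abstract describes.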
-
Access to high-quality data is an important barrier in the digital analysis of urban settings, including applications within computer vision and urban design. Diverse forms of data collected from sensors in areas of high activity in the urban environment, particularly at street intersections, are valuable resources for researchers interpreting the dynamics between vehicles, pedestrians, and the built environment. In this paper, we present a high-resolution audio, video, and LiDAR dataset of three urban intersections in Brooklyn, New York, totaling almost 8 unique hours. The data were collected with custom Reconfigurable Environmental Intelligence Platform (REIP) sensors that were designed with the ability to accurately synchronize multiple video and audio inputs. The resulting data are novel in that they are inclusively multimodal, multi-angular, high-resolution, and synchronized. We demonstrate four ways the data could be utilized: (1) to discover and locate occluded objects using multiple sensors and modalities, (2) to associate audio events with their respective visual representations using both video and audio modes, (3) to track the amount of each type of object in a scene over time, and (4) to measure pedestrian speed using multiple synchronized camera views. In addition to these use cases, our data are available for other researchers to carry out analyses related to applying machine learning to understanding the urban environment (in which existing datasets may be inadequate), such as pedestrian-vehicle interaction modeling and pedestrian attribute recognition. Such analyses can help inform decisions made in the context of urban sensing and smart cities, including accessibility-aware urban design and Vision Zero initiatives.
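Use case (4), measuring pedestrian speed from synchronized views, reduces to distance over time once detections are timestamped and projected to world coordinates. A minimal sketch (the track format and projection step are assumptions, not the paper's pipeline):

```python
import math

def ground_speed_mps(track):
    """Average speed from a list of (t_seconds, x_m, y_m) ground-plane
    points. Positions are assumed already projected from synchronized
    camera views to world coordinates via camera calibration."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
    dt = track[-1][0] - track[0][0]
    return dist / dt

# A pedestrian covering 2.4 m in 2 s across timestamped detections.
track = [(0.0, 0.0, 0.0), (1.0, 1.2, 0.0), (2.0, 2.4, 0.0)]
speed = ground_speed_mps(track)
```

The hardware synchronization the abstract emphasizes is what makes the timestamps from different cameras directly comparable here.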