We consider the multi-target detection problem of estimating a two-dimensional target image from a large noisy measurement image that contains many randomly rotated and translated copies of the target image. Motivated by single-particle cryo-electron microscopy, we focus on the low signal-to-noise regime, where it is difficult to estimate the locations and orientations of the target images in the measurement. Our approach uses autocorrelation analysis to estimate rotationally and translationally invariant features of the target image. We demonstrate that, regardless of the level of noise, our technique can be used to recover the target image when the measurement is sufficiently large.
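As a loose illustration of the translation-invariant part of this idea, the sketch below (hypothetical code, not the authors' implementation) accumulates the second-order autocorrelation of a large measurement image via the FFT; the target size, copy count, and noise level are made-up demo values, and rotation invariance would require additional averaging not shown here.

    # Minimal sketch: translation-invariant autocorrelation of a noisy
    # measurement via FFT (Wiener-Khinchin). Hypothetical example only.
    import numpy as np

    def autocorrelation_2d(measurement):
        """Second-order autocorrelation of a 2-D measurement image."""
        m = measurement - measurement.mean()          # remove DC offset
        spectrum = np.abs(np.fft.fft2(m)) ** 2        # power spectrum
        return np.real(np.fft.ifft2(spectrum)) / m.size

    # Synthetic demo: many shifted copies of a small target plus heavy noise.
    rng = np.random.default_rng(0)
    target = np.zeros((15, 15))
    target[5:10, 5:10] = 1.0
    big = np.zeros((1024, 1024))
    for _ in range(300):                              # plant random copies
        y, x = rng.integers(0, 1024 - 15, size=2)
        big[y:y + 15, x:x + 15] += target
    noisy = big + rng.normal(scale=5.0, size=big.shape)   # low-SNR regime

    acf = autocorrelation_2d(noisy)
    print(acf.shape, acf[0, 0])   # central value ~ image variance (signal power + noise variance)

As the measurement grows, the autocorrelation averages over more planted copies, which is why the invariant features remain estimable even when individual copies cannot be located.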
- NSF-PAR ID: 10392903
- Date Published:
- Journal Name: Inverse Problems and Imaging
- Volume: 0
- Issue: 0
- ISSN: 1930-8337
- Page Range / eLocation ID: 0
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Fluorescence microscopy imaging speed is fundamentally limited by the measurement signal-to-noise ratio (SNR). To improve image SNR at a given acquisition rate, computational denoising techniques can be used to suppress noise. However, common techniques for estimating a denoised image from a single frame are either computationally expensive or rely on simple statistical noise models. These models assume Poisson or Gaussian noise statistics, which are not appropriate for many fluorescence microscopy applications that contain both quantum shot noise and electronic Johnson–Nyquist noise, and therefore exhibit a mixture of Poisson and Gaussian noise. In this paper, we show that convolutional neural networks (CNNs) trained on images with mixed Poisson–Gaussian noise can overcome the limitations of existing image denoising methods. The trained CNN is presented as an open-source ImageJ plugin that performs real-time image denoising (within tens of milliseconds) with superior performance (SNR improvement) compared to conventional fluorescence microscopy denoising methods. The method is validated on external datasets whose noise, contrast, structure, and imaging modalities lie outside the distribution of the training data, and it consistently achieves high-performance denoising in less time than other fluorescence microscopy denoising methods.
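A minimal sketch of how training pairs for such a denoiser might be synthesized, assuming a simple mixed Poisson–Gaussian sensor model (photon shot noise followed by additive Gaussian read noise); the gain and read-noise values below are illustrative and not taken from the paper.

    # Hypothetical sketch: generate (noisy, clean) training pairs under a
    # mixed Poisson-Gaussian noise model. Parameter values are illustrative.
    import numpy as np

    def add_mixed_noise(clean, photons_per_unit=30.0, read_noise_std=2.0, rng=None):
        """Corrupt a clean image with Poisson shot noise plus Gaussian read noise."""
        rng = rng or np.random.default_rng()
        expected_counts = clean * photons_per_unit              # expected photon counts
        shot = rng.poisson(expected_counts).astype(np.float64)  # quantum shot noise
        read = rng.normal(0.0, read_noise_std, clean.shape)     # read (Johnson-Nyquist-like) noise
        return (shot + read) / photons_per_unit                 # back to image units

    # Example: build a small batch of training pairs from clean reference frames.
    rng = np.random.default_rng(1)
    clean_frames = rng.uniform(0.0, 1.0, size=(8, 128, 128))    # stand-in clean images
    noisy_frames = np.stack([add_mixed_noise(f, rng=rng) for f in clean_frames])
    print(noisy_frames.shape)   # (8, 128, 128) -> CNN inputs; clean_frames -> targets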
-
Weak lensing measurements suffer from well-known shear estimation biases, which can be partially corrected for with the use of image simulations. In this work we present an analysis of simulated images that mimic Hubble Space Telescope/Advanced Camera for Surveys observations of high-redshift galaxy clusters, including cluster-specific issues such as non-weak shear and increased blending. Our synthetic galaxies have been generated to have similar observed properties as the background-selected source samples studied in the real images. First, we used simulations with galaxies placed on a grid to determine a revised signal-to-noise-dependent (S/N_KSB) correction for multiplicative shear measurement bias, and to quantify the sensitivity of our KSB+ bias calibration to mismatches of galaxy or PSF properties between the real data and the simulations. Next, we studied the impact of increased blending and light contamination from cluster and foreground galaxies, finding it to be negligible for high-redshift (z > 0.7) clusters, whereas shear measurements can be affected at the ∼1% level for lower-redshift clusters given their brighter member galaxies. Finally, we studied the impact of fainter neighbours and selection bias using a set of simulated images that mimic the positions and magnitudes of galaxies in Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) data, thereby including realistic clustering. While the initial SExtractor object detection causes a multiplicative shear selection bias of −0.028 ± 0.002, this is reduced to −0.016 ± 0.002 by further cuts applied in our pipeline. Given the limited depth of the CANDELS data, we compared our CANDELS-based estimate for the impact of faint neighbours on the multiplicative shear measurement bias to a grid-based analysis, to which we added clustered galaxies to even fainter magnitudes based on Hubble Ultra Deep Field data, yielding a refined estimate of ∼−0.013. Our sensitivity analysis suggests that our pipeline is calibrated to an accuracy of ∼0.015 once all corrections are applied, which is fully sufficient for current and near-future weak lensing studies of high-redshift clusters. As an application, we used it for a refined analysis of three highly relaxed clusters from the South Pole Telescope Sunyaev-Zeldovich survey, where we now included measurements down to the cluster core (r > 200 kpc) as enabled by our work. Compared to previously employed scales (r > 500 kpc), this tightens the cluster mass constraints by a factor of 1.38 on average.
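The multiplicative and additive shear biases quoted above are conventionally defined through g_obs ≈ (1 + m) g_true + c. As a loose illustration only (hypothetical code, not the authors' KSB+ calibration pipeline), the sketch below fits m and c by regressing measured shears against input shears from a toy simulation with artificially low scatter.

    # Hypothetical sketch: estimate multiplicative (m) and additive (c) shear
    # bias from simulated galaxies, assuming g_obs = (1 + m) * g_true + c + noise.
    import numpy as np

    def fit_shear_bias(g_true, g_obs):
        """Least-squares fit of g_obs = (1 + m) * g_true + c; returns (m, c)."""
        slope, intercept = np.polyfit(g_true, g_obs, deg=1)
        return slope - 1.0, intercept

    # Toy simulation: input shears plus a known bias and (unrealistically small) scatter.
    rng = np.random.default_rng(2)
    g_true = rng.uniform(-0.05, 0.05, size=100000)
    g_obs = (1.0 - 0.02) * g_true + 1e-4 + rng.normal(0.0, 0.02, g_true.size)

    m, c = fit_shear_bias(g_true, g_obs)
    print(f"m = {m:+.3f}, c = {c:+.1e}")   # expect m close to -0.02; c only roughly recovered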
-
We present six new time-delay measurements obtained from Rc-band monitoring data acquired at the Max Planck Institute for Astronomy (MPIA) 2.2 m telescope at La Silla Observatory between October 2016 and February 2020. The lensed quasars HE 0047−1756, WG 0214−2105, DES 0407−5006, 2M 1134−2103, PSJ 1606−2333, and DES 2325−5229 were observed almost daily at high signal-to-noise ratio to obtain high-quality light curves in which we can record fast, small-amplitude variations of the quasars. We measured time delays between all pairs of multiple images with only one or two seasons of monitoring, with the exception of the delays relative to image D of PSJ 1606−2333. The most precise estimate was obtained for the delay between images A and B of DES 0407−5006, where τ_AB = −128.4^{+3.5}_{−3.8} d (2.8% precision), including systematics due to extrinsic variability in the light curves. For HE 0047−1756, we combined our high-cadence data with measurements from decade-long light curves from previous COSMOGRAIL campaigns and reach a precision of 0.9 d on the final measurement. The present work demonstrates the feasibility of measuring time delays in lensed quasars in only one or two seasons, provided that high signal-to-noise ratio data are obtained at a cadence close to daily.
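As an informal illustration of the underlying measurement (hypothetical code, not the COSMOGRAIL curve-shifting pipeline), the sketch below recovers a delay between two noisy light curves by interpolating one curve onto the other's epochs and scanning a simple dispersion statistic over trial shifts; the waveform, sampling, and noise level are invented demo values.

    # Hypothetical sketch: recover a time delay between two light curves by
    # scanning trial shifts and minimizing a mean-squared mismatch. Toy only.
    import numpy as np

    def estimate_delay(t_a, f_a, t_b, f_b, trial_delays):
        """Return the trial delay that best aligns curve B onto curve A."""
        costs = []
        for tau in trial_delays:
            f_b_shifted = np.interp(t_a, t_b - tau, f_b)   # B evaluated on A's epochs
            costs.append(np.mean((f_a - f_b_shifted) ** 2))
        return trial_delays[int(np.argmin(costs))]

    # Toy data: a smooth intrinsic variation observed twice, 20 days apart.
    rng = np.random.default_rng(3)
    intrinsic = lambda t: np.sin(2 * np.pi * t / 180.0) + 0.3 * np.sin(2 * np.pi * t / 47.0)
    t = np.sort(rng.uniform(0.0, 400.0, 200))              # irregular, roughly 2-day sampling
    true_delay = 20.0
    f_a = intrinsic(t) + rng.normal(0.0, 0.02, t.size)
    f_b = intrinsic(t - true_delay) + rng.normal(0.0, 0.02, t.size)

    trials = np.arange(-60.0, 60.5, 0.5)
    print("estimated delay:", estimate_delay(t, f_a, t, f_b, trials), "d")  # ~ +20 d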
-
Scientists use imaging to identify objects of interest and infer properties of these objects. The locations of these objects are often measured with error, which, when ignored, leads to biased parameter estimates and inflated variance. Current measurement error methods require an estimate or knowledge of the measurement error variance to correct these estimates, which may not be available. Instead, we create a spatial Bayesian hierarchical model that treats the locations as parameters, using the image itself to incorporate positional uncertainty. We lower the computational burden by approximating the likelihood using a noncontiguous block design around the object locations. We use this model to quantify the relationship between the intensity and displacement of hundreds of atom columns in crystal structures directly imaged via scanning transmission electron microscopy (STEM). Atomic displacements are related to important phenomena such as piezoelectricity, a property useful for engineering applications like ultrasound. Quantifying the sign and magnitude of this relationship will help materials scientists more precisely design materials with improved piezoelectricity. A simulation study confirms our method corrects bias in the estimate of the parameter of interest and drastically improves coverage in high-noise scenarios compared to non-measurement-error models.
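The block approximation described above can be pictured as evaluating the image likelihood only on pixels near the current location estimates. The sketch below is a hypothetical illustration of that single step, assuming a Gaussian point-spread and noise model with invented widths and block sizes; it is not the authors' hierarchical model or sampler.

    # Hypothetical sketch: Gaussian image log-likelihood evaluated only on
    # noncontiguous blocks around candidate atom-column locations.
    import numpy as np

    def model_image(shape, locations, intensities, psf_sigma=2.0):
        """Sum of isotropic Gaussian peaks at the given (row, col) locations."""
        rows, cols = np.indices(shape)
        img = np.zeros(shape)
        for (r, c), a in zip(locations, intensities):
            img += a * np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * psf_sigma ** 2))
        return img

    def block_log_likelihood(data, locations, intensities, noise_sd=0.1, half_width=5):
        """Gaussian log-likelihood (up to a constant) restricted to blocks around each location."""
        mu = model_image(data.shape, locations, intensities)
        mask = np.zeros(data.shape, dtype=bool)
        for r, c in locations:
            r, c = int(round(r)), int(round(c))
            mask[max(r - half_width, 0):r + half_width + 1,
                 max(c - half_width, 0):c + half_width + 1] = True
        resid = data[mask] - mu[mask]
        return -0.5 * np.sum((resid / noise_sd) ** 2) - mask.sum() * np.log(noise_sd)

    # Toy check: the likelihood is higher at the true locations than at perturbed ones.
    rng = np.random.default_rng(4)
    true_locs = [(20.0, 20.0), (20.0, 50.0), (50.0, 35.0)]
    true_amps = [1.0, 0.8, 1.2]
    data = model_image((70, 70), true_locs, true_amps) + rng.normal(0, 0.1, (70, 70))
    shifted = [(r + 1.5, c - 1.5) for r, c in true_locs]
    print(block_log_likelihood(data, true_locs, true_amps) >
          block_log_likelihood(data, shifted, true_amps))   # expected: True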
-
Blood carries oxygen and nutrients to the trillions of cells in our body to sustain vital life processes. Lack of blood perfusion can cause irreversible cell damage. Therefore, blood perfusion measurement has widespread clinical applications. In this paper, we develop PulseCam, a new camera-based, motion-robust, and highly sensitive blood perfusion imaging modality with 1 mm spatial resolution and 1 frame-per-second temporal resolution. Existing camera-only blood perfusion imaging modalities suffer from two core challenges: (i) motion artifacts, and (ii) small-signal recovery in the presence of large surface reflection and measurement noise. PulseCam addresses these challenges by robustly combining the video recording from the camera with a pulse waveform measured using a conventional pulse oximeter to obtain reliable blood perfusion maps in the presence of motion artifacts and outliers in the video recordings. For video stabilization, we adopt a novel brightness-invariant optical flow algorithm that helps reduce the error in the blood perfusion estimate to below 10% across different motion scenarios, compared with 20–30% error for current approaches. PulseCam can detect subtle changes in blood perfusion below the skin with at least two times better sensitivity and three times faster response time than infrared thermography, and it is significantly cheaper. PulseCam can also detect venous or partial blood flow occlusion that is difficult to identify using existing modalities such as the perfusion index measured using a pulse oximeter. With the help of a pilot clinical study, we also demonstrate that PulseCam is robust and reliable in an operationally challenging surgery room setting. We anticipate that PulseCam will be used both at the bedside and as a point-of-care blood perfusion imaging device to visualize and analyze blood perfusion in an easy-to-use and cost-effective manner.
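As a very rough sketch of the kind of computation involved (hypothetical code; the actual PulseCam algorithm, including its brightness-invariant optical flow and robust fitting, is more involved), the example below builds a perfusion-like map by regressing each pixel's detrended time series against a reference pulse waveform, so pixels whose intensity pulses in sync with the reference receive higher values. The frame rate, patch location, and amplitudes are invented demo values.

    # Hypothetical sketch: a perfusion-like map from a video block and a reference
    # pulse waveform, via per-pixel least-squares against the reference.
    # Not the PulseCam algorithm; motion correction is omitted.
    import numpy as np

    def perfusion_map(video, pulse_ref):
        """video: (T, H, W) intensities; pulse_ref: (T,) oximeter waveform.
        Returns an (H, W) map of per-pixel pulsatile amplitude along pulse_ref."""
        t, h, w = video.shape
        pixels = video.reshape(t, -1)
        pixels = pixels - pixels.mean(axis=0)            # remove per-pixel DC level
        ref = pulse_ref - pulse_ref.mean()
        beta = ref @ pixels / (ref @ ref)                # least-squares slope per pixel
        return beta.reshape(h, w)

    # Toy demo: only the central patch pulses in sync with the reference waveform.
    rng = np.random.default_rng(5)
    t = np.arange(300) / 30.0                            # 10 s of video at 30 fps
    ref = np.sin(2 * np.pi * 1.2 * t)                    # ~72 bpm pulse waveform
    video = rng.normal(0.0, 0.05, size=(300, 32, 32))    # camera noise
    video[:, 10:22, 10:22] += 0.02 * ref[:, None, None]  # weak pulsatile region

    pmap = perfusion_map(video, ref)
    print(pmap[16, 16], pmap[2, 2])   # central pixel ~0.02, corner pixel ~0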