

Title: Multi-target detection with rotations
We consider the multi-target detection problem of estimating a two-dimensional target image from a large noisy measurement image that contains many randomly rotated and translated copies of the target image. Motivated by single-particle cryo-electron microscopy, we focus on the low signal-to-noise regime, where it is difficult to estimate the locations and orientations of the target images in the measurement. Our approach uses autocorrelation analysis to estimate rotationally and translationally invariant features of the target image. We demonstrate that, regardless of the level of noise, our technique can be used to recover the target image when the measurement is sufficiently large.
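The paper's full algorithm is more involved (it accounts for rotations and arbitrary target placements), but the translation-invariant statistic the abstract describes can be sketched as the circular autocorrelation of the measurement, computed via the Wiener-Khinchin theorem. The image size and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def autocorrelation(measurement):
    """Second-order (circular) autocorrelation of a 2-D measurement,
    computed as the inverse FFT of the power spectrum (Wiener-Khinchin).
    Because the power spectrum is unchanged by circular shifts, this
    statistic is invariant to translations of the targets."""
    f = np.fft.fft2(measurement)
    return np.real(np.fft.ifft2(np.abs(f) ** 2)) / measurement.size

rng = np.random.default_rng(1)
clean = rng.random((64, 64))
noisy = clean + rng.normal(0.0, 2.0, clean.shape)  # low-SNR measurement

ac = autocorrelation(noisy)
# Shifting the measurement leaves the autocorrelation unchanged:
ac_shifted = autocorrelation(np.roll(noisy, 7, axis=1))
```

Averaging such autocorrelations over a sufficiently large measurement suppresses the noise contribution, which is why the recovery guarantee in the abstract holds regardless of the noise level.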
Award ID(s):
1837992 2009753 1903015
PAR ID:
10392903
Author(s) / Creator(s):
Date Published:
Journal Name:
Inverse Problems and Imaging
Volume:
0
Issue:
0
ISSN:
1930-8337
Page Range / eLocation ID:
0
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The generalized contrast-to-noise ratio (gCNR) is a relatively new image quality metric designed to assess the probability of lesion detectability in ultrasound images. Although gCNR was initially demonstrated with ultrasound images, the metric is theoretically applicable to multiple types of medical images. In this paper, the applicability of gCNR to photoacoustic images is investigated. The gCNR was computed for both simulated and experimental photoacoustic images generated by amplitude-based (i.e., delay-and-sum) and coherence-based (i.e., short-lag spatial coherence) beamformers. These gCNR measurements were compared to three more traditional image quality metrics (i.e., contrast, contrast-to-noise ratio, and signal-to-noise ratio) applied to the same datasets. An increase in qualitative target visibility generally corresponded with increased gCNR. In addition, gCNR magnitude was more directly related to the separability of photoacoustic signals from their background, which degraded with the presence of limited bandwidth artifacts and increased levels of channel noise. At high gCNR values (i.e., 0.95-1), contrast, contrast-to-noise ratio, and signal-to-noise ratio varied by up to 23.7-56.2 dB, 2.0-3.4, and 26.5-7.6×10^20, respectively, for simulated, experimental phantom, and in vivo data. Therefore, these traditional metrics can experience large variations when a target is fully detectable, and additional increases in these values would have no impact on photoacoustic target detectability. In addition, gCNR is robust to changes in traditional metrics introduced by applying a minimum threshold to image amplitudes.
In tandem with other photoacoustic image quality metrics and with a defined range of 0 to 1, gCNR has promising potential to provide additional insight, particularly when designing new beamformers and image formation techniques and when reporting quantitative performance without an opportunity to qualitatively assess corresponding images (e.g., in text-only abstracts). 
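The gCNR has a simple histogram-based definition: one minus the overlap of the signal and background amplitude distributions. A minimal sketch of that computation follows; the Gaussian sample data and bin count are illustrative assumptions, not the paper's beamforming pipeline:

```python
import numpy as np

def gcnr(signal, background, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    signal and background amplitude histograms, so gCNR lies in [0, 1]."""
    lo = min(signal.min(), background.min())
    hi = max(signal.max(), background.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_s, _ = np.histogram(signal, bins=edges)
    p_b, _ = np.histogram(background, bins=edges)
    p_s = p_s / p_s.sum()  # normalize counts to probability mass
    p_b = p_b / p_b.sum()
    overlap = np.minimum(p_s, p_b).sum()  # shared area of the two PMFs
    return 1.0 - overlap

rng = np.random.default_rng(0)
target = rng.normal(10.0, 1.0, 10000)   # well-separated "lesion" amplitudes
noise = rng.normal(0.0, 1.0, 10000)     # background amplitudes
score = gcnr(target, noise)  # close to 1: fully detectable target
```

This separability interpretation is why, as noted above, gCNR saturates near 1 once a target is fully detectable while traditional metrics keep varying.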
  2. Zero-noise extrapolation (ZNE) is a widely used quantum error mitigation technique that artificially amplifies circuit noise and then extrapolates the results to the noise-free circuit. A common ZNE approach is Richardson extrapolation, which relies on polynomial interpolation. Despite its simplicity, efficient implementations of Richardson extrapolation face several challenges, including approximation errors from the non-polynomial behavior of noise channels, overfitting due to polynomial interpolation, and exponentially amplified measurement noise. This paper provides a comprehensive analysis of these challenges, presenting bias and variance bounds that quantify approximation errors. Additionally, for any precision ε, our results offer an estimate of the necessary sample complexity. We further extend the analysis to polynomial least squares-based extrapolation, which mitigates measurement noise and avoids overfitting. Finally, we propose a strategy for simultaneously mitigating circuit and algorithmic errors in the Trotter-Suzuki algorithm by jointly scaling the time step size and the noise level. This strategy provides a practical tool to enhance the reliability of near-term quantum computations. We support our theoretical findings with numerical experiments.
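Richardson extrapolation as described here can be sketched in a few lines: fit a polynomial through expectation values measured at amplified noise scales and evaluate it at zero. The exponential toy noise model and the chosen scales below are illustrative assumptions, not from the paper:

```python
import numpy as np

def richardson_zne(scales, values, degree=None):
    """Polynomial zero-noise extrapolation: fit E(lambda) at the measured
    noise scales and evaluate the fit at lambda = 0. With
    degree = len(scales) - 1 this is classic Richardson (interpolating)
    extrapolation; a lower degree gives the least-squares variant that
    mitigates measurement noise and overfitting."""
    scales = np.asarray(scales, dtype=float)
    values = np.asarray(values, dtype=float)
    if degree is None:
        degree = len(scales) - 1
    coeffs = np.polyfit(scales, values, degree)
    return np.polyval(coeffs, 0.0)

# Toy noise model: E(lambda) = E0 * exp(-0.3 * lambda), E0 = 0.8,
# sampled at noise amplification factors 1, 2, 3.
scales = [1.0, 2.0, 3.0]
values = [0.8 * np.exp(-0.3 * s) for s in scales]
estimate = richardson_zne(scales, values)  # close to the noise-free 0.8
```

The residual gap between the estimate and the true noise-free value illustrates the approximation error the abstract attributes to the non-polynomial behavior of realistic noise channels.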
  3.
    Scientists use imaging to identify objects of interest and infer properties of these objects. The locations of these objects are often measured with error, which when ignored leads to biased parameter estimates and inflated variance. Current measurement error methods require an estimate or knowledge of the measurement error variance to correct these estimates, which may not be available. Instead, we create a spatial Bayesian hierarchical model that treats the locations as parameters, using the image itself to incorporate positional uncertainty. We lower the computational burden by approximating the likelihood using a noncontiguous block design around the object locations. We use this model to quantify the relationship between the intensity and displacement of hundreds of atom columns in crystal structures directly imaged via scanning transmission electron microscopy (STEM). Atomic displacements are related to important phenomena such as piezoelectricity, a property useful for engineering applications like ultrasound. Quantifying the sign and magnitude of this relationship will help materials scientists more precisely design materials with improved piezoelectricity. A simulation study confirms our method corrects bias in the estimate of the parameter of interest and drastically improves coverage in high noise scenarios compared to non-measurement error models. 
  4. This paper proposes a distributed estimation and control algorithm to allow a team of robots to search for and track an unknown number of targets. The number of targets in the area of interest varies over time as targets enter or leave, and there are many sources of sensing uncertainty, including false positive detections, false negative detections, and measurement noise. The robots use a novel distributed Multiple Hypothesis Tracker (MHT) to estimate both the number of targets and the states of each target. A key contribution is a new data association method that reallocates target tracks across the team. The distributed MHT is compared against another distributed multi-target tracker to test its utility for multi-robot, multi-target tracking. 
  5. We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera that is mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye images and an underlying parametric body model, and extracts textures from egocentric inputs. In addition, to encode dynamic appearances, our approach also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For correct pose generation, we first estimate body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. We next combine the target pose image and the textures into a combined feature image, which is transformed into the output color image using a neural image translation network. Experimental evaluations show that EgoRenderer is capable of generating realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.