Title: Superresolution Reconstruction of Severely Undersampled Point-spread Functions Using Point-source Stacking and Deconvolution
Point-spread function (PSF) estimation in spatially undersampled images is challenging because large pixels average away fine-scale spatial information. This is problematic when fine-resolution detail is necessary, as in optimal photometry, where knowledge of the illumination pattern beyond the native spatial resolution of the image may be required. Here, we introduce a method of PSF reconstruction in which point sources are artificially sampled beyond the native resolution of an image and combined via stacking to yield a finely sampled estimate of the PSF. This estimate is then deconvolved from the pixel-gridding function to return a superresolution kernel that can be used for optimally weighted photometry. We benchmark against the <1% photometric error requirement of the upcoming SPHEREx mission to assess performance in a concrete example. We find that standard methods like Richardson–Lucy deconvolution are not sufficient to achieve this stringent requirement. We therefore investigate a more advanced method with significant heritage in image analysis, iterative back-projection (IBP), and demonstrate it using idealized Gaussian cases and simulated SPHEREx images. In testing this method on real images recorded by the LORRI instrument on New Horizons, we are able to identify systematic pointing drift. Our IBP-derived PSF kernels allow photometric accuracy significantly better than the requirement in individual SPHEREx exposures. This PSF reconstruction method is broadly applicable to a variety of problems and combines computationally simple techniques in a way that is robust to complicating factors such as severe undersampling, spatially complex PSFs, noise, crowded fields, or limited source numbers.
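The IBP loop the abstract refers to is simple enough to sketch. Below is a minimal, illustrative Python implementation, assuming an idealized 4x4 boxcar pixel-gridding function on a 4x-oversampled grid; the function and variable names are ours, not the paper's, and the choice of back-projection kernel, relaxation, and stopping criterion are simplified stand-ins for the tuned versions a real pipeline would need.

```python
import numpy as np
from scipy.signal import fftconvolve

def ibp_deconvolve(stacked_psf, pixel_kernel, n_iter=50, relax=1.0):
    """Iterative back-projection: deconvolve the pixel-gridding function
    from a finely sampled, stacked PSF estimate."""
    estimate = stacked_psf.copy()
    # Flipped pixel kernel as the back-projection operator; other choices
    # trade convergence speed against noise amplification.
    back_proj = pixel_kernel[::-1, ::-1]
    for _ in range(n_iter):
        # Re-blur the current estimate and back-project the residual.
        reblurred = fftconvolve(estimate, pixel_kernel, mode="same")
        residual = stacked_psf - reblurred
        estimate = estimate + relax * fftconvolve(residual, back_proj, mode="same")
    # Non-negative, unit-sum kernel suitable for weighted photometry.
    estimate = np.clip(estimate, 0.0, None)
    return estimate / estimate.sum()

# Toy usage on a 4x-oversampled 65x65 grid: a Gaussian PSF blurred by a
# 4x4 boxcar (one native pixel), then recovered by IBP.
yy, xx = np.mgrid[-32:33, -32:33]
true_psf = np.exp(-(xx**2 + yy**2) / (2 * 6.0**2))
true_psf /= true_psf.sum()
pixel = np.zeros_like(true_psf)
pixel[30:34, 30:34] = 1.0 / 16.0  # boxcar spanning one native pixel
observed = fftconvolve(true_psf, pixel, mode="same")
recovered = ibp_deconvolve(observed, pixel)
```

In this toy setup, `recovered` approaches `true_psf` as iterations proceed, with `relax` and `n_iter` trading convergence speed against noise amplification.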
Award ID(s): 1659740
PAR ID: 10227753
Author(s) / Creator(s): ; ; ; ; ; ;
Date Published:
Journal Name: The Astrophysical Journal Supplement Series
Volume: 252
Issue: 2
ISSN: 1538-4365
Page Range / eLocation ID: 24
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. By fitting observed data with predicted seismograms, least-squares migration (LSM) computes a generalized inverse for a subsurface reflectivity model, which can improve image resolution and reduce artifacts caused by incomplete acquisition. However, the large computational cost of the simulations and migrations that LSM requires limits its wide application to large-scale imaging problems. Using point-spread function (PSF) deconvolution, we present an efficient and stable high-resolution imaging method. The PSFs are first computed on a coarse grid using local ray-based Gaussian beam Born modeling and migration. We then interpolate the PSFs onto a fine image grid and apply a high-dimensional Gaussian function to attenuate artifacts far from the PSF centers. Using a 2D/3D partition of unity, we decompose the traditional adjoint migration results into local images with the same window size as the PSFs. These local images are then deconvolved by the PSFs in the wavenumber domain to reduce the effects of the band-limited source function and compensate for irregular subsurface illumination. The final assembled image is obtained by applying the inverse of the partitions to the deconvolved local images. Numerical examples for both synthetic and field data demonstrate that the proposed PSF deconvolution can significantly improve image resolution and amplitudes for deep structures, while being less sensitive to velocity errors than data-domain LSM.
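The central deconvolution step described above can be sketched as a damped spectral division of each local image by its PSF. The snippet below is an illustrative form with our own naming, not the authors' implementation, and it omits the PSF interpolation and partition-of-unity assembly steps:

```python
import numpy as np

def deconvolve_local_image(local_image, local_psf, eps=1e-3):
    """Damped wavenumber-domain deconvolution of one local adjoint image.

    local_psf is assumed centered in its window; ifftshift moves its peak
    to the origin so the spectral division introduces no phase ramp.
    """
    P = np.fft.fft2(np.fft.ifftshift(local_psf))
    I = np.fft.fft2(local_image)
    # Damping stabilizes the division near spectral zeros of the
    # band-limited PSF.
    damping = eps * np.max(np.abs(P)) ** 2
    R = np.conj(P) * I / (np.abs(P) ** 2 + damping)
    return np.real(np.fft.ifft2(R))
```

The damping constant here plays the role of the stabilization any such spectral division needs; the paper's Gaussian attenuation of far-field artifacts serves a complementary purpose.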
  2. We present an algorithm to derive PSF-matched difference images from data taken with JWST. It is based on the saccadic fast Fourier transform method, revised to accommodate the rotations and spatial variations of the point-spread functions (PSFs). It allows for spatially varying kernels in B-spline form, with separately controlled photometric scaling and Tikhonov kernel regularization for maximum fitting flexibility. We demonstrate the method on the JWST/NIRCam images of the galaxy cluster Abell 2744 acquired in JWST Cycle 1. The algorithm can be useful for time-domain source detection and differential photometry with JWST. It can also coadd images from multiple exposures taken at different field orientations; the coadded images preserve the sharpness of the central cores of the PSFs, and the positions and shapes of the objects are matched precisely with B-splines across the field.
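As a rough illustration of kernel-based PSF matching, the sketch below fits a single spatially constant kernel in a delta-function basis with Tikhonov (ridge) regularization; the paper's method is far more general (saccadic FFT, B-spline spatial variation, photometric scaling), and all names here are hypothetical:

```python
import numpy as np
from scipy.signal import fftconvolve

def fit_matching_kernel(ref, sci, half=3, lam=1e-2):
    """Fit a kernel k minimizing ||ref (*) k - sci||^2 + lam * ||k||^2
    in a delta-function basis (spatially constant, no photometric term)."""
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            # Each basis kernel shifts the reference by (dy, dx); np.roll
            # wraps at the edges, which this sketch ignores.
            cols.append(np.roll(np.roll(ref, dy, axis=0), dx, axis=1).ravel())
    A = np.stack(cols, axis=1)
    n = (2 * half + 1) ** 2
    # Ridge-regularized normal equations.
    k = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ sci.ravel())
    return k.reshape(2 * half + 1, 2 * half + 1)

# Difference image: science frame minus the PSF-matched reference, e.g.
#   k = fit_matching_kernel(ref, sci)
#   diff = sci - fftconvolve(ref, k, mode="same")
```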
  3. Lensless imaging is an emerging modality in which the image sensor relies on optical elements placed in front of it to perform multiplexed imaging. Several recent papers have proposed methods to reconstruct images from lensless imagers, including approaches that use deep learning for state-of-the-art performance. However, many of these methods require explicit knowledge of the optical element, such as its point spread function (PSF), or learn the reconstruction mapping for a single fixed PSF. In this paper, we explore a neural network architecture that performs joint image reconstruction and PSF estimation to robustly recover images captured with multiple PSFs from different cameras. Using adversarial learning, this approach achieves improved reconstruction results that do not require explicit knowledge of the PSF at test time, and it improves the reconstruction model's ability to generalize to variations in the camera's PSF. This allows lensless cameras to be used in a wider range of applications that require multiple cameras, without training a separate model for each new camera.
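To make "multiplexed imaging" concrete, here is a minimal forward-model sketch: with no lens, each scene point is spread across the sensor by the optic's PSF, so no pixel observes a single scene location. The sketch assumes a known PSF and uses illustrative names, whereas the paper's point is precisely to avoid requiring the PSF at test time:

```python
import numpy as np
from scipy.signal import fftconvolve

def lensless_measurement(scene, psf, noise_sigma=0.01, rng=None):
    """Multiplexed lensless forward model: y = scene (*) psf + noise.

    Every scene point is spread over the sensor by the optical element's
    PSF, so reconstruction must invert a global mixing rather than a
    point-to-point mapping.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = fftconvolve(scene, psf, mode="same")
    return y + noise_sigma * rng.standard_normal(y.shape)
```

A classical baseline would invert this with known-PSF deconvolution; the paper instead trains a network to estimate the PSF and the image jointly.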
  4. The statistical region merging (SRM) method for image segmentation rests on solid probabilistic and statistical principles. It produces good segmentation results and is efficient in terms of computational time. The original SRM algorithm is for Cartesian images sampled on square lattices (sqLs). Because each point of a hexagonal lattice (hexL) has six equidistant adjacent lattice points, in this paper we perform image segmentation for hexagonally sampled images using SRM. We first convert the SRM algorithm from sqLs to hexLs. We then use test images to compare segmentation quality on hexLs versus sqLs. The experimental results show that a hexL yields noticeably better segmentation than the corresponding sqL (with the same spatial sampling rate as the hexL) using the usual 4-connectivity. Finally, we point out that CT image segmentation may benefit from hexLs, since they provide better image reconstruction than sqLs.
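For intuition, the sketch below shows the two ingredients being combined: the six equidistant neighbors of a hexagonal-lattice site (in an "odd rows shifted right" offset-coordinate convention of our choosing) and a simplified form of the SRM merging predicate; the fluctuation bound in the original SRM formulation is more elaborate, so treat this as schematic:

```python
import numpy as np

def hex_neighbors(row, col):
    """Six equidistant neighbors on a hex lattice stored in offset
    coordinates where odd rows are shifted half a pixel to the right."""
    even = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
    odd = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
    steps = even if row % 2 == 0 else odd
    return [(row + dr, col + dc) for dr, dc in steps]

def srm_should_merge(mean_a, size_a, mean_b, size_b, Q=32, g=256.0, delta=1e-3):
    """Simplified SRM predicate: merge two regions when the squared
    difference of their means is within the summed fluctuation bounds."""
    def b2(n):
        # Concentration-style bound shrinking with region size n.
        return (g * g) * np.log(2.0 / delta) / (2.0 * Q * n)
    return (mean_a - mean_b) ** 2 <= b2(size_a) + b2(size_b)
```

The predicate is lattice-agnostic; only the neighbor set changes between sqLs and hexLs, which is what makes the conversion straightforward.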
  5. Combining a hyperspectral (HS) image with a multispectral (MS) image, an example of image fusion, can produce a spatially and spectrally high-resolution image. Despite the plethora of fusion algorithms in remote sensing, a necessary prerequisite, registration, is mostly ignored; this limits their application to well-registered images from the same source. In this article, we propose and validate an integrated registration and fusion approach (code available at https://github.com/zhouyuanzxcv/Hyperspectral). The registration algorithm minimizes a least-squares (LSQ) objective function that incorporates the point spread function (PSF) together with a nonrigid freeform transformation applied to the HS image and a rigid transformation applied to the MS image; it can handle images with significant scale differences and spatial distortion. The fusion algorithm takes the full high-resolution HS image as an unknown in the objective function. Assuming that the pixels lie on a low-dimensional manifold invariant to local linear transformations from spectral degradation, the fusion optimization problem admits a closed-form solution. The method was validated on the Pavia University, Salton Sea, and Mississippi Gulfport datasets. Compared with its rigid variant and two mutual-information-based methods, the proposed registration algorithm has the best accuracy on both the nonrigid simulated dataset and the real dataset, with an average error below 0.15 pixels for nonrigid distortions of up to 1 HS pixel. Compared with current state-of-the-art fusion algorithms, it performs best on images with registration errors as well as on simulations that do not consider registration effects.
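A minimal sketch of the two least-squares data terms in such an objective is given below, assuming a Gaussian PSF, integer decimation, and a known spectral response matrix; the actual method additionally estimates the nonrigid and rigid transformations and exploits the manifold assumption for a closed-form fusion solution. Names and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fusion_residuals(X, hs_obs, ms_obs, psf_sigma, ratio, srf):
    """Least-squares data terms linking the unknown high-resolution HS
    cube X (bands, H, W) to the observed HS and MS images.

    hs_obs: (bands, H//ratio, W//ratio) observed low-res hyperspectral cube
    ms_obs: (ms_bands, H, W)            observed multispectral image
    srf:    (ms_bands, bands)           spectral response matrix
    """
    # Spatial degradation: PSF blur then decimation should reproduce hs_obs.
    blurred = gaussian_filter(X, sigma=(0.0, psf_sigma, psf_sigma))
    hs_res = blurred[:, ::ratio, ::ratio] - hs_obs
    # Spectral degradation: SRF band mixing should reproduce ms_obs.
    ms_res = np.tensordot(srf, X, axes=([1], [0])) - ms_obs
    return hs_res, ms_res
```

Minimizing the squared norms of both residuals over X is the fusion problem; adding the transformations over the HS and MS inputs turns it into the joint registration-fusion objective the abstract describes.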