Abstract By fitting observed data with predicted seismograms, least‐squares migration (LSM) computes a generalized inverse for a subsurface reflectivity model, which can improve image resolution and reduce artifacts caused by incomplete acquisition. However, the large computational cost of the simulations and migrations that LSM requires limits its wide application to large‐scale imaging problems. Using point‐spread function (PSF) deconvolution, we present an efficient and stable high‐resolution imaging method. The PSFs are first computed on a coarse grid using local ray‐based Gaussian beam Born modeling and migration. Then, we interpolate the PSFs onto a fine image grid and apply a high‐dimensional Gaussian function to attenuate artifacts far from the PSF centers. With a 2D/3D partition of unity, we decompose the traditional adjoint migration result into local images with the same window size as the PSFs. These local images are then deconvolved by the PSFs in the wavenumber domain to reduce the effects of the band‐limited source function and compensate for irregular subsurface illumination. The final assembled image is obtained by applying the inverse of the partitions to the deconvolved local images. Numerical examples for both synthetic and field data demonstrate that the proposed PSF deconvolution can significantly improve image resolution and amplitudes for deep structures, while being less sensitive to velocity errors than data‐domain LSM.
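The wavenumber-domain deconvolution of a windowed image by its local PSF can be sketched as below. This is an illustrative minimal version, not the authors' code: the function name, the Wiener-style damping `eps`, and the use of plain 2D FFTs on a single window are all assumptions; the paper's method additionally interpolates PSFs from a coarse grid and blends windows with a partition of unity.

```python
import numpy as np

def psf_deconvolve_local(image_patch, psf_patch, eps=1e-3):
    """Deconvolve one windowed migration image by its local PSF in the
    wavenumber domain (hypothetical helper, Wiener-style regularization).

    `psf_patch` is assumed centered in the window; `eps` damps division
    where the PSF spectrum is weak, stabilizing the inversion.
    """
    I = np.fft.fft2(image_patch)
    # Shift the centered PSF so its peak sits at the origin before the FFT.
    P = np.fft.fft2(np.fft.ifftshift(psf_patch))
    denom = np.abs(P) ** 2 + eps * np.max(np.abs(P)) ** 2
    R = I * np.conj(P) / denom
    return np.real(np.fft.ifft2(R))
```

Blurring a single reflectivity spike with a Gaussian PSF and deconvolving it recovers a band-limited spike at the original location, which is the per-window behavior the abstract describes before the windows are reassembled.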
Superresolution Reconstruction of Severely Undersampled Point-spread Functions Using Point-source Stacking and Deconvolution
Point-spread function (PSF) estimation in spatially undersampled images is challenging because large pixels average fine-scale spatial information. This is problematic when fine-resolution details are necessary, as in optimal photometry, where knowledge of the illumination pattern beyond the native spatial resolution of the image may be required. Here, we introduce a method of PSF reconstruction in which point sources are artificially sampled beyond the native resolution of an image and combined via stacking to return a finely sampled estimate of the PSF. This estimate is then deconvolved from the pixel-gridding function to return a superresolution kernel that can be used for optimally weighted photometry. We benchmark against the <1% photometric error requirement of the upcoming SPHEREx mission to assess performance in a concrete example. We find that standard methods like Richardson–Lucy deconvolution are not sufficient to achieve this stringent requirement. We investigate a more advanced method with significant heritage in image analysis called iterative back-projection (IBP) and demonstrate it using idealized Gaussian cases and simulated SPHEREx images. In testing this method on real images recorded by the LORRI instrument on New Horizons, we are able to identify systematic pointing drift. Our IBP-derived PSF kernels allow photometric accuracy significantly better than the requirement in individual SPHEREx exposures. This PSF reconstruction method is broadly applicable to a variety of problems and combines computationally simple techniques in a way that is robust to complicating factors such as severe undersampling, spatially complex PSFs, noise, crowded fields, or limited source numbers.
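The stacking step can be sketched as follows. This is a minimal illustration of depositing point-source cutouts onto a finer grid according to their sub-pixel centroids; the function name, nearest-fine-pixel deposition, and parameter choices are assumptions, and the subsequent deconvolution of the pixel-gridding function (the step the abstract describes after stacking) is not included here.

```python
import numpy as np

def stack_point_sources(image, positions, radius=4, upsample=3):
    """Estimate a finely sampled PSF by stacking point-source cutouts on a
    grid `upsample` times finer than the native pixels (hypothetical helper).

    `positions` are (y, x) centroids in native-pixel coordinates; each
    source's sub-pixel offset decides where its samples land on the fine
    grid, so many sources together fill in the super-resolved kernel.
    """
    size = (2 * radius + 1) * upsample
    center = size // 2
    accum = np.zeros((size, size))
    counts = np.zeros((size, size))
    for yc, xc in positions:
        iy, ix = int(round(yc)), int(round(xc))
        dy, dx = yc - iy, xc - ix               # sub-pixel offsets
        cut = image[iy - radius: iy + radius + 1, ix - radius: ix + radius + 1]
        if cut.shape != (2 * radius + 1, 2 * radius + 1):
            continue                             # source too close to the edge
        for j in range(cut.shape[0]):
            for i in range(cut.shape[1]):
                # Native pixel (j, i), re-centered on the true centroid,
                # mapped to its nearest fine-grid pixel.
                fy = center + int(round(((j - radius) - dy) * upsample))
                fx = center + int(round(((i - radius) - dx) * upsample))
                if 0 <= fy < size and 0 <= fx < size:
                    accum[fy, fx] += cut[j, i]
                    counts[fy, fx] += 1
    psf = np.where(counts > 0, accum / np.maximum(counts, 1), 0.0)
    total = psf.sum()
    return psf / total if total > 0 else psf
```

With randomly dithered sources, the sub-pixel offsets populate different fine-grid pixels, which is what lets the stack exceed the native sampling.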
- Award ID(s): 1659740
- PAR ID: 10227753
- Date Published:
- Journal Name: The Astrophysical Journal
- Volume: 252
- Issue: 2
- ISSN: 1538-4365
- Page Range / eLocation ID: 24
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract We present an algorithm to derive difference images for data taken with JWST with matched point-spread functions (PSFs). It is based on the saccadic fast Fourier transform method but with revisions to accommodate the rotations and spatial variations of the PSFs. It allows for spatially varying kernels in B-spline form with separately controlled photometric scaling and Tikhonov kernel regularization for harnessing the ultimate fitting flexibility. We present this method using the JWST/NIRCam images of galaxy cluster Abell 2744 acquired in JWST Cycle 1 as the test data. The algorithm can be useful for time-domain source detection and differential photometry with JWST. It can also coadd images of multiple exposures taken at different field orientations. The coadded images preserve the sharpness of the central cores of the PSFs, and the positions and shapes of the objects are matched precisely with B-splines across the field.
- Lensless imaging is a new, emerging modality in which image sensors use optical elements in front of the sensor to perform multiplexed imaging. Several recent papers reconstruct images from lensless imagers, including methods that use deep learning for state-of-the-art performance. However, many of these methods require explicit knowledge of the optical element, such as the point spread function (PSF), or learn the reconstruction mapping for a single fixed PSF. In this paper, we explore a neural network architecture that performs joint image reconstruction and PSF estimation to robustly recover images captured with multiple PSFs from different cameras. Using adversarial learning, this approach achieves improved reconstruction results that do not require explicit knowledge of the PSF at test time, and it shows an added improvement in the reconstruction model's ability to generalize to variations in the camera's PSF. This allows lensless cameras to be used in a wider range of applications that require multiple cameras without the need to explicitly train a separate model for each new camera.
- The statistical region merging (SRM) method for image segmentation is based on solid probabilistic and statistical principles. It produces good segmentation results and is efficient in terms of computational time. The original SRM algorithm is for Cartesian images sampled on square lattices (sqL). Because each lattice point in a hexagonal lattice (hexL) has six equidistant adjacent lattice points, in this paper we perform image segmentation for hexagonally sampled images using SRM. We first convert the SRM algorithm from sqLs to hexLs. Then we use test images to compare the segmentation results for hexLs versus sqLs. The experimental results show that a hexL yields clearly better segmentation than the corresponding sqL (with the same spatial sampling rate as the hexL) using the usual 4-connectivity. Finally, we point out that CT image segmentation may benefit from hexLs, since they provide better image reconstruction than sqLs.
- Abstract Image subtraction is essential for transient detection in time-domain astronomy. The point-spread function (PSF), photometric scaling, and sky background generally vary with time and across the field of view for imaging data taken with ground-based optical telescopes. Image subtraction algorithms need to match these variations for the detection of flux variability. An algorithm that can be fully parallelized is highly desirable for future time-domain surveys. Here we introduce the saccadic fast Fourier transform (SFFT) algorithm we developed for image differencing. SFFT uses a δ-function basis for kernel decomposition, and the image subtraction is performed in Fourier space. This brings about a remarkable improvement in computational performance of about an order of magnitude compared to other published image subtraction codes. SFFT can accommodate the spatial variations in wide-field imaging data, including PSF, photometric scaling, and sky background. However, the flexibility of the δ-function basis may also make it more prone to overfitting. The algorithm has been tested extensively on real astronomical data taken by a variety of telescopes. Moreover, the SFFT code allows the spatial variations of the PSF and sky background to be fitted by spline functions.
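The core idea of Fourier-space image differencing can be sketched with a single spatially uniform matching kernel. This toy version is an assumption-laden simplification, not the SFFT implementation: the real algorithm fits spatially varying δ-basis kernels plus a differential background, while here the kernel is solved per frequency with a small damping term.

```python
import numpy as np

def fourier_difference(ref, sci, eps=1e-6):
    """Toy single-kernel Fourier-space image differencing (illustrative
    helper, not the SFFT API).

    Solves per frequency for the kernel K minimizing |ref * K - sci|^2
    (with damping eps), then returns sci minus the matched reference.
    """
    R = np.fft.fft2(ref)
    S = np.fft.fft2(sci)
    # Least-squares kernel spectrum: K = S conj(R) / (|R|^2 + eps).
    K = S * np.conj(R) / (np.abs(R) ** 2 + eps)
    matched = np.real(np.fft.ifft2(R * K))
    return sci - matched
```

Because the per-frequency solve is almost unconstrained, it matches the reference to the science image nearly perfectly, even absorbing genuine changes; this is a concrete illustration of the overfitting risk the abstract attributes to a fully flexible δ-function basis, which real implementations counter by restricting kernel support and regularizing.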