Title: Edge-Informed Estimation of Gaussian Point Spread Functions in Convolutional Blurring Models
The underlying physics of imaging processes and associated instrumentation limitations mean that blurring artifacts are unavoidable in many applications such as astronomy, microscopy, radar, and medical imaging. In several such imaging modalities, convolutional models are used to describe the blurring process; the observed image or function is a convolution of the true underlying image and a point spread function (PSF) which characterizes the blurring artifact. In this work, we propose and analyze a technique - based on convolutional edge detectors and Gaussian curve fitting - to approximate unknown Gaussian PSFs when the underlying true function is piecewise smooth. For certain simple families of such functions, we show that this approximation is exponentially accurate. We also provide preliminary two-dimensional extensions of this technique. These findings - confirmed by numerical simulations - demonstrate the feasibility of recovering accurate approximations to the blurring function, which serves as an important prerequisite to solving deblurring problems.
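To make the convolutional blurring model concrete, the following is a minimal one-dimensional sketch of the general idea (not the paper's exact algorithm): a unit step is blurred by a Gaussian PSF, a finite-difference edge detector is applied to the blurred data, and a Gaussian is fit to the edge response to estimate the PSF width. The synthetic signal, grid, noise-free data, and use of scipy.optimize.curve_fit are illustrative assumptions.

```python
# Minimal 1-D sketch: estimate the width of a Gaussian PSF by differentiating
# the blurred image of a jump discontinuity and fitting a Gaussian.
import numpy as np
from scipy.optimize import curve_fit

# Illustrative piecewise-constant "true" function with a single jump at x = 0.
n = 1024
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
f_true = np.where(x < 0.0, 0.0, 1.0)

# Convolutional blurring model: observed = f_true * Gaussian PSF.
sigma_true = 0.03
psf = np.exp(-x**2 / (2.0 * sigma_true**2))
psf /= psf.sum()
g_blurred = np.convolve(f_true, psf, mode="same")

# Simple convolutional edge detector: finite-difference derivative of the
# blurred data. For a unit jump blurred by a Gaussian, this is approximately
# the Gaussian PSF itself, centered at the jump location.
edge_response = np.gradient(g_blurred, dx)

def gaussian(t, amplitude, center, sigma):
    return amplitude * np.exp(-(t - center)**2 / (2.0 * sigma**2))

# Fit over an interior window to avoid boundary artifacts of the convolution.
mask = np.abs(x) < 0.5
p0 = [edge_response[mask].max(), 0.0, 0.05]
popt, _ = curve_fit(gaussian, x[mask], edge_response[mask], p0=p0)
amp, center, sigma_est = popt

print(f"true sigma = {sigma_true:.4f}, estimated sigma = {abs(sigma_est):.4f}")
```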
Award ID(s):
2012238
PAR ID:
10538472
Author(s) / Creator(s):
; ; ; ; ;
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-0864-8
Page Range / eLocation ID:
1 to 5
Subject(s) / Keyword(s):
convolutional blurring, point spread function estimation, Gaussian blur, edge detection, deconvolution
Format(s):
Medium: X
Location:
Boulder, CO, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the ubiquitous super-resolution problem, in which one aims to localize positive point sources in an image blurred by the point spread function of the imaging device. To recover the point sources, we propose to solve a convex feasibility program, which simply finds a non-negative Borel measure that agrees with the observations collected by the imaging device. In the absence of imaging noise, we show that solving this convex program uniquely retrieves the point sources, provided that the imaging device collects enough observations. This result holds if the point spread function of the imaging device can be decomposed into horizontal and vertical components and if the translations of these components form a Chebyshev system, i.e., a system of continuous functions that loosely behave like algebraic polynomials. Building upon recent results for one-dimensional signals, we prove that this super-resolution algorithm is stable, in the generalized Wasserstein metric, to model mismatch (i.e., when the image is not sparse) and to additive imaging noise. In particular, the recovery error depends on the noise level and how well the image can be approximated with well-separated point sources. As an example, we verify these claims for the important case of a Gaussian point spread function. The proofs rely on the construction of novel interpolating polynomials - which are the main technical contribution of this paper - and partially resolve the question raised in Schiebinger et al. (2017, Inf. Inference, 7, 1–30) about the extension of the standard machinery to higher dimensions.
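As a rough illustration only (not the convex program analyzed in that paper, which optimizes over non-negative Borel measures), the sketch below discretizes the problem: candidate source locations are placed on a fine grid and non-negative weights are fit to noiseless observations of a Gaussian point spread function via non-negative least squares. The source locations, grid sizes, and use of scipy.optimize.nnls are assumptions made for this example.

```python
# Crude discretized analogue of Gaussian super-resolution (illustrative only):
# recover non-negative point-source weights on a fine grid from samples of
# their Gaussian-blurred superposition.
import numpy as np
from scipy.optimize import nnls

# Assumed ground truth: a few positive point sources on [0, 1].
source_locations = np.array([0.22, 0.50, 0.78])
source_weights = np.array([1.0, 0.6, 0.9])

# Gaussian point spread function of the (hypothetical) imaging device.
sigma = 0.05
def psf_matrix(sample_points, source_points):
    # Rows = sample points, columns = candidate source locations.
    return np.exp(-(sample_points[:, None] - source_points[None, :])**2
                  / (2.0 * sigma**2))

# Observations collected at a modest number of sample points (noise-free).
samples = np.linspace(0.0, 1.0, 60)
observations = psf_matrix(samples, source_locations) @ source_weights

# Candidate grid much finer than the sample spacing; solve for non-negative
# weights that reproduce the observations (a least-squares feasibility proxy).
grid = np.linspace(0.0, 1.0, 400)
weights, residual = nnls(psf_matrix(samples, grid), observations)

recovered = grid[weights > 1e-3]
print("recovered support (approx.):", np.round(recovered, 3))
print("residual norm:", residual)
```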
  2. An efficient feature selection method can significantly boost results in classification problems. Despite ongoing improvement, hand-designed methods often fail to extract features capturing high- and mid-level representations at effective levels. Recent developments in machine learning (deep learning) have improved upon these hand-designed methods by extracting features automatically. Specifically, Convolutional Neural Networks (CNNs) are a highly successful technique for image classification that can automatically extract features and learn to classify them. The purpose of this study is to detect hydraulic structures (i.e., bridges and culverts) that are important to overland flow modeling and environmental applications. The dataset used in this work is a relatively small dataset derived from 1-m LiDAR-derived Digital Elevation Models (DEMs) and National Agriculture Imagery Program (NAIP) aerial imagery. The classes for our experiment consist of two groups: samples with a bridge/culvert present are labeled "True", and those without are labeled "False". In this paper, we use advanced CNN techniques, including Siamese Neural Networks (SNNs), Capsule Networks (CapsNets), and Graph Convolutional Networks (GCNs), to classify samples with similar topographic and spectral characteristics, an objective that is challenging for traditional machine learning techniques such as Support Vector Machines (SVMs), Gaussian Classifiers (GCs), and Gaussian Mixture Models (GMMs). The advanced CNN-based approaches, combined with data pre-processing techniques (e.g., data augmentation), produced superior results. These approaches provide efficient, cost-effective, and innovative solutions to the identification of hydraulic structures.
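As a hedged illustration of the classification setup only (not the Siamese, Capsule, or Graph network architectures that paper evaluates), the PyTorch sketch below defines a small binary CNN for patches that stack, say, one DEM channel with three NAIP bands; the input size, channel counts, and layer widths are assumptions.

```python
# Minimal binary CNN classifier sketch (illustrative only). Assumes 4-channel
# 64x64 input patches, e.g. one DEM band stacked with three NAIP bands.
import torch
import torch.nn as nn

class BridgeCulvertCNN(nn.Module):
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, 2)        # classes: "False" / "True"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = BridgeCulvertCNN()
dummy_batch = torch.randn(8, 4, 64, 64)           # 8 random patches
print(model(dummy_batch).shape)                    # torch.Size([8, 2])
```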
  3. Convolutional neural networks (CNNs) are a foundational model architecture used for a wide variety of visual tasks. On image classification tasks CNNs achieve high performance; however, model accuracy degrades quickly when inputs are perturbed by distortions such as additive noise or blurring. This drop in performance partly arises from incorrect detection of local features by convolutional layers. In this work, we develop a neuroscience-inspired unsupervised Sleep Replay Consolidation (SRC) algorithm for improving the robustness of convolutional filters to perturbations. We demonstrate that sleep-based optimization improves the quality of convolutional layers by the selective modification of spatial gradients across filters. We further show that, compared to other approaches such as fine-tuning, a single sleep phase improves robustness across different types of distortions in a data-efficient manner.
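The SRC algorithm itself is not reproduced here; the sketch below only illustrates the evaluation setting described above, comparing a trained classifier's accuracy on clean versus distorted test inputs. The model, data loader, and noise level are hypothetical placeholders.

```python
# Sketch of the robustness-evaluation setting only: measure how a trained
# classifier's accuracy changes when test inputs are perturbed, e.g. by
# additive Gaussian noise.
import torch

def add_gaussian_noise(images: torch.Tensor, std: float) -> torch.Tensor:
    # Assumes float images scaled to [0, 1].
    return (images + std * torch.randn_like(images)).clamp(0.0, 1.0)

@torch.no_grad()
def accuracy(model, loader, distortion=None):
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        if distortion is not None:
            images = distortion(images)
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()
    return correct / total

# Hypothetical usage, given some trained `model` and a `test_loader`:
# clean_acc = accuracy(model, test_loader)
# noisy_acc = accuracy(model, test_loader, lambda x: add_gaussian_noise(x, 0.1))
```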
  4. We present a method for measuring small, discrete features near the resolution limit of X-ray computed tomography (CT) data volumes with the aim of providing consistent answers across instruments and data resolutions. The appearances of small features are impacted by the partial volume effect and blurring due to the data point-spread function, and we call our approach the partial-volume and blurring (PVB) method. Features are segmented to encompass their total attenuation signal, which is then converted to a volume, in turn allowing a subset of voxels to be used to measure shape and orientation. We demonstrate the method on a set of gold grains, scanned with two instruments at a range of resolutions and with various surrounding media. We recover volume accurately over a factor-of-27 range in grain volume and a factor-of-5 range in data resolution, successfully characterizing particles as small as 5.4 voxels in true volume. Shape metrics are affected variably by resolution effects and are more reliable when based on image-based caliper measurements than on perimeter length or surface area. Orientations are reproducible when maximum or minimum axis lengths are sufficiently different from the intermediate axis. Calibration requires end-member CT numbers for the materials of interest, which we obtained empirically; we describe a first-principles calculation and discuss its challenges. The PVB method is accurate, reproducible, resolution invariant, and objective, all important improvements over any method based on global thresholds.
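A hedged sketch of the core partial-volume idea described above (not the published PVB implementation): if CT numbers mix linearly between the end-member values of the surrounding medium and the grain material, the summed attenuation of a generously segmented region converts directly to a feature volume. The function name, example CT numbers, and linear-mixing assumption are illustrative.

```python
# Illustrative partial-volume conversion: total attenuation of a segmented
# feature, normalized by end-member CT numbers, gives its volume.
import numpy as np

def pvb_volume(ct_values: np.ndarray,
               ct_matrix: float,
               ct_grain: float,
               voxel_volume: float = 1.0) -> float:
    """Estimate feature volume from a segmented region of CT numbers.

    ct_values    : CT numbers of all voxels in the segmented region, which must
                   be generous enough to capture the feature's full (blurred)
                   attenuation signal.
    ct_matrix    : end-member CT number of the surrounding medium.
    ct_grain     : end-member CT number of the feature material.
    voxel_volume : physical volume of one voxel (e.g., mm^3).
    """
    # Fraction of each voxel occupied by the feature, assuming CT numbers mix
    # linearly between the two end-members; summing (without clipping) keeps
    # noise fluctuations from biasing the total.
    fractions = (ct_values - ct_matrix) / (ct_grain - ct_matrix)
    return float(fractions.sum() * voxel_volume)

# Example with made-up CT numbers: matrix ~ 1000, gold grain ~ 30000.
region = np.array([1000.0, 4000.0, 18000.0, 30000.0, 12000.0, 1500.0])
print(pvb_volume(region, ct_matrix=1000.0, ct_grain=30000.0))
```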
  5. Requiring less data for accurate models, few-shot learning has shown robustness and generality in many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns, e.g., attacks or adversaries that may breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud, by establishing a novel privacy-preserved embedding space that preserves the privacy of data and maintains the accuracy of the model. We examine the impact of various image privacy methods such as blurring, pixelization, Gaussian noise, and differentially private pixelization (DP-Pix) on few-shot image classification and propose a method that learns privacy-preserved representations through a joint loss. The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
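For reference, a minimal sketch of the simple image privacy transforms mentioned above (pixelization, additive Gaussian noise, and blurring); DP-Pix and that paper's joint-loss training are not reproduced. Tensor shapes, cell sizes, and noise levels are assumptions.

```python
# Simple image privacy transforms (illustrative). Assumes float images in
# [0, 1] shaped (batch, channels, height, width).
import torch
import torch.nn.functional as F

def pixelize(images: torch.Tensor, cell: int = 8) -> torch.Tensor:
    # Average over cell x cell blocks, then upsample back to the original size.
    h, w = images.shape[-2:]
    coarse = F.avg_pool2d(images, kernel_size=cell)
    return F.interpolate(coarse, size=(h, w), mode="nearest")

def gaussian_noise(images: torch.Tensor, std: float = 0.1) -> torch.Tensor:
    return (images + std * torch.randn_like(images)).clamp(0.0, 1.0)

def blur(images: torch.Tensor, cell: int = 5) -> torch.Tensor:
    # Simple box blur as a stand-in for Gaussian blurring.
    pad = cell // 2
    padded = F.pad(images, (pad, pad, pad, pad), mode="reflect")
    return F.avg_pool2d(padded, kernel_size=cell, stride=1)

images = torch.rand(4, 3, 64, 64)
print(pixelize(images).shape, gaussian_noise(images).shape, blur(images).shape)
```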