
This content will become publicly available on February 18, 2023

Title: High-throughput single pixel spectral imaging system for glow discharge optical emission spectrometry elemental mapping enabled by compressed sensing
Glow discharge optical emission spectroscopy elemental mapping (GDOES EM), enabled by spectral imaging strategies, is an advantageous technique for direct multi-elemental analysis of solid samples in rapid timeframes. Here, a single-pixel (point-scan) spectral imaging system based on compressed sensing image sampling is developed and optimized in terms of matrix density, compression factor, sparsifying basis, and reconstruction algorithm for coupling with GDOES EM. It is shown that a matrix density of 512 at a compression factor of 30% provides the highest spatial fidelity in terms of the peak signal-to-noise ratio (PSNR) and complex wavelet structural similarity index measure (cw-SSIM) while maintaining fast measurement times. The background equivalent concentration (BEC) of Cu I at 510.5 nm is improved when implementing the discrete wavelet transform (DWT) sparsifying basis and the Two-step Iterative Shrinking/Thresholding Algorithm for Linear Inverse Problems (TwIST) reconstruction algorithm. Under these optimum conditions, GDOES EM of a flexible, etched-copper circuit board was then successfully demonstrated with the compressed sensing single-pixel spectral imaging system (CSSPIS). The newly developed CSSPIS takes advantage of the significant cost-efficiency of point-scanning approaches (>10× vs. intensified array detector systems) while overcoming, by up to several orders of magnitude, their inherent and substantial throughput limitations. Ultimately, it has the potential to be implemented on readily available commercial GDOES instruments by adapting the collection optics.
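The sampling-and-reconstruction pipeline the abstract describes (random measurement patterns, a sparsifying transform, and an iterative shrinkage/thresholding solver) can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: it uses a 1-D sparse "scene", an identity sparsifying basis in place of the DWT, and plain ISTA in place of TwIST, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene" of n pixels with a few bright emission spots
n, k = 256, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

# 30% compression factor: m = 0.3*n random +/-1 patterns,
# one single-pixel detector reading per pattern
m = int(0.3 * n)
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x

# ISTA: iterative soft-thresholding (a simpler cousin of TwIST)
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.05
xhat = np.zeros(n)
for _ in range(1000):
    z = xhat - step * (A.T @ (A @ xhat - y))
    xhat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# PSNR of the reconstruction against the ground truth
mse = np.mean((x - xhat) ** 2)
psnr = 10 * np.log10(x.max() ** 2 / mse)
```

With far fewer measurements than pixels (76 vs. 256 here), the sparse scene is still recovered with high fidelity, which is the throughput advantage the CSSPIS exploits.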
Award ID(s): 2108359
Publication Date:
NSF-PAR ID: 10318845
Journal Name: Journal of Analytical Atomic Spectrometry
ISSN: 0267-9477
Sponsoring Org: National Science Foundation
More Like this
  1. Goda, Keisuke ; Tsia, Kevin K. (Ed.)
    We present a new deep compressed imaging modality that scans a learned illumination pattern over the sample and detects the signal with a single-pixel detector. This imaging modality allows compressed sampling of the object and thus a high imaging speed. The object is reconstructed by a deep neural network inspired by compressed sensing algorithms. We optimize the illumination pattern and the image reconstruction network by training an end-to-end auto-encoder framework. Compared with the conventional single-pixel camera and point-scanning imaging systems, we accomplish high-speed imaging with a reduced light dosage while preserving high imaging quality.
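The end-to-end encoder/decoder idea in item 1 can be illustrated with a purely linear surrogate: if both the illumination patterns (encoder) and the reconstruction (decoder) are linear maps, the jointly optimal pair is given by PCA of the training data. The paper trains a deep nonlinear decoder instead; this sketch, with invented sizes, only shows why joint optimization of patterns and reconstruction beats fixed random patterns.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: 200 objects of 64 pixels lying in an 8-dimensional subspace
n, d, num = 64, 8, 200
subspace = rng.normal(size=(n, d))
X = subspace @ rng.normal(size=(d, num))   # columns are objects

# For a purely linear encoder/decoder pair, the end-to-end optimum of
# min ||D (P X) - X||_F^2 is reached at PCA: the top-m left singular vectors
m = 8                                      # number of illumination patterns
U, S, Vt = np.linalg.svd(X, full_matrices=False)
P = U[:, :m].T    # "learned" illumination patterns (encoder)
D = U[:, :m]      # matched linear decoder

Y = P @ X                                  # m single-pixel readings per object
X_hat = D @ Y
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Because the data here lie exactly in an 8-dimensional subspace, 8 learned patterns reconstruct the objects essentially perfectly, whereas 8 random patterns generally would not.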
  2. Compressed sensing (CS), as a new data acquisition technique, has been applied to monitor manufacturing processes. With a few measurements, sparse coefficient vectors can be recovered by solving an inverse problem, and the original signals can be reconstructed. Dictionary learning methods have been developed and applied in combination with CS to improve the sparsity level of the recovered coefficient vectors. In this work, a physics-constrained dictionary learning approach is proposed to solve both reconstruction and classification problems by optimizing the measurement, basis, and classification matrices simultaneously, with application-specific restrictions taken into account. It is applied to image acquisition in selective laser melting (SLM). The proposed approach optimizes in two stages. In the first stage, with the basis matrix fixed, the measurement matrix is optimized by determining the pixel locations for sampling in each image; the optimized measurement matrix includes only one non-zero entry in each row. The optimization of pixel locations is solved with a constrained FrameSense algorithm. In the second stage, with the measurement matrix fixed, the basis and classification matrices are optimized based on the K-SVD algorithm. With the optimized basis matrix, the coefficient vector can be recovered with CS. The original signal can be reconstructed by the linear combination of the basis matrix and the recovered coefficient vector, and can also be classified to identify different machine states by the linear combination of the classification matrix and the coefficient vector.
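The measurement structure in item 2 (one non-zero entry per row, i.e. sampling individual pixel locations) and the CS recovery step can be sketched as follows. This is a simplified stand-in: pixel locations are chosen at random rather than by FrameSense, a fixed random orthonormal basis replaces the K-SVD dictionary, and generic orthogonal matching pursuit replaces the paper's solver; the classification stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 32, 3
# Orthonormal sparsifying basis (stand-in for a learned dictionary)
B, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Ground truth: k-sparse coefficient vector s, image x = B @ s
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = [3.0, -2.0, 1.5]
x = B @ s

# Measurement matrix with exactly one non-zero per row:
# each row samples a single pixel location
pix = rng.choice(n, m, replace=False)
Phi = np.zeros((m, n))
Phi[np.arange(m), pix] = 1.0
y = Phi @ x                     # the sampled pixel values
A = Phi @ B                     # effective sensing matrix

def omp(A, y, iters):
    """Orthogonal matching pursuit: greedy sparse recovery."""
    support, r = [], y.copy()
    for _ in range(iters):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = coef
    return s_hat

s_hat = omp(A, y, 2 * k)
x_hat = B @ s_hat
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Sampling half the pixels suffices here because the image is 3-sparse in the basis; a learned dictionary pushes the achievable sparsity (and thus the compression) further.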
  3. Abstract Implantable image sensors have the potential to revolutionize neuroscience; however, due to their small form-factor requirements, conventional filters and optics cannot be implemented. These limitations obstruct high-resolution imaging of large neural densities. Recent advances in angle-sensitive image sensors and single-photon avalanche diodes have provided a path toward ultrathin lens-less fluorescence imaging, enabling plenoptic sensing by extending sensing capabilities to include photon arrival time and incident angle, thereby providing the opportunity to separate fluorescence point sources within the context of light-field microscopy (LFM). However, the addition of spectral sensitivity to angle-sensitive LFM reduces imager resolution because each wavelength requires a separate pixel subset. Here, we present a 1024-pixel, 50 µm thick implantable shank-based neural imager with color-filter-grating-based angle-sensitive pixels. This angular-spectral sensitive front end combines a metal–insulator–metal (MIM) Fabry–Perot color filter and diffractive optics to measure orthogonal light-field information from two distinct colors within a single photodetector. The result is the ability to add independent color sensing to LFM while doubling the effective pixel density. The implantable imager combines angular-spectral and temporal information to demix and localize multispectral fluorescent targets. In this initial prototype, this is demonstrated with 45 μm diameter fluorescently labeled beads in scattering medium. Fluorescence lifetime imaging is exploited to further aid source separation, in addition to detecting pH through lifetime changes in fluorescent dyes. While these initial fluorescent targets are considerably brighter than fluorescently labeled neurons, further improvements will allow the application of these techniques to in-vivo multifluorescent structural and functional neural imaging.
  4. Ocean colour is recognised as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS), and spectrally-resolved water-leaving radiances (or remote-sensing reflectances) in the visible domain and chlorophyll-a concentration are identified as required ECV products. Time series of the products at the global scale and at high spatial resolution, derived from ocean-colour data, are key to studying the dynamics of phytoplankton at seasonal and inter-annual scales; their role in marine biogeochemistry; the global carbon cycle; the modulation of how phytoplankton distribute solar-induced heat in the upper layers of the ocean; and the response of the marine ecosystem to climate variability and change. However, generating a long time series of these products from ocean-colour data is not a trivial task: algorithms that are best suited for climate studies have to be selected from a number that are available for atmospheric correction of the satellite signal and for retrieval of chlorophyll-a concentration; since satellites have a finite life span, data from multiple sensors have to be merged to create a single time series, and any uncorrected inter-sensor biases could introduce artefacts in the series, e.g., different sensors monitor radiances at different wavebands, so producing a consistent time series of reflectances is not straightforward. Another requirement is that the products have to be validated against in situ observations. Furthermore, the uncertainties in the products have to be quantified, ideally on a pixel-by-pixel basis, to facilitate applications and interpretations that are consistent with the quality of the data.
This paper outlines an approach that was adopted for generating an ocean-colour time series for climate studies, using data from the MERIS (MEdium spectral Resolution Imaging Spectrometer) sensor of the European Space Agency; the SeaWiFS (Sea-viewing Wide-Field-of-view Sensor) and MODIS-Aqua (Moderate-resolution Imaging Spectroradiometer-Aqua) sensors from the National Aeronautics and Space Administration (USA); and VIIRS (Visible and Infrared Imaging Radiometer Suite) from the National Oceanic and Atmospheric Administration (USA). The time series now covers the period from late 1997 to the end of 2018. To ensure that the products meet, as well as possible, the requirements of the user community, marine-ecosystem modellers and remote-sensing scientists were consulted at the outset on their immediate and longer-term requirements as well as on their expectations of ocean-colour data for use in climate research. Taking the user requirements into account, a series of objective criteria were established, against which available algorithms for processing ocean-colour data were evaluated and ranked. The algorithms that performed best with respect to the climate user requirements were selected to process data from the satellite sensors. Remote-sensing reflectance data from MODIS-Aqua, MERIS, and VIIRS were band-shifted to match the wavebands of SeaWiFS. Overlapping data were used to correct for mean biases between sensors at every pixel. The remote-sensing reflectance data derived from the sensors were merged, and the selected in-water algorithm was applied to the merged data to generate maps of chlorophyll concentration, inherent optical properties at SeaWiFS wavelengths, and the diffuse attenuation coefficient at 490 nm. The merged products were validated against in situ observations.
The uncertainties established on the basis of comparisons with in situ data were combined with an optical classification of the remote-sensing reflectance data using a fuzzy-logic approach, and were used to generate uncertainties (root mean square difference and bias) for each product at each pixel.
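The per-pixel mean-bias correction between overlapping sensors described in item 4 is simple to sketch. The series below is synthetic (a single pixel, a constant additive bias, invented noise levels); the real processing also involves band-shifting and operates on reflectances from the actual sensors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy overlap period: reflectance time series at one pixel from a reference
# sensor and a second sensor carrying a constant additive bias
true_rrs = 0.004 + 0.001 * np.sin(np.linspace(0, 4 * np.pi, 120))
ref = true_rrs + rng.normal(0, 5e-5, 120)            # reference sensor
other = true_rrs + 3e-4 + rng.normal(0, 5e-5, 120)   # biased second sensor

# Per-pixel mean-bias correction estimated over the overlap period
bias = np.mean(other - ref)
corrected = other - bias
```

After correction the two series agree in the mean, so merging them does not introduce a step artefact at the sensor transition.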
  5. Abstract The goal of this study is to develop a new computed tomography (CT) image reconstruction method that improves the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise-constant approximations of the integral equation that describes the CT projection data acquisition process. These approximations impose a bottleneck model error and result in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise-linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term, and a constraint term; the corresponding weight matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem, and provide convergence analysis for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization, the traditional total variation (TV) regularized reconstruction method, and the TV superiorized conjugate gradient method on the pixel grid.
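The SART baseline that item 5 builds on (and derives its weight matrix from) is a short iteration. This sketch shows classical unregularized SART on a toy dense system standing in for a discretized projection model; it is not the paper's CAUG method, and the matrix here is random rather than computed from ray/pixel intersections.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discretized projection model y = A x: A stands in for the matrix of
# ray/pixel intersection lengths, x for the unknown image
npix, nrays = 36, 60
A = rng.uniform(0.0, 1.0, size=(nrays, npix))
x_true = rng.uniform(0.0, 1.0, size=npix)
y = A @ x_true

# Classical SART sweep: residual weighted by row sums (per-ray), update
# weighted by column sums (per-pixel), relaxation parameter in (0, 2)
row_sums = A.sum(axis=1)
col_sums = A.sum(axis=0)
relax = 1.0
x = np.zeros(npix)
for _ in range(2000):
    r = (y - A @ x) / row_sums
    x = x + relax * (A.T @ r) / col_sums

rel_res = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The row/column-sum weighting is exactly the diagonal matrix pair that the paper repurposes as the weighted least-squares fidelity term and preconditioner.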