

This content will become publicly available on December 1, 2024

Title: First-Arrival Differential Counting for SPAD Array Design

We present a novel architecture for single-photon avalanche diode (SPAD) arrays that captures relative, rather than absolute, intensity or timing information from a scene. The proposed method for capturing relative information between pixels or groups of pixels requires very little circuitry, and thus allows a significantly higher pixel packing factor than is possible with per-pixel time-to-digital converter (TDC) approaches. The inherently compressive nature of the differential measurements also reduces data throughput and lends itself to physical implementations of compressed sensing, such as Haar wavelets. We demonstrate this technique for HDR imaging and LiDAR, and describe possible future applications.
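As a toy illustration of the differential idea, a first-arrival comparison between two pixel groups can be simulated by racing two first-photon arrival times and accumulating only which side fired first. The exponential intensity model, function names, and frame count below are illustrative assumptions, not the paper's circuit.

```python
import random

def first_arrival(rate, rng):
    # First-photon arrival time for a pixel group, modeled (as an
    # assumption, not the paper's exact model) as exponential with
    # rate proportional to incident intensity.
    return rng.expovariate(rate)

def differential_count(rate_a, rate_b, n_frames=20_000, seed=1):
    # Over many frames, record only which group fires first: a single
    # relative (compressive) measurement instead of per-pixel timestamps.
    rng = random.Random(seed)
    a_first = sum(
        first_arrival(rate_a, rng) < first_arrival(rate_b, rng)
        for _ in range(n_frames)
    )
    return a_first / n_frames  # estimates rate_a / (rate_a + rate_b)
```

For exponential arrivals the win fraction estimates rate_a / (rate_a + rate_b), so a single counter recovers the relative intensity of the two groups without any per-pixel timestamp storage.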

 
Award ID(s): 1730574
NSF-PAR ID: 10493063
Publisher / Repository: Sensors
Journal Name: Sensors
Volume: 23
Issue: 23
ISSN: 1424-8220
Page Range / eLocation ID: 9445
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Fiber optic bundles are used in narrow-diameter medical and industrial instruments for acquiring images from confined locations. Images transmitted through these bundles contain only one pixel of information per fiber core and fail to capture information from the cladding region between cores. Both factors limit the spatial resolution attainable with fiber bundles. We show here that computational imaging (CI) can be combined with spectral coding to overcome these two fundamental limitations and improve spatial resolution in fiber bundle imaging. By acquiring multiple images of a scene with a high-resolution mask pattern imposed, up to 17 pixels of information can be recovered from each fiber core. A dispersive element at the distal end of the bundle imparts a wavelength-dependent lateral shift on light from the object. This enables light that would otherwise be lost at the inter-fiber cladding to be transmitted through adjacent fiber cores. We experimentally demonstrate this approach using synthetic and real objects. Using CI with spectral coding, object features 5× smaller than individual fiber cores were resolved, whereas conventional imaging could only resolve features at least 1.5× larger than each core. In summary, CI combined with spectral coding provides an approach for overcoming the two fundamental limitations of fiber optic bundle imaging.
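The mask-based recovery step in the abstract above can be sketched as a small linear inverse problem: each fiber core integrates several sub-core object pixels, and acquisitions under different mask patterns provide enough equations to solve for them. The 4-pixel core, the specific mask matrix, and the direct solve are illustrative assumptions, not the paper's calibration or reconstruction pipeline.

```python
import numpy as np

# Toy model: one fiber core integrates 4 sub-core object pixels.
x = np.array([0.2, 0.9, 0.4, 0.7])           # unknown sub-core intensities

# Four binary mask patterns (one per row), chosen here to be invertible;
# the real system uses calibrated high-resolution mask patterns.
masks = np.array([[1, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 1]], dtype=float)

y = masks @ x                                # one core reading per pattern
x_hat = np.linalg.solve(masks, y)            # recover the sub-core pixels
```

With noise, more mask patterns than unknowns and a least-squares solve (`np.linalg.lstsq`) would be the natural generalization of this sketch.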

     
  2. In recent years, deep neural networks have achieved state-of-the-art performance in a variety of recognition and segmentation tasks in medical imaging, including brain tumor segmentation. Brain tumor segmentation faces an imbalanced-data problem: the number of pixels belonging to the background class (non-tumor pixels) is much larger than the number belonging to the foreground class (tumor pixels). To address this problem, we propose a multitask network formed as a cascaded structure. Our model has two targets: (i) effectively differentiate the brain tumor regions and (ii) estimate the brain tumor mask. The first objective is performed by our proposed contextual brain tumor detection network, which plays the role of an attention gate and focuses only on the region around the brain tumor while ignoring the distant background, which is less correlated with the tumor. Unlike existing object detection networks that process every pixel, our contextual brain tumor detection network processes only contextual regions around ground-truth instances; this strategy aims at producing meaningful region proposals. The second objective is built upon a 3D atrous residual network within an encoder-decoder architecture in order to effectively segment both large and small objects (brain tumors). Our 3D atrous residual network is designed with skip connections that enable gradients from deep layers to be directly propagated to shallow layers, so features of different depths are preserved and used to refine each other. To incorporate larger contextual information from volumetric MRI data, our network utilizes 3D atrous convolutions with various kernel sizes, which enlarge the receptive field of the filters. Our proposed network has been evaluated on the BRATS2015, BRATS2017, and BRATS2018 datasets with both validation and testing sets.
Our performance has been benchmarked with both region-based and surface-based metrics, and we have conducted comparisons against state-of-the-art approaches.
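The receptive-field enlargement from atrous (dilated) convolution can be illustrated in one dimension: spacing the kernel taps `dilation` samples apart widens the receptive field without adding weights. This is a generic sketch of the operation itself, not the paper's 3D network.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    # 'Valid'-mode 1-D atrous convolution: the k taps of w are spaced
    # `dilation` samples apart, so the receptive field spans
    # (k - 1) * dilation + 1 input samples.
    k = len(w)
    span = (k - 1) * dilation + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
```

With a 3-tap kernel, dilation 1 covers 3 samples while dilation 2 covers 5, which is how stacking atrous layers with varied dilations lets one network see both small and large structures with the same weight count.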
  3. Abstract

    We present DELIGHT, or Deep Learning Identification of Galaxy Hosts of Transients, a new algorithm designed to automatically and in real time identify the host galaxies of extragalactic transients. The proposed algorithm receives as input compact, multiresolution images centered at the position of a transient candidate and outputs two-dimensional offset vectors that connect the transient with the center of its predicted host. The multiresolution input consists of a set of images with the same number of pixels, but with progressively larger pixel sizes and fields of view. A sample of 16,791 galaxies visually identified by the Automatic Learning for the Rapid Classification of Events (ALeRCE) broker team was used to train a convolutional neural network regression model. We show that this method is able to correctly identify both relatively large (10″ < r < 60″) and small (r ≤ 10″) apparent-size host galaxies using much less information (32 kB) than a large, single-resolution image (920 kB). The proposed method has fewer catastrophic errors in recovering the position, and is more complete and has less contamination (<0.86%) in recovering the crossmatched redshift, than other state-of-the-art methods. The more efficient representation provided by multiresolution input images could allow for the identification of transient host galaxies in real time, if adopted in alert streams from a new generation of large-étendue telescopes such as the Vera C. Rubin Observatory.
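The multiresolution input described above can be sketched as a stack of cutouts sharing a fixed pixel count while the pixel scale (and so the field of view) doubles at each level. The cutout size, level count, and block-averaging below are illustrative assumptions, not DELIGHT's exact preprocessing.

```python
import numpy as np

def multiresolution_stack(image, center, npix=30, levels=5):
    # Every level has the same pixel count (npix x npix), but each level
    # doubles the pixel size, and therefore the field of view.
    cy, cx = center
    stack = []
    for k in range(levels):
        scale = 2 ** k
        half = npix * scale // 2
        cut = image[cy - half:cy + half, cx - half:cx + half]
        # Block-average the wider cutout down to npix x npix.
        coarse = cut.reshape(npix, scale, npix, scale).mean(axis=(1, 3))
        stack.append(coarse)
    return np.stack(stack)                   # shape: (levels, npix, npix)
```

The stack stays small (levels × npix² values) regardless of how wide the outermost field of view grows, which is the storage advantage the abstract quantifies (32 kB vs. 920 kB).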

     
  4. Abstract Implantable image sensors have the potential to revolutionize neuroscience. Due to their small form-factor requirements, however, conventional filters and optics cannot be implemented. These limitations obstruct high-resolution imaging of large neural densities. Recent advances in angle-sensitive image sensors and single-photon avalanche diodes have provided a path toward ultrathin lens-less fluorescence imaging, enabling plenoptic sensing by extending sensing capabilities to include photon arrival time and incident angle, thereby providing the opportunity for separability of fluorescence point sources within the context of light-field microscopy (LFM). However, the addition of spectral sensitivity to angle-sensitive LFM reduces imager resolution because each wavelength requires a separate pixel subset. Here, we present a 1024-pixel, 50 µm thick implantable shank-based neural imager with color-filter-grating-based angle-sensitive pixels. This angular-spectral sensitive front end combines a metal–insulator–metal (MIM) Fabry–Perot color filter and diffractive optics to produce measurements of orthogonal light-field information from two distinct colors within a single photodetector. The result is the ability to add independent color sensing to LFM while doubling the effective pixel density. The implantable imager combines angular-spectral and temporal information to demix and localize multispectral fluorescent targets. In this initial prototype, this is demonstrated with 45 µm diameter fluorescently labeled beads in scattering medium. Fluorescence lifetime imaging is exploited to further aid source separation, in addition to detecting pH through lifetime changes in fluorescent dyes. While these initial fluorescent targets are considerably brighter than fluorescently labeled neurons, further improvements will allow the application of these techniques to in vivo multifluorescent structural and functional neural imaging.
  5. Abstract. In this study, we developed a novel algorithm based on collocated Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared (TIR) observations and dust vertical profiles from the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) to simultaneously retrieve dust aerosol optical depth at 10 µm (DAOD10µm) and the coarse-mode dust effective diameter (Deff) over global oceans. The accuracy of the Deff retrieval is assessed by comparing the dust lognormal volume particle size distribution (PSD) corresponding to the retrieved Deff with in situ-measured dust PSDs from the AERosol Properties – Dust (AER-D), Saharan Mineral Dust Experiment (SAMUM-2), and Saharan Aerosol Long-Range Transport and Aerosol–Cloud-Interaction Experiment (SALTRACE) field campaigns through case studies. The new DAOD10µm retrievals were first evaluated through comparisons with the collocated DAOD10.6µm retrieved from the combined Imaging Infrared Radiometer (IIR) and CALIOP observations from our previous study (Zheng et al., 2022). The pixel-to-pixel comparison of the two DAOD retrievals indicates good agreement (R ∼ 0.7) and a significant (∼50 %) reduction in retrieval uncertainties, largely thanks to the better constraint on dust size. In a climatological comparison, the seasonal and regional (2° × 5°) mean DAOD10µm retrievals based on our combined MODIS and CALIOP method are in good agreement with the two independent Infrared Atmospheric Sounding Interferometer (IASI) products over three dust transport regions, i.e., the North Atlantic (NA; R = 0.9), Indian Ocean (IO; R = 0.8), and North Pacific (NP; R = 0.7). Using the new retrievals from 2013 to 2017, we performed a climatological analysis of coarse-mode dust Deff over global oceans.
We found that dust Deff over the IO and NP is up to 20 % smaller than that over the NA. Over the NA in summer, we found a ∼50 % reduction in the number of retrievals with Deff > 5 µm from 15 to 35° W, and a stable mean Deff of 4.4 µm from 35° W throughout the Caribbean Sea (90° W). Over the NP in spring, only ∼5 % of retrieved pixels with Deff > 5 µm are found from 150 to 180° E, while the mean Deff remains stable at 4.0 µm throughout the eastern NP. To the best of our knowledge, this study is the first to retrieve both DAOD and coarse-mode dust particle size over global oceans for multiple years. This retrieval dataset provides insightful information for evaluating dust longwave radiative effects and coarse-mode dust particle size in models.
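For context on the PSD comparison above: once Deff and a geometric standard deviation σg are fixed, a lognormal volume distribution follows from the standard lognormal moment relations (Deff = D0·exp(2.5 s²) for number-median diameter D0 and s = ln σg, giving a volume-median diameter Dv = Deff·exp(0.5 s²)). The function below is a generic sketch under an assumed σg; the paper's fixed PSD parameters may differ.

```python
import numpy as np

def lognormal_volume_psd(deff, sigma_g, d):
    # Normalized dV/dlnD for a lognormal PSD with effective diameter
    # `deff` and geometric standard deviation `sigma_g` (an assumed
    # parameter here, not the paper's fixed value).
    # Moment relations: Deff = D0 * exp(2.5 s^2) and volume-median
    # Dv = D0 * exp(3 s^2), so Dv = deff * exp(0.5 s^2), s = ln(sigma_g).
    s = np.log(sigma_g)
    dv = deff * np.exp(0.5 * s ** 2)
    z = (np.log(d) - np.log(dv)) / s
    return np.exp(-0.5 * z ** 2) / (s * np.sqrt(2.0 * np.pi))
```

Evaluating this on a diameter grid gives the curve that would be compared against the in situ PSDs from the AER-D, SAMUM-2, and SALTRACE campaigns.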

     