Title: Computational imaging without a computer: seeing through random diffusers at the speed of light
Abstract: Imaging through diffusers presents a challenging problem, with various digital image reconstruction solutions demonstrated to date using computers. Here, we present a computer-free, all-optical image reconstruction method to see through random diffusers at the speed of light. Using deep learning, a set of transmissive diffractive surfaces is trained to all-optically reconstruct images of arbitrary objects that are completely covered by unknown, random phase diffusers. After the training stage, which is a one-time effort, the resulting diffractive surfaces are fabricated and form a passive optical network that is physically positioned between the unknown object and the image plane to all-optically reconstruct the object pattern through an unknown, new phase diffuser. We experimentally demonstrated this concept using coherent THz illumination and all-optically reconstructed objects distorted by unknown, random diffusers never used during training. Unlike digital methods, all-optical diffractive reconstructions do not require power except for the illumination light. This diffractive solution for seeing through diffusers can be extended to other wavelengths and might fuel various applications in biomedical imaging, astronomy, atmospheric sciences, oceanography, security, robotics, and autonomous vehicles, among many others.
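
A minimal conceptual sketch of this training idea follows; it is not the authors' model or code, and the grid size, wavelength, layer count, spacings, and training objects are illustrative assumptions. A few trainable phase-only diffractive layers are cascaded with angular-spectrum free-space propagation, and at every iteration a freshly drawn random phase diffuser distorts the object before it reaches the layers, which is what pushes the learned surfaces to generalize to diffusers never seen during training.

import torch
import torch.nn as nn

N, dx, wavelength = 128, 0.4e-3, 0.75e-3      # grid size, pixel pitch (m), ~0.4 THz wavelength (assumed)
z = 40 * wavelength                           # spacing between consecutive planes (assumed)

fx = torch.fft.fftfreq(N, d=dx)
FX, FY = torch.meshgrid(fx, fx, indexing="ij")
kz = 2 * torch.pi * torch.sqrt(torch.clamp(1 / wavelength**2 - FX**2 - FY**2, min=0.0))

def propagate(field, dist):
    # Angular-spectrum free-space propagation of a complex field over distance dist.
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * dist))

class DiffractiveReconstructor(nn.Module):
    # A cascade of trainable, passive phase-only diffractive layers.
    def __init__(self, num_layers=4):
        super().__init__()
        self.phases = nn.ParameterList([nn.Parameter(torch.zeros(N, N)) for _ in range(num_layers)])

    def forward(self, field):
        for phi in self.phases:
            field = propagate(field, z) * torch.exp(1j * phi)
        return propagate(field, z).abs() ** 2   # intensity at the output (image) plane

model = DiffractiveReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    obj = (torch.rand(N, N) > 0.7).float()                       # stand-in for a training object
    diffuser = torch.exp(1j * 2 * torch.pi * torch.rand(N, N))   # a new random phase diffuser every step
    distorted = propagate(obj * diffuser, z)                     # object field after the unknown diffuser
    recon = model(distorted)
    loss = torch.mean((recon / recon.max() - obj) ** 2)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

After training, the optimized phase maps would be fabricated as passive surfaces, so the reconstruction itself consumes no power beyond the illumination light.
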
Award ID(s): 2054102
PAR ID: 10362122
Author(s) / Creator(s):
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: eLight
Volume: 2
Issue: 1
ISSN: 2662-8643
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Goda, Keisuke; Tsia, Kevin K. (Ed.)
    We present a new deep compressed imaging modality that scans a learned illumination pattern across the sample and detects the signal with a single-pixel detector. This modality allows compressed sampling of the object and thus a high imaging speed. The object is reconstructed through a deep neural network inspired by compressed sensing algorithms. We optimize the illumination patterns and the image reconstruction network by training an end-to-end auto-encoder framework. Compared with conventional single-pixel cameras and point-scanning imaging systems, we achieve high-speed imaging with a reduced light dosage while preserving high imaging quality. (A schematic code sketch of this learned-illumination idea appears after this list.)
  2. Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators, but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, and static/dynamic scenes captured through static/dynamic aberrations. (A schematic sketch of the joint scene/aberration estimation idea appears after this list.)
  3. Photonics provides a promising approach for image processing by spatial filtering, with the advantage of faster speeds and lower power consumption compared to electronic digital solutions. However, traditional optical spatial filters suffer from bulky form factors that limit their portability. Here we present a new approach based on pixel arrays of plasmonic directional image sensors, designed to selectively detect light incident along a small, geometrically tunable set of directions. The resulting imaging systems can function as optical spatial filters without any external filtering elements, leading to extreme size miniaturization. Furthermore, they offer the distinct capability to perform multiple filtering operations at the same time, through the use of sensor arrays partitioned into blocks of adjacent pixels with different angular responses. To establish the image processing capabilities of these devices, we present a rigorous theoretical model of their filter transfer function under both coherent and incoherent illumination. Next, we use the measured angle-resolved responsivity of prototype devices to demonstrate two examples of relevant functionalities: (1) the visualization of otherwise invisible phase objects and (2) spatial differentiation with incoherent light. These results are significant for a multitude of imaging applications ranging from microscopy in biomedicine to object recognition for computer vision. (A schematic sketch of the angular-filtering principle appears after this list.)
  4. Optical diffraction tomography (ODT) is an indispensable tool for studying objects in three dimensions. Until now, ODT has been limited to coherent light because spatial phase information is required to solve the inverse scattering problem. We introduce a method that enables ODT to be applied to imaging incoherent contrast mechanisms such as fluorescent emission. Our strategy mimics the coherent scattering process with two spatially coherent illumination beams. The interferometric illumination pattern encodes spatial phase in temporal variations of the fluorescent emission, thereby allowing incoherent fluorescent emission to mimic the behavior of coherent illumination. The temporal variations permit recovery of the spatial distribution of fluorescent emission with an inverse scattering model. Simulations and experiments demonstrate isotropic resolution in the 3D reconstruction of a fluorescent object. (A schematic sketch of the interferometric encoding principle appears after this list.)
  5. The visualization of pure phase objects by wavefront sensing has important applications ranging from surface profiling to biomedical microscopy, and generally requires bulky and complicated setups involving optical spatial filtering, interferometry, or structured illumination. Here we introduce a new type of image sensor that is uniquely sensitive to the local direction of light propagation, based on standard photodetectors coated with a specially designed plasmonic metasurface that creates an asymmetric dependence of responsivity on angle of incidence around the surface normal. The metasurface design, fabrication, and angle-sensitive operation are demonstrated using a simple photoconductive detector platform. The measurement results, combined with computational imaging calculations, are then used to show that a standard camera or microscope based on these metasurface pixels can directly visualize phase objects without any additional optical elements, with state-of-the-art minimum detectable phase contrasts below 10 mrad. Furthermore, the combination of sensors with equal and opposite angular responses on the same pixel array can be used to perform quantitative phase imaging in a single shot, with a customized reconstruction algorithm that is also developed in this work. By virtue of its system miniaturization and measurement simplicity, the phase imaging approach enabled by these devices is particularly significant for applications involving space-constrained and portable setups (such as point-of-care imaging and endoscopy) and measurements involving freely moving objects. (A schematic sketch of the differential phase-gradient read-out appears after this list.)
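
For the learned-illumination single-pixel approach in item 1 above, the following is a minimal sketch of an end-to-end auto-encoder in which the "encoder" is a set of learnable illumination patterns (each pattern produces one single-pixel measurement as an inner product with the object) and the "decoder" is a small neural network that reconstructs the image. The image size, number of patterns, network width, and random training images are assumptions for illustration; the actual network in that work may differ substantially.

import torch
import torch.nn as nn

H = W = 32            # image size (assumed)
M = 64                # number of learned illumination patterns, i.e., 64/1024 compression (assumed)

class SinglePixelAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.patterns = nn.Parameter(0.01 * torch.randn(M, H * W))   # learnable illumination patterns
        self.decoder = nn.Sequential(                                # learnable reconstruction network
            nn.Linear(M, 512), nn.ReLU(),
            nn.Linear(512, H * W), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, H*W) objects with values in [0, 1]
        y = x @ torch.sigmoid(self.patterns).T   # single-pixel measurements (non-negative light patterns)
        return self.decoder(y)

model = SinglePixelAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    x = torch.rand(16, H * W)                    # stand-in for a real training image set
    loss = nn.functional.mse_loss(model(x), x)   # train patterns and decoder end to end
    optimizer.zero_grad(); loss.backward(); optimizer.step()
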
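For NeuWS in item 2 above, the core computational idea as summarized there is to apply known phase modulations, record the resulting intensity images, and jointly recover the scene and the unknown aberration by gradient descent on a maximum-likelihood (least-squares) objective. The sketch below simplifies heavily: a plain pixel grid stands in for the neural scene representation, the aberration is static, and the incoherent forward model, sizes, and counts are all assumptions.

import torch

N, K = 64, 24                                       # image size and number of modulation patterns (assumed)
torch.manual_seed(0)
true_scene = torch.rand(N, N)                       # stand-in ground-truth scene
true_aberration = 3.0 * torch.randn(N, N)           # unknown static pupil-phase aberration (radians)
mods = 2 * torch.pi * torch.rand(K, N, N)           # known phase modulation patterns

def forward_model(scene, aberration, mod):
    # Incoherent image formed through a pupil carrying the aberration plus a known modulation.
    pupil = torch.exp(1j * (aberration + mod))
    psf = torch.fft.ifft2(pupil).abs() ** 2
    psf = psf / psf.sum()
    return torch.fft.ifft2(torch.fft.fft2(scene) * torch.fft.fft2(psf)).real

with torch.no_grad():                               # simulate the recorded measurement stack
    measurements = torch.stack([forward_model(true_scene, true_aberration, m) for m in mods])

scene = torch.zeros(N, N, requires_grad=True)       # unknowns to be estimated jointly
aberration = torch.zeros(N, N, requires_grad=True)
optimizer = torch.optim.Adam([scene, aberration], lr=0.05)
for step in range(300):
    pred = torch.stack([forward_model(scene.clamp(min=0), aberration, m) for m in mods])
    loss = torch.mean((pred - measurements) ** 2)   # least-squares objective (Gaussian maximum likelihood)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
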
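For the plasmonic directional image sensors in item 3 above, the following numerical sketch uses an invented, roughly linear angular responsivity (not the measured device response) to illustrate how an angle-dependent pixel response acts as a spatial-frequency filter under coherent illumination: an asymmetric response converts an otherwise invisible phase object into intensity contrast, much like a spatial differentiator.

import numpy as np

N, dx, wavelength = 256, 2e-6, 0.8e-6            # grid, pixel pitch (m), wavelength (assumed)
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
phase_object = 0.3 * np.exp(-(X**2 + Y**2) / (20e-6) ** 2)   # weak, purely phase object (radians)
field = np.exp(1j * phase_object)                # unit-amplitude field: invisible to a normal camera

kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, _ = np.meshgrid(kx, kx)
theta_x = KX * wavelength / (2 * np.pi)          # paraxial angle of incidence along x
H = np.clip(0.5 + 0.5 * theta_x / np.abs(theta_x).max(), 0.0, 1.0)   # assumed asymmetric angular response

filtered = np.fft.ifft2(np.fft.fft2(field) * H)  # pixel response applied as an angular-spectrum filter
image = np.abs(filtered) ** 2
print(image.std() / image.mean())                # nonzero contrast, unlike the unfiltered |field|**2

Partitioning the array into blocks of pixels with different assumed responses H would correspond to the multiple simultaneous filtering operations mentioned in that abstract.
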
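For the incoherent ODT scheme in item 4 above, the sketch below illustrates only the encoding principle in 2D; it is a simplified illustration, not the paper's inverse-scattering reconstruction. Two mutually coherent beams form a fringe on the sample, stepping their relative phase makes the total fluorescence collected by a single detector oscillate in time, and demodulating that temporal signal yields the object's Fourier component at the fringe spatial frequency; scanning the fringe over all spatial frequencies then recovers the fluorophore distribution. The object, grid size, and phase-step count are assumptions.

import numpy as np

N = 64
yy, xx = np.mgrid[0:N, 0:N]
obj = ((xx - 40) ** 2 + (yy - 22) ** 2 < 36).astype(float)   # stand-in fluorophore distribution
phases = 2 * np.pi * np.arange(3) / 3                        # three temporal phase steps per fringe

spectrum = np.zeros((N, N), dtype=complex)
for u in range(N):                                           # scan fringe spatial frequencies
    for v in range(N):
        qx = 2 * np.pi * np.fft.fftfreq(N)[u]
        qy = 2 * np.pi * np.fft.fftfreq(N)[v]
        signal = []
        for p in phases:                                     # temporal phase stepping
            fringe = 1 + np.cos(qx * xx + qy * yy + p)       # two-beam interference pattern
            signal.append(np.sum(obj * fringe))              # bucket (incoherent) detection
        signal = np.asarray(signal)
        # lock-in style demodulation -> complex Fourier coefficient of the object at (qx, qy)
        spectrum[v, u] = np.sum(signal * np.exp(1j * phases)) * 2 / len(phases)

recon = np.fft.ifft2(spectrum).real                          # object recovered from its spectrum
print(np.allclose(recon / recon.max(), obj, atol=0.05))
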
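For the metasurface-coated angle-sensitive pixels in item 5 above, the following sketch illustrates the differential read-out idea in a simplified 1D model (not the reconstruction algorithm developed in that work): two interleaved pixel types with equal and opposite asymmetric angular responses produce signals whose normalized difference is proportional to the local ray angle, i.e., to the transverse phase gradient, which can then be integrated to obtain quantitative phase. The linear angular response, its sensitivity constant, and the zero-phase boundary at the left edge are assumptions.

import numpy as np

N, dx, wavelength = 256, 2e-6, 0.8e-6
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
phase = 1.5 * np.exp(-(X**2 + Y**2) / (60e-6) ** 2)        # test phase object (radians)

k0 = 2 * np.pi / wavelength
theta_x = np.gradient(phase, dx, axis=1) / k0              # local propagation angle along x
intensity = np.ones_like(phase)                            # pure phase object: flat intensity

c = 20.0                                                   # assumed angular sensitivity constant (1/rad)
S_plus = intensity * 0.5 * (1 + c * theta_x)               # pixel type with +x angular response
S_minus = intensity * 0.5 * (1 - c * theta_x)              # pixel type with -x angular response

theta_est = (S_plus - S_minus) / (S_plus + S_minus) / c    # recovered local angle from the pixel pair
phase_est = np.cumsum(theta_est * k0 * dx, axis=1)         # integrate d(phase)/dx along x
phase_est -= phase_est[:, :1]                              # anchor the phase to 0 at the left edge
print(np.abs(phase_est - phase).max())                     # small residual discretization error
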