Title: Bijective-constrained cycle-consistent deep learning for optics-free imaging and classification

Many deep learning approaches to computational imaging problems have proven successful by relying solely on data. However, when applied to the raw output of a bare (optics-free) image sensor, these methods fail to reconstruct structurally diverse target images. In this work we propose a self-consistent supervised model that learns not only the inverse but also the forward model, better constraining the predictions by encouraging the network to model the ideal bijective imaging system. To do this, we employ cycle consistency alongside traditional reconstruction losses; we show that both are needed for incoherent optics-free image reconstruction. By eliminating all optics, we demonstrate imaging with the thinnest camera possible.
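To make the combined objective concrete, below is a minimal sketch assuming paired raw sensor measurements y and target images x, hypothetical inverse and forward networks, and L1 losses; it illustrates the reconstruction-plus-cycle-consistency idea described above, not the authors' exact implementation.

```python
# Minimal sketch (PyTorch) of a supervised objective combining reconstruction and
# cycle-consistency losses. The network architectures, the choice of L1 losses,
# and the weighting lam_cyc are illustrative assumptions.
import torch
import torch.nn.functional as F

def training_loss(inverse_net, forward_net, x, y, lam_cyc=1.0):
    """x: target images; y: paired raw optics-free sensor measurements."""
    x_hat = inverse_net(y)   # inverse model: raw sensor data -> reconstructed image
    y_hat = forward_net(x)   # forward model: image -> predicted sensor measurement

    # Supervised reconstruction losses in both directions.
    loss_rec = F.l1_loss(x_hat, x) + F.l1_loss(y_hat, y)

    # Cycle consistency: forward(inverse(y)) ~ y and inverse(forward(x)) ~ x,
    # pushing the learned pair toward a bijective imaging model.
    loss_cyc = F.l1_loss(forward_net(x_hat), y) + F.l1_loss(inverse_net(y_hat), x)

    return loss_rec + lam_cyc * loss_cyc
```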

 
PAR ID: 10369483
Author(s) / Creator(s): ;
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optica
Volume: 9
Issue: 1
ISSN: 2334-2536
Format(s): Medium: X
Size(s): Article No. 26
Sponsoring Org: National Science Foundation
More Like this
  1. We demonstrate optics-free imaging of complex color and monochrome QR codes using a bare image sensor and trained artificial neural networks (ANNs). The ANN is trained to interpret the raw sensor data for human visualization. The image sensor is placed at a specified gap (1 mm, 5 mm, or 10 mm) from the QR code. We studied the robustness of our approach by experimentally testing the output of the ANNs under perturbations of this gap and of the translational and rotational alignment of the QR code relative to the image sensor. Our demonstration opens up the possibility of using completely optics-free, non-anthropocentric cameras for application-specific imaging of complex, non-sparse objects.
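As a rough illustration of this kind of training (the published architecture and hyperparameters are not given here; everything below is an assumption), a small supervised model mapping raw bare-sensor frames to QR-code images could look like:

```python
# Hypothetical sketch: supervised training of an ANN that maps raw bare-sensor
# frames to the corresponding QR-code images for human visualization. Sizes,
# architecture, and optimizer settings are illustrative placeholders.
import torch
import torch.nn as nn

class RawToQR(nn.Module):
    def __init__(self, sensor_pixels=64 * 64, qr_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_pixels, 2048), nn.ReLU(),
            nn.Linear(2048, qr_pixels), nn.Sigmoid(),   # QR pixel values in [0, 1]
        )

    def forward(self, raw_frame):             # raw_frame: (batch, sensor_pixels)
        return self.net(raw_frame)

model = RawToQR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()                        # binary-valued QR targets

def train_step(raw_frames, qr_targets):
    optimizer.zero_grad()
    loss = loss_fn(model(raw_frames), qr_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```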

     
  2. Abstract

    A theoretical framework is presented for temperature imaging from long‐wavelength infrared (LWIR) thermal radiation (e.g., 8–12 µm) through the end‐to‐end design of a metasurface‐optics frontend and a computational‐reconstruction backend. A new nonlinear reconstruction algorithm, “Planck regression”, is introduced to reconstruct the temperature map from a gray scale sensor image, even in the presence of severe chromatic aberration, by exploiting black body and optical physics particular to thermal imaging. This algorithm is combined with an end‐to‐end approach that optimizes manufacturable, single‐layer metasurfaces to yield the most accurate reconstruction. The designs demonstrate high‐quality, noise‐robust reconstructions of arbitrary temperature maps (including completely random images) in simulations of an ultra‐compact thermal‐imaging device. It is also shown that Planck regression is much more generalizable to arbitrary images than a straightforward neural‐network reconstruction, which requires a large training set of domain‐specific images.
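The core physical idea can be sketched per pixel as follows, ignoring the chromatically aberrated PSF that the full end-to-end method also models; the spectral response, band limits, and signal units below are assumptions for illustration only.

```python
# Toy per-pixel illustration of temperature recovery from a band-integrated LWIR
# signal via Planck's law ("Planck regression" in spirit only; the published
# algorithm also deconvolves the metasurface's chromatic PSF). The flat spectral
# response and arbitrary signal units are assumptions.
import numpy as np
from scipy.optimize import least_squares

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23      # Planck, speed of light, Boltzmann
wavelengths = np.linspace(8e-6, 12e-6, 200)   # LWIR band, meters
response = np.ones_like(wavelengths)          # assumed flat spectral response

def planck_radiance(wl, T):
    """Blackbody spectral radiance B(wl, T)."""
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

def band_signal(T):
    """Predicted sensor signal (arbitrary units) for a pixel at temperature T."""
    return np.trapz(response * planck_radiance(wavelengths, T), wavelengths)

def recover_temperature(measured, T0=300.0):
    """Fit T so the predicted band signal matches the measured pixel value."""
    fit = least_squares(lambda T: band_signal(T[0]) - measured,
                        x0=[T0], bounds=(200.0, 400.0))
    return fit.x[0]

# Self-check: simulate a 320 K pixel and recover its temperature (~320.0).
print(recover_temperature(band_signal(320.0)))
```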

     
  3. Abstract

    Imaging through diffusers presents a challenging problem, with various digital image reconstruction solutions demonstrated to date using computers. Here, we present a computer-free, all-optical image reconstruction method to see through random diffusers at the speed of light. Using deep learning, a set of transmissive diffractive surfaces is trained to all-optically reconstruct images of arbitrary objects that are completely covered by unknown, random phase diffusers. After the training stage, which is a one-time effort, the resulting diffractive surfaces are fabricated and form a passive optical network that is physically positioned between the unknown object and the image plane to all-optically reconstruct the object pattern through an unknown, new phase diffuser. We experimentally demonstrated this concept using coherent THz illumination and all-optically reconstructed objects distorted by unknown, random diffusers that were never used during training. Unlike digital methods, all-optical diffractive reconstruction requires no power except for the illumination light. This diffractive solution to seeing through diffusers can be extended to other wavelengths and might fuel applications in biomedical imaging, astronomy, atmospheric sciences, oceanography, security, robotics, and autonomous vehicles, among many others.
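A hypothetical numerical sketch of how such a diffractive network can be simulated and trained in software before fabrication is shown below; the layer count, pixel pitch, wavelength, and spacing are placeholder values, not the published design.

```python
# Hypothetical simulation of a trainable diffractive network: each surface is a
# learnable phase mask, and the field propagates between surfaces with the
# angular-spectrum method. Geometry and wavelength are placeholder values.
import torch
import torch.nn as nn

class DiffractiveNet(nn.Module):
    def __init__(self, n_layers=3, n=128, pitch=0.4e-3, wavelength=0.75e-3, z=30e-3):
        super().__init__()
        self.phases = nn.ParameterList(
            [nn.Parameter(torch.zeros(n, n)) for _ in range(n_layers)])
        fx = torch.fft.fftfreq(n, d=pitch)                 # spatial frequencies
        f2 = fx[None, :] ** 2 + fx[:, None] ** 2
        arg = 1.0 / wavelength**2 - f2
        kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
        # Free-space transfer function; evanescent components are discarded.
        self.register_buffer("H", torch.exp(1j * kz * z) * (arg > 0).float())

    def propagate(self, u):                                # angular-spectrum step
        return torch.fft.ifft2(torch.fft.fft2(u) * self.H)

    def forward(self, u):                   # u: complex field leaving the diffuser
        for phi in self.phases:
            u = self.propagate(u) * torch.exp(1j * phi)    # phase-only surface
        return self.propagate(u).abs() ** 2                # intensity at image plane

# Training idea (one-time effort): minimize a loss between forward(distorted
# field) and the undistorted object image over many random diffusers, then
# fabricate the learned phase masks as passive optical elements.
```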

     
  4. Diffraction-limited imaging in epi-fluorescence microscopy remains a challenge when sample aberrations are present or when the region of interest lies deep within an inhomogeneous medium. Adaptive optics is an attractive solution, albeit with a limited field of view and relatively complicated systems. Alternatively, reconstruction algorithms have been developed over the years to correct for aberrations. Unfortunately, purely post-processing techniques tend to be ill-posed and provide only incremental improvements in image quality. Here, we report a computational optical approach that uses unknown speckle illumination and a matched reconstruction algorithm to correct for aberrations and reach or surpass diffraction-limited resolution. The data acquisition is performed by shifting an unknown speckle pattern with respect to a fluorescent object. A key advantage is that the speckle statistics are preserved upon propagation through the aberrations, which avoids the double pass of information through the aberrating medium typical of epi-fluorescence microscopy. The method simultaneously recovers a high-resolution image, the point spread function of the system that contains the aberrations, the speckle illumination pattern, and the shift positions.
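A heavily simplified sketch of such a forward model and a gradient-based joint fit is given below; it assumes the speckle shifts are known (the described method also recovers them) and omits positivity/support constraints. All names and settings are illustrative assumptions.

```python
# Schematic forward model and joint reconstruction for speckle-illumination
# fluorescence imaging. Simplified: shifts are assumed known here, and priors/
# constraints used by practical algorithms are omitted.
import torch

def forward_model(obj, speckle, otf, shift):
    """One frame: shift the speckle, excite the object, blur with the (aberrated) PSF."""
    shifted = torch.roll(speckle, shifts=(shift[0], shift[1]), dims=(0, 1))
    excited = obj * shifted
    return torch.fft.ifft2(torch.fft.fft2(excited) * otf).real

def joint_reconstruct(frames, shifts, n, n_iters=500, lr=0.05):
    obj = torch.rand(n, n, requires_grad=True)      # unknown high-resolution object
    speckle = torch.rand(n, n, requires_grad=True)  # unknown speckle illumination
    psf = torch.rand(n, n, requires_grad=True)      # unknown PSF containing aberrations
    opt = torch.optim.Adam([obj, speckle, psf], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        otf = torch.fft.fft2(psf)                   # optical transfer function
        loss = sum(((forward_model(obj, speckle, otf, s) - f) ** 2).mean()
                   for f, s in zip(frames, shifts))
        loss.backward()
        opt.step()
    return obj.detach(), speckle.detach(), psf.detach()
```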

     
  5. Abstract

    We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel size-limited and diffraction-limited coherent imaging systems. The capabilities of this approach are experimentally validated by super-resolving complex-valued images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction or enhancement problems in optics.
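A minimal sketch of the kind of GAN objective involved is shown below; the published network architectures, loss weights, and handling of complex-valued holographic data are not reproduced, and treating the generator/discriminator as given modules is an assumption.

```python
# Minimal sketch of a GAN objective for image super-resolution. Generator and
# discriminator modules are assumed to exist; the fidelity/adversarial weighting
# alpha is an illustrative choice.
import torch
import torch.nn.functional as F

def gan_losses(generator, discriminator, lowres, highres, alpha=0.01):
    sr = generator(lowres)                            # super-resolved estimate

    # Generator: pixel-wise fidelity plus an adversarial term.
    d_on_sr = discriminator(sr)
    g_loss = F.mse_loss(sr, highres) + alpha * F.binary_cross_entropy_with_logits(
        d_on_sr, torch.ones_like(d_on_sr))

    # Discriminator: separate true high-resolution images from generator outputs.
    d_real = discriminator(highres)
    d_fake = discriminator(sr.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    return g_loss, d_loss
```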

     