Search for: All records

Creators/Authors contains: "Nelson, Soren"


  1. Many deep learning approaches to computational imaging problems have proven successful by relying solely on data. However, when applied to the raw output of a bare (optics-free) image sensor, these methods fail to reconstruct structurally diverse target images. In this work we propose a self-consistent supervised model that learns not only the inverse but also the forward model, constraining the predictions by encouraging the network to model the ideal bijective imaging system. To do this, we employ cycle consistency alongside traditional reconstruction losses, both of which we show are needed for incoherent optics-free image reconstruction. By eliminating all optics, we demonstrate imaging with the thinnest camera possible.

     
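A minimal sketch of the training scheme described in entry 1: an inverse network and a forward network are trained jointly with supervised reconstruction losses plus cycle-consistency terms. The network sizes, loss weights, and the random stand-in data are illustrative assumptions, not the authors' actual configuration.

```python
# Self-consistent (cycle-consistent) supervised training sketch for
# optics-free image reconstruction. All dimensions and weights are assumed.
import torch
import torch.nn as nn

IMG, RAW = 32 * 32, 64 * 64          # assumed image / raw-sensor dimensions

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 512), nn.ReLU(), nn.Linear(512, n_out))

inverse = mlp(RAW, IMG)              # G: raw sensor reading -> image
forward = mlp(IMG, RAW)              # F: image -> raw sensor reading

opt = torch.optim.Adam(list(inverse.parameters()) + list(forward.parameters()), lr=1e-4)
l1 = nn.L1Loss()

for step in range(100):
    x = torch.rand(8, IMG)           # ground-truth images (stand-in data)
    y = torch.rand(8, RAW)           # paired raw sensor measurements (stand-in)

    x_hat, y_hat = inverse(y), forward(x)
    # Supervised reconstruction losses in both directions
    loss_rec = l1(x_hat, x) + l1(y_hat, y)
    # Cycle-consistency losses encouraging F and G to act as mutual inverses
    loss_cyc = l1(forward(x_hat), y) + l1(inverse(y_hat), x)

    loss = loss_rec + 0.5 * loss_cyc  # 0.5 is an assumed weighting
    opt.zero_grad()
    loss.backward()
    opt.step()
```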
  2. Deep-brain microscopy is strongly limited by the size of the imaging probe, both in terms of achievable resolution and potential trauma due to surgery. Here, we show that a segment of an ultra-thin multi-mode fiber (cannula) can replace the bulky microscope objective inside the brain. By creating a self-consistent deep neural network that is trained to reconstruct anthropocentric images from the raw signal transported by the cannula, we demonstrate single-cell resolution (<10 μm), depth-sectioning resolution of 40 μm, and a field of view of 200 μm, all with green-fluorescent-protein-labelled neurons imaged at depths as large as 1.4 mm from the brain surface. Since ground-truth images at these depths are challenging to obtain in vivo, we propose a novel ensemble method that averages the reconstructed images from disparate deep-neural-network architectures. Finally, we demonstrate dynamic imaging of moving GCaMP-labelled C. elegans worms. Our approach dramatically simplifies deep-brain microscopy.

     
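A minimal sketch of the ensemble idea from entry 2: reconstructions from several independently trained networks are averaged to give a more trustworthy image when in-vivo ground truth is unavailable. The three placeholder modules below only stand in for the disparate deep-network architectures used in the work.

```python
# Ensemble averaging of reconstructions from several stand-in networks.
import torch
import torch.nn as nn

class TinyReconstructor(nn.Module):
    """Placeholder for one trained cannula-signal-to-image reconstruction network."""
    def __init__(self, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, hidden), nn.ReLU(), nn.Linear(hidden, 1024))
    def forward(self, raw):
        return self.net(raw)

models = [TinyReconstructor(h) for h in (128, 256, 512)]   # disparate capacities

def ensemble_reconstruction(raw_signal):
    # Average the per-model reconstructions (weights by validation score are possible).
    with torch.no_grad():
        preds = torch.stack([m(raw_signal) for m in models])
    return preds.mean(dim=0)

raw = torch.rand(1, 256)                 # stand-in for the raw cannula signal
image = ensemble_reconstruction(raw)     # flattened 32x32 reconstruction
```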
  3. We experimentally demonstrate a camera whose primary optic is a cannula/needle (diameter = 0.22 mm and length = 12.5 mm) that acts as a light pipe transporting light intensity from an object plane (35 cm away) to its opposite end. Deep neural networks (DNNs) are used to reconstruct color and grayscale images with a field of view of 18° and angular resolution of ∼0.4°. We showed a large effective demagnification of 127×. Most interestingly, we showed that such a camera could achieve close to diffraction-limited performance with an effective numerical aperture of 0.045, depth of focus of ∼16 µm, and resolution close to the sensor pixel size (3.2 µm). When trained on images with depth information, the DNN can create depth maps. Finally, we show DNN-based classification of the EMNIST dataset before and after image reconstructions. The former could be useful for imaging with enhanced privacy.

     
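A minimal sketch of the classification comparison in entry 3: one classifier operates on raw cannula-camera frames ("before reconstruction", the potentially privacy-preserving case) and one on reconstructed EMNIST-sized images ("after"). The shapes, models, and random data are placeholders, not the authors' pipeline.

```python
# Classifying raw sensor frames vs. reconstructed images (stand-in models/data).
import torch
import torch.nn as nn

def classifier(n_in, n_classes=47):      # EMNIST "balanced" has 47 classes
    return nn.Sequential(nn.Flatten(), nn.Linear(n_in, 256), nn.ReLU(),
                         nn.Linear(256, n_classes))

clf_raw = classifier(64 * 64)            # operates on raw sensor frames ("before")
clf_rec = classifier(28 * 28)            # operates on reconstructed images ("after")

def train_step(model, batch, labels, opt):
    loss = nn.functional.cross_entropy(model(batch), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

opt_raw = torch.optim.Adam(clf_raw.parameters(), lr=1e-3)
raw_frames = torch.rand(16, 1, 64, 64)   # stand-in raw cannula-camera frames
labels = torch.randint(0, 47, (16,))
train_step(clf_raw, raw_frames, labels, opt_raw)
# The same loop applied to clf_rec on reconstructed images gives the "after" accuracy.
```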
  4. We demonstrate optics-free imaging of complex color and monochrome QR codes using a bare image sensor and trained artificial neural networks (ANNs). The ANN is trained to interpret the raw sensor data for human visualization. The image sensor is placed at a specified gap (1 mm, 5 mm, and 10 mm) from the QR code. We studied the robustness of our approach by experimentally testing the output of the ANNs under perturbations of this gap and of the translational and rotational alignment of the QR code to the image sensor. Our demonstration opens up the possibility of using completely optics-free, non-anthropocentric cameras for application-specific imaging of complex, non-sparse objects.

     
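A minimal sketch of the robustness study in entry 4: a trained ANN that maps raw sensor frames to QR-code images is evaluated while the input is perturbed. In the experiment the perturbations are physical (gap, translation, rotation); here they are only simulated with small affine transforms of the frame, and the perturbation ranges and scoring step are assumptions.

```python
# Evaluating a stand-in raw-sensor-to-QR network under simulated misalignment.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

ann = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 512), nn.ReLU(),
                    nn.Linear(512, 29 * 29))          # stand-in raw -> QR network

def perturb(frame, angle_deg, shift_px):
    """Rotate and translate a (1, 1, H, W) frame to mimic sensor/QR misalignment."""
    theta = math.radians(angle_deg)
    h = frame.shape[-1]
    # affine_grid uses normalized coordinates, so shift_px pixels ~ 2*shift_px/h
    mat = torch.tensor([[[math.cos(theta), -math.sin(theta), 2.0 * shift_px / h],
                         [math.sin(theta),  math.cos(theta), 0.0]]], dtype=torch.float32)
    grid = F.affine_grid(mat, list(frame.shape), align_corners=False)
    return F.grid_sample(frame, grid, align_corners=False)

frame = torch.rand(1, 1, 64, 64)                      # stand-in raw sensor frame
for angle in (0.0, 1.0, 2.0):                         # degrees of rotation
    for shift in (0, 1, 2):                           # pixels of translation
        qr_pred = ann(perturb(frame, angle, shift)).reshape(29, 29)
        # In the experiment, reconstruction quality / QR decodability would be
        # scored here (e.g. bit-error rate against the known code).
```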