Flat lenses with focal length tunability can enable the development of highly integrated imaging systems. This work uses machine learning to inverse design a wavelength-multiplexed multifocal multilevel diffractive lens (MMDL). The MMDL output is multiplexed across three color channels, red (650 nm), green (550 nm), and blue (450 nm), achieving focal lengths of 4 mm, 20 mm, and 40 mm in these channels, respectively. In contrast to conventional diffractive lenses, the focal length of the MMDL therefore scales strongly with wavelength. The MMDL consists of concentric rings of equal width and varying height. The machine-learning method optimizes the height of each ring to produce the phase distribution required for a different focal length at each design wavelength. The designed MMDL was fabricated with a direct-write laser lithography system using gray-scale exposure. The demonstrated singlet lens is miniature and polarization insensitive, and can therefore potentially be applied in integrated optical imaging systems to achieve zoom functions.
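The phase-matching idea behind such a design can be illustrated with a short optimization sketch. This is a minimal stand-in, not the authors' code: it assumes a scalar thin-element phase model, an illustrative ring geometry and material index, and a generic gradient-based optimizer in place of the paper's machine-learning method.

```python
# Minimal sketch (not the authors' code): optimize ring heights of a
# wavelength-multiplexed multilevel diffractive lens under a thin-element
# phase model, matching each wavelength's ideal hyperbolic lens phase.
import numpy as np
from scipy.optimize import minimize

wavelengths = np.array([650e-9, 550e-9, 450e-9])   # red, green, blue
focal_lengths = np.array([4e-3, 20e-3, 40e-3])     # target focal lengths
n_rings, ring_width = 100, 1e-6                    # illustrative geometry
n_material = 1.5                                   # assumed refractive index
r = (np.arange(n_rings) + 0.5) * ring_width        # ring-center radii

def target_phase(lam, f):
    """Ideal (wrapped) lens phase at each ring radius for wavelength lam."""
    return (-2 * np.pi / lam) * (np.sqrt(r**2 + f**2) - f) % (2 * np.pi)

def loss(heights):
    """Sum of squared wrapped-phase errors over the three color channels."""
    total = 0.0
    for lam, f in zip(wavelengths, focal_lengths):
        phi = (2 * np.pi / lam) * (n_material - 1) * heights % (2 * np.pi)
        err = np.angle(np.exp(1j * (phi - target_phase(lam, f))))
        total += np.sum(err**2)
    return total

h0 = np.random.uniform(0, 2e-6, n_rings)           # initial ring heights
res = minimize(loss, h0, method="L-BFGS-B",
               bounds=[(0, 2.5e-6)] * n_rings)
print("optimized loss:", res.fun)
```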
-
By engineering the point-spread function (PSF) of single molecules, different fluorophore species can be imaged simultaneously and distinguished by their unique PSF patterns. Here, we insert a silicon-dioxide phase plate at the Fourier plane of the detection path of a wide-field fluorescence microscope to produce distinguishable PSFs (X-PSFs) at different wavelengths. We demonstrate that the resulting PSFs can be localized spatially and spectrally using a maximum-likelihood estimation algorithm and can be utilized for hyper-spectral super-resolution microscopy of biological samples. We produced super-resolution images of fixed U2OS cells using X-PSFs for dSTORM imaging with simultaneous illumination of up to three fluorophore species. The species were distinguished only by the PSF pattern. We achieved ∼21-nm lateral localization precision (FWHM) and ∼17-nm axial precision (FWHM) with an average of 1,800–3,500 photons per PSF and a background as high as 130–400 photons per pixel. The modified PSF distinguished fluorescent probes with ∼80 nm separation between spectral peaks.
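The localization step can be illustrated with a simple maximum-likelihood fit under Poisson noise. This is an illustrative sketch only: it uses a Gaussian PSF model as a stand-in for the engineered X-PSF templates; in the actual method, the spectral assignment would follow from comparing likelihoods across wavelength-dependent PSF models.

```python
# Minimal sketch (illustrative, not the paper's estimator): maximum-likelihood
# localization of a single emitter under Poisson noise, using a Gaussian PSF
# as a stand-in for the engineered X-PSF template.
import numpy as np
from scipy.optimize import minimize

def psf_model(params, xx, yy):
    x0, y0, n_photons, bg = params
    sigma = 1.3  # assumed PSF width in pixels
    g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    return n_photons * g / g.sum() + bg

def neg_log_likelihood(params, image, xx, yy):
    mu = psf_model(params, xx, yy)
    return np.sum(mu - image * np.log(mu + 1e-12))  # Poisson NLL (up to const.)

# Simulate one noisy PSF and localize it.
size = 15
yy, xx = np.mgrid[0:size, 0:size]
true = (7.3, 6.8, 2500.0, 200.0)                    # x0, y0, photons, bg/pixel
image = np.random.poisson(psf_model(true, xx, yy))
fit = minimize(neg_log_likelihood, x0=(7.0, 7.0, 2000.0, 150.0),
               args=(image, xx, yy), method="Nelder-Mead")
print("estimated position:", fit.x[:2])
```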
-
Many deep learning approaches to computational imaging problems have proven successful by relying solely on data. However, when applied to the raw output of a bare (optics-free) image sensor, these methods fail to reconstruct structurally diverse target images. In this work we propose a self-consistent supervised model that learns not only the inverse but also the forward model, better constraining the predictions by encouraging the network to model an ideal bijective imaging system. To do this, we employ cycle consistency alongside traditional reconstruction losses, both of which we show are needed for incoherent optics-free image reconstruction. By eliminating all optics, we demonstrate imaging with the thinnest camera possible.
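The training objective described above can be sketched as follows. This is a minimal, assumed implementation rather than the paper's network: the toy fully connected models, 64 x 64 image size, and equal loss weights are all illustrative.

```python
# Minimal sketch (assumed architecture, not the paper's network): combine a
# reconstruction loss with cycle-consistency losses so the inverse network G
# and the learned forward model F approximate a bijective imaging system.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 64 * 64), nn.ReLU(),
                  nn.Linear(64 * 64, 64 * 64))      # sensor -> image (inverse)
F = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 64 * 64), nn.ReLU(),
                  nn.Linear(64 * 64, 64 * 64))      # image -> sensor (forward)
l1 = nn.L1Loss()
opt = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=1e-4)

def training_step(sensor, target):
    """sensor: raw optics-free measurement, target: ground-truth image."""
    recon = G(sensor)                               # inverse prediction
    resim = F(target)                               # learned forward model
    loss = (l1(recon, target.flatten(1))            # reconstruction
            + l1(resim, sensor.flatten(1))          # forward fidelity
            + l1(F(recon), sensor.flatten(1))       # cycle: sensor->image->sensor
            + l1(G(resim), target.flatten(1)))      # cycle: image->sensor->image
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example with random tensors standing in for real data.
loss_val = training_step(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))
```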
-
In this paper, we discuss flat programmable multi-level diffractive lenses (PMDL) enabled by phase-change materials working in the near-infrared and visible ranges. The high real-part refractive-index contrast (Δn ∼ 0.6) of Sb2S3 between its amorphous and crystalline states, together with extremely low losses in the near-infrared, enables the PMDL to effectively shift the lens focus when the material is switched between its crystalline and amorphous phases. In the visible band, although losses can become significant as the wavelength is reduced, the lenses still provide good performance as a result of their relatively small thickness (∼1.5λ to 3λ). The PMDL consists of Sb2S3 concentric rings with equal width and varying heights embedded in a glass substrate. The height of each concentric ring was optimized by a modified direct binary search algorithm. The proposed designs show the possibility of realizing programmable lenses at design wavelengths from the near-infrared (850 nm) up to the blue (450 nm) through engineering PMDLs with Sb2S3. Operation at these short wavelengths has, to the best of our knowledge, not been studied so far in reconfigurable lenses with phase-change materials. Our results therefore open a wider range of applications for phase-change materials and show the promise of Sb2S3 for such applications. The proposed lenses are polarization insensitive and have the potential to be applied in dual-functionality devices, optical imaging, and biomedical science.
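A direct-binary-search-style optimization of discrete ring heights can be sketched as below. The figure-of-merit function, number of rings, and height levels are placeholders; the paper's modified DBS and its electromagnetic evaluation of the two material states are more involved.

```python
# Minimal sketch of a direct-binary-search-style height optimization
# (illustrative; the paper's modified DBS and figure of merit differ).
import numpy as np

def optimize_heights(evaluate_fom, n_rings=60, levels=None, sweeps=5, seed=0):
    """Greedy per-ring search over discrete height levels.
    evaluate_fom(heights) -> scalar figure of merit (higher is better),
    e.g. focusing efficiency averaged over the two material states."""
    rng = np.random.default_rng(seed)
    levels = np.linspace(0, 1.5e-6, 8) if levels is None else levels
    heights = rng.choice(levels, size=n_rings)
    best = evaluate_fom(heights)
    for _ in range(sweeps):
        improved = False
        for i in rng.permutation(n_rings):          # visit rings in random order
            for h in levels:                        # try every allowed height
                trial = heights.copy(); trial[i] = h
                fom = evaluate_fom(trial)
                if fom > best:
                    best, heights, improved = fom, trial, True
        if not improved:                            # converged: no ring changed
            break
    return heights, best

# Toy figure of merit standing in for a full diffraction simulation.
heights, fom = optimize_heights(lambda h: -np.var(np.diff(h)))
```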
-
We utilized inverse design to engineer the point-spread function (PSF) of a low-f-number, freeform diffractive microlens in an array, so as to enable an extended depth of focus (DOF). Each square microlens of side 69 µm and focal length 40 µm (in a polymer film, n ∼ 1.47) generated a square PSF of side ∼10 µm that was achromatic over the visible band (450 to 750 nm) and exhibited an extended DOF of ∼ ±2 µm. The microlens has a geometric f/# (focal length divided by aperture size) of 0.58 in the polymer material (0.39 in air). Since each microlens is square, the microlens array (MLA) can achieve a 100% fill factor. By placing this MLA directly on a high-resolution print, we demonstrated integral imaging with applications in physical security. The extended DOF preserves the optical effects even with expected film-thickness variations, thereby increasing robustness in practical applications. Since these multi-level diffractive MLAs are fabricated using UV-nanoimprint lithography, they have the potential for low-cost, large-volume manufacturing.
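The quoted f-numbers follow from simple arithmetic; the short check below uses only the values given in the abstract.

```python
# Quick check of the quoted f-numbers (illustrative arithmetic only).
aperture = 69e-6                         # microlens side length
f_polymer = 40e-6                        # focal length inside the polymer film
n_polymer = 1.47
print(f_polymer / aperture)              # ~0.58, geometric f/# in the polymer
print(f_polymer / n_polymer / aperture)  # ~0.39, equivalent f/# in air
```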
-
In this work, we explore inverse-designed reconfigurable digital metamaterial structures based on the phase-change material Sb2Se3 for efficient and compact integrated nanophotonics. An exemplary design of a 1 × 2 optical switch consisting of a 3 µm × 3 µm pixelated domain is demonstrated. We show that: (i) direct optimization of a domain containing only Si and Sb2Se3 pixels does not lead to a high extinction ratio between output ports in the amorphous state, owing to the small index contrast between Si and Sb2Se3 in that state. As a result, (ii) topology optimization, e.g., the addition of air pixels, is required to provide an initial asymmetry that aids the amorphous state's response. Furthermore, (iii) the combination of low loss and high refractive-index change in Sb2Se3, which is unique among phase-change materials in the telecommunications band at 1550 nm, translates into excellent projected performance: the optimized device structure exhibits a low insertion loss (∼1.5 dB) and a high extinction ratio (>18 dB) for both phase states.
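The reported switch metrics relate to simulated port powers through standard decibel formulas; the sketch below uses example transmissions chosen only to be consistent with the quoted ∼1.5 dB insertion loss and >18 dB extinction ratio, not taken from the paper.

```python
# Minimal sketch: insertion loss and extinction ratio of a 1x2 switch from
# port transmissions (example numbers, not the paper's simulated results).
import math

def insertion_loss_db(p_through):
    """IL = -10*log10(fraction of input power reaching the intended port)."""
    return -10 * math.log10(p_through)

def extinction_ratio_db(p_on_port, p_off_port):
    """ER = 10*log10(intended-port power / unintended-port power)."""
    return 10 * math.log10(p_on_port / p_off_port)

# Example: 70% of the input reaches port 1, 1% leaks into port 2.
print(insertion_loss_db(0.70))           # ~1.5 dB insertion loss
print(extinction_ratio_db(0.70, 0.01))   # ~18.5 dB extinction ratio
```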
-
Deep-brain microscopy is strongly limited by the size of the imaging probe, both in terms of achievable resolution and potential trauma due to surgery. Here, we show that a segment of an ultra-thin multi-mode fiber (cannula) can replace the bulky microscope objective inside the brain. By creating a self-consistent deep neural network that is trained to reconstruct images from the raw signal transported by the cannula, we demonstrate single-cell resolution (<10 µm), depth-sectioning resolution of 40 µm, and a field of view of 200 µm, all with green-fluorescent-protein-labelled neurons imaged at depths as large as 1.4 mm from the brain surface. Since ground-truth images at these depths are challenging to obtain in vivo, we propose a novel ensemble method that averages the reconstructed images from disparate deep-neural-network architectures. Finally, we demonstrate dynamic imaging of moving GCaMP-labelled C. elegans worms. Our approach dramatically simplifies deep-brain microscopy.
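The ensemble step can be sketched in a few lines; the stand-in "models" below are placeholders for independently trained deep-network architectures, not the actual reconstruction networks.

```python
# Minimal sketch of the ensemble idea: average reconstructions from several
# independently trained networks (models and shapes are illustrative).
import numpy as np

def ensemble_reconstruct(raw_signal, models):
    """Each model maps the raw cannula signal to an image estimate; the
    ensemble output is the pixel-wise mean across architectures."""
    recons = [m(raw_signal) for m in models]
    return np.mean(np.stack(recons, axis=0), axis=0)

# Stand-in "models": any callables with the same output shape would do.
models = [lambda x, k=k: np.roll(x, k, axis=0) for k in range(3)]
image = ensemble_reconstruct(np.random.rand(200, 200), models)
```
-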
We experimentally demonstrate a camera whose primary optic is a cannula/needle that acts as a light pipe, transporting light intensity from an object plane (35 cm away) to its opposite end. Deep neural networks (DNNs) are used to reconstruct color and grayscale images with a field of view of 18° and fine angular resolution, corresponding to a large effective demagnification. Most interestingly, we show that such a camera can achieve close to diffraction-limited performance with an effective numerical aperture of 0.045, a large depth of focus, and resolution close to the sensor pixel size (3.2 µm). When trained on images with depth information, the DNN can create depth maps. Finally, we show DNN-based classification of the EMNIST dataset before and after image reconstruction. The former could be useful for imaging with enhanced privacy.
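The before/after-reconstruction classification comparison could be set up along the following lines. This is a hypothetical sketch: the random arrays stand in for raw sensor frames, their DNN reconstructions, and EMNIST labels, and a simple logistic-regression classifier replaces whatever classifier was actually used.

```python
# Minimal sketch: compare classification on raw light-pipe measurements versus
# DNN-reconstructed images (data loading and models are placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def accuracy(features, labels):
    """Train a simple classifier and report held-out accuracy."""
    x_tr, x_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return clf.score(x_te, y_te)

# Placeholders: raw sensor frames, their reconstructions, and class labels.
raw = np.random.rand(500, 28 * 28)
recon = np.random.rand(500, 28 * 28)
labels = np.random.randint(0, 10, 500)
print("raw-measurement accuracy:", accuracy(raw, labels))
print("reconstruction accuracy:", accuracy(recon, labels))
```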