Rapid advancements in autonomous systems and the Internet of Things have necessitated the development of compact and low-power image sensors to bridge the gap between the digital and physical world. To that end, subwavelength diffractive optics, commonly known as meta-optics, have garnered significant interest from the optics and photonics community due to their ability to achieve multiple functionalities within a small form factor. Despite years of research, however, the performance of meta-optics has often remained inferior to that of traditional refractive optics. In parallel, computational imaging techniques have emerged as a promising path to miniaturize optical systems, albeit often at the expense of higher power and latency. The lack of desired performance from either meta-optical or computational solutions has motivated researchers to look into a jointly optimized meta-optical–digital solution. While the meta-optical front end can preprocess the scene to reduce the computational load on the digital back end, the computational back end can in turn relax requirements on the meta-optics. In this Perspective, we provide an overview of this up-and-coming field, termed here “software-defined meta-optics.” We highlight recent contributions that have advanced the current state of the art and point out directions toward which future research efforts should be directed to leverage the full potential of subwavelength photonic platforms in imaging and sensing applications. Synergistic technology transfer and commercialization of meta-optic technologies will pave the way for highly efficient, compact, and low-power imaging systems of the future.
-
Abstract Endoscopes are an important component for the development of minimally invasive surgeries. Their size is one of the most critical aspects, because smaller and less rigid endoscopes enable higher agility, provide greater accessibility, and induce less stress on the surrounding tissue. In all existing endoscopes, the size of the optics poses a major limitation in miniaturization of the imaging system. Not only is making small optics difficult, but their performance also degrades with downscaling. Meta-optics have recently emerged as a promising candidate to drastically miniaturize optics while achieving similar functionalities with significantly reduced size. Herein, we report an inverse-designed meta-optic which, combined with a coherent fiber bundle, enables a 33% reduction in the rigid tip length over traditional gradient-index (GRIN) lenses. We use the meta-optic fiber endoscope (MOFIE) to demonstrate real-time video capture in full visible color, the spatial resolution of which is primarily limited by the fiber itself. Our work shows the potential of meta-optics for integration and miniaturization of biomedical devices towards minimally invasive surgery.
-
Abstract Miniature varifocal lenses are crucial for many applications requiring compact optical systems. Here, utilizing electro-mechanically actuated, 0.5-mm-aperture infrared Alvarez meta-optics, we demonstrate 3.1 mm (200 diopters) of focal length tuning with an actuation voltage below 40 V. This constitutes the largest focal length tuning in any low-power, electro-mechanically actuated meta-optic, enabled by the high energy density of comb-drive actuators, which produce large displacements at relatively low voltage. The demonstrated device is produced by a novel nanofabrication process that accommodates meta-optics with larger apertures and improves alignment between the meta-optics via flip-chip bonding. The whole fabrication process is CMOS compatible and amenable to high-throughput manufacturing.
-
In recent years, convolutional neural networks (CNNs) have enabled ubiquitous image processing applications. As such, CNNs require fast forward-propagation runtime to process high-resolution visual streams in real time. This is still a challenging task, even with state-of-the-art graphics and tensor processing units. The bottleneck in computational efficiency primarily occurs in the convolutional layers. Performing convolutions in the Fourier domain is a promising way to accelerate forward propagation, since it transforms convolutions into elementwise multiplications, which are considerably faster to compute for large kernels. Furthermore, such computation could be implemented using an optical system with orders-of-magnitude faster operation. However, a major challenge in using this spectral approach, as well as in an optical implementation of CNNs, is the inclusion of a nonlinearity between each convolutional layer, without which CNN performance drops dramatically. Here, we propose a spectral CNN linear counterpart (SCLC) network architecture and its optical implementation. We propose a hybrid platform with an optical front end to perform a large number of linear operations, followed by an electronic back end. The key contribution is to develop a knowledge distillation (KD) approach to circumvent the need for nonlinear layers between the convolutional layers and successfully train such networks. While the KD approach is known in machine learning as an effective process for network pruning, we adapt the approach to transfer the knowledge from a nonlinear network (teacher) to a linear counterpart (student), where we can exploit the inherent parallelism of light. We show that the KD approach can achieve performance that easily surpasses the standard linear version of a CNN and can approach the performance of the nonlinear network. Our simulations show that the possibility of increasing the resolution of the input image allows our proposed optical linear network to perform more efficiently than a nonlinear network with the same accuracy on two fundamental image processing tasks: (i) object classification and (ii) semantic segmentation.
-
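The convolution theorem that underlies the spectral (SCLC) approach can be checked in a few lines. A minimal NumPy sketch with toy 8×8 arrays and circular boundary conditions (an illustration of the mathematical identity, not the SCLC architecture itself):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))   # toy feature map (hypothetical size)
kernel = rng.standard_normal((8, 8))  # same-size kernel for simplicity

# Convolution theorem: circular convolution in the spatial domain equals
# elementwise multiplication of the 2D spectra, followed by an inverse FFT.
spectral_conv = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

# Direct circular convolution for comparison -- the nested-loop cost that
# the spectral shortcut avoids for large kernels.
N = image.shape[0]
direct = np.zeros_like(image)
for i in range(N):
    for j in range(N):
        for k in range(N):
            for l in range(N):
                direct[i, j] += image[k, l] * kernel[(i - k) % N, (j - l) % N]

assert np.allclose(spectral_conv, direct)
```

In an optical implementation the elementwise product is performed by light in the Fourier plane, which is where the parallelism advantage comes from.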
Abstract We report an inverse-designed, high-numerical-aperture (∼0.44), extended depth of focus (EDOF) meta-optic, which exhibits a lens-like point spread function (PSF). The EDOF meta-optic maintains a focusing efficiency comparable to that of a hyperboloid metalens throughout its depth of focus. Exploiting the extended depth of focus and computational post-processing, we demonstrate broadband imaging across the full visible spectrum using a 1 mm, f/1 meta-optic. Unlike other canonical EDOF meta-optics, characterized by phase masks such as a log-asphere or cubic function, our design exhibits a highly invariant PSF across an ∼290 nm optical bandwidth, which leads to significantly improved image quality, as quantified by structural similarity metrics.
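The computational post-processing that pairs with a depth- and wavelength-invariant PSF is typically a single, fixed deconvolution. A minimal sketch using a Wiener filter and a hypothetical Gaussian PSF (the actual meta-optic PSF and reconstruction pipeline in the paper differ; `nsr` is an assumed scalar noise-to-signal ratio):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Wiener deconvolution with a scalar noise-to-signal ratio (nsr)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))        # PSF centered at origin
    filt = np.conj(H) / (np.abs(H) ** 2 + nsr)    # regularized inverse filter
    return np.fft.ifft2(np.fft.fft2(blurred) * filt).real

# Hypothetical invariant PSF: a small, unit-energy Gaussian blur.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()

scene = np.zeros((n, n)); scene[20:44, 20:44] = 1.0   # toy target
blurred = np.fft.ifft2(np.fft.fft2(scene)
                       * np.fft.fft2(np.fft.ifftshift(psf))).real
restored = wiener_deconvolve(blurred, psf)
```

Because the PSF is invariant across depth and wavelength, one such filter suffices for the whole focal range, which is what makes the computational back end lightweight.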
-
Extended depth of focus (EDOF) optics can enable lower-complexity optical imaging systems when compared to active focusing solutions. With existing EDOF optics, however, it is difficult to achieve high resolution and high collection efficiency simultaneously. The subwavelength spacing of scatterers in a meta-optic enables the engineering of very steep phase gradients; thus, meta-optics can achieve both a large physical aperture and a high numerical aperture. Here, we demonstrate a fast EDOF meta-optic operating at visible wavelengths, with an aperture of 2 mm and a focal range from 3.5 mm to 14.5 mm (286 diopters to 69 diopters), a substantial elongation of the depth of focus relative to a standard lens. Depth-independent performance is shown by imaging at a range of finite conjugates, with a minimum spatial resolution of 50.8 cycles/mm. We also demonstrate operation of a directly integrated EDOF meta-optic camera module to evaluate imaging at multiple object distances, a functionality that would otherwise require a varifocal lens.
-
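The focal range quoted above converts to optical power via P = 1/f, with f in meters, which is how the 286- and 69-diopter figures arise:

```python
# Optical power in diopters is the reciprocal of focal length in meters.
def diopters(focal_length_mm):
    return 1.0 / (focal_length_mm * 1e-3)

near, far = diopters(3.5), diopters(14.5)   # the 3.5 mm and 14.5 mm focal limits
print(round(near), round(far), round(near - far))  # prints: 286 69 217
```

The roughly 217-diopter span is what a varifocal element would otherwise have to cover mechanically or electrically.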
Abstract Ultrathin meta-optics offer unmatched, multifunctional control of light. Next-generation optical technologies, however, demand unprecedented performance. This will likely require design algorithms surpassing the capability of human intuition. The adjoint method requires explicitly deriving gradients, which is sometimes challenging for certain photonics problems. Existing techniques also comprise a patchwork of application-specific algorithms, each narrow in scope and limited to particular scatterer types. Here, we leverage algorithmic differentiation as used in artificial neural networks, treating photonic design parameters as trainable weights, optical sources as inputs, and encapsulating device performance in the loss function. By solving a complex, degenerate eigenproblem and formulating rigorous coupled-wave analysis as a computational graph, we support both arbitrary, parameterized scatterers and topology optimization. With iteration times below the cost of two forward simulations, typical of adjoint methods, we generate multilayer, multifunctional, and aperiodic meta-optics. As an open-source platform adaptable to other algorithms and problems, we enable fast and flexible meta-optical design.
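The core idea (design parameters as trainable weights, device performance encapsulated in a loss, gradients obtained by algorithmic rather than hand-derived differentiation) can be illustrated with a toy reverse-mode autodiff. The scalar "simulation" below is a stand-in for an RCWA forward solve, not the paper's implementation:

```python
class Var:
    """Minimal scalar reverse-mode autodiff node."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __sub__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return self + other * (-1.0)

    def backward(self, seed=1.0):
        # Accumulate the chain-rule contribution along every path to a leaf.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Gradient descent on one "design parameter". The expression p*p + p - 6
# is a hypothetical figure-of-merit; a real design would build this graph
# from an RCWA simulation instead.
param = 1.0
for _ in range(50):
    p = Var(param)
    q = p * p + p - 6.0
    loss = q * q            # device performance encapsulated as a loss
    loss.backward()         # gradients without hand-derived adjoints
    param -= 0.01 * p.grad  # treat the design parameter like a weight
```

Here `param` converges toward 2, a root of the toy figure-of-merit; the point is that no gradient was derived by hand at any step.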
-
Abstract Nano-optic imagers that modulate light at sub-wavelength scales could enable new applications in diverse domains ranging from robotics to medicine. Although metasurface optics offer a path to such ultra-small imagers, existing methods have achieved image quality far worse than bulky refractive alternatives, fundamentally limited by aberrations at large apertures and low f-numbers. In this work, we close this performance gap by introducing a neural nano-optics imager. We devise a fully differentiable learning framework that learns a metasurface physical structure in conjunction with a neural feature-based image reconstruction algorithm. Experimentally validating the proposed method, we achieve an order of magnitude lower reconstruction error than existing approaches. As such, we present a high-quality, nano-optic imager that combines the widest field-of-view for full-color metasurface operation while simultaneously achieving the largest demonstrated aperture of 0.5 mm at an f-number of 2.
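At the heart of such a fully differentiable framework is an image-formation model relating the metasurface to its PSF. Under scalar Fraunhofer-diffraction assumptions, the PSF is the squared magnitude of the Fourier transform of the pupil function. A NumPy sketch with hypothetical sizes and a made-up defocus aberration (a real framework would express this in an autodiff library so gradients flow back to the metasurface parameters):

```python
import numpy as np

def psf_from_phase(phase, aperture):
    """Far-field (Fraunhofer) PSF of a phase mask: squared magnitude of the
    Fourier transform of the pupil function, normalized to unit energy."""
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(pupil))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x**2 + y**2 <= (n // 3) ** 2).astype(float)   # circular pupil

flat_psf = psf_from_phase(np.zeros((n, n)), aperture)      # ideal Airy-like spot
aberrated = psf_from_phase(0.02 * (x**2 + y**2) * np.pi / n, aperture)  # defocus
```

Aberrations spread the PSF's energy (lowering its peak), which is exactly the degradation the learned reconstruction network is trained to undo.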
-
Many emerging, high-speed, reconfigurable optical systems are limited by routing complexity when producing dynamic, two-dimensional (2D) electric fields. We propose a gradient-based inverse-designed, static phase-mask doublet to generate arbitrary 2D intensity wavefronts using a one-dimensional (1D) intensity spatial light modulator (SLM). We numerically simulate the capability of mapping each point in a 49 element 1D array to a distinct
2D spatial distribution. Our proposed method will significantly relax the routing complexity of electrical control signals, possibly enabling high-speed, sub-wavelength 2D SLMs leveraging new materials and pixel architectures.
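Abstractly, the static doublet realizes a fixed linear map from the 49 SLM elements to 2D field patterns. In the sketch below, a random complex matrix stands in for the inverse-designed optics; the point is only that each 1D element addresses its own distinct 2D intensity distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elements, H, W = 49, 16, 16    # 1D SLM size and toy 2D target grid

# Each column is the complex 2D field produced by lighting one SLM element.
# In the paper this map is fixed by the phase-mask doublet; here it is a
# random stand-in used purely for illustration.
transfer = (rng.standard_normal((H * W, n_elements))
            + 1j * rng.standard_normal((H * W, n_elements)))

drive = np.zeros(n_elements)
drive[7] = 1.0                              # light only element 7
intensity = np.abs(transfer @ drive) ** 2   # detected intensity
pattern = intensity.reshape(H, W)           # its distinct 2D distribution
```

Because the map is static, all the speed and routing complexity lives in the 1D drive vector, which is the argument the abstract makes for simplified electrical control.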