Miniature lenses with a tunable focus are essential components of many modern compact optical systems. Although several tunable lenses with various tuning mechanisms have been reported, they often face challenges in power consumption, tuning speed, fabrication cost, or production scalability. In this work, we adapted the mechanism of an Alvarez lens – a varifocal composite lens in which lateral shifts of two optical elements with cubic phase surfaces change the optical power – to construct a miniature, microelectromechanical system (MEMS)-actuated metasurface Alvarez lens. The electrostatic MEMS implementation provides fast, controllable actuation with low power consumption. Using metasurfaces – ultrathin, subwavelength-patterned diffractive optics – as the optical elements greatly reduces the device volume compared with systems based on conventional freeform lenses. The entire MEMS Alvarez metalens is fully compatible with modern semiconductor fabrication technologies, giving it the potential to be mass-produced at low unit cost. In the reported prototype, operating at a wavelength of 1550 nm, a total uniaxial displacement of 6.3 µm was achieved with an applied direct-current (DC) voltage of up to 20 V, which modulated the focal position over a total tuning range of 68 µm – more than an order-of-magnitude change in focal length and a 1460-diopter change in optical power. The robust design can potentially deliver a much larger tuning range without substantially increasing the device volume or energy consumption, making the MEMS Alvarez metalens desirable for a wide range of imaging and display applications.
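The Alvarez mechanism described above can be verified numerically: combining two laterally shifted cubic phase profiles yields a purely quadratic (lens-like) phase whose focal length scales inversely with the shift. The sketch below uses a hypothetical cubic coefficient `A` and aperture size (only the 1550 nm wavelength comes from the text), so the absolute focal lengths are illustrative.

```python
import numpy as np

# Alvarez-lens mechanism sketch. A is a hypothetical cubic-phase
# coefficient, not a value from the paper; lam is the paper's wavelength.
A = 1.0e12           # cubic phase coefficient, rad/m^3 (hypothetical)
lam = 1.55e-6        # operating wavelength, m (1550 nm)

def cubic_phase(x, y):
    """Cubic phase profile of one Alvarez element: A * (x^3/3 + x*y^2)."""
    return A * (x**3 / 3.0 + x * y**2)

def combined_focal_length(d):
    """Focal length produced by shifting the two elements by +/-d along x.

    The summed phase A*[(x+d)^3/3 + (x+d)y^2] - A*[(x-d)^3/3 + (x-d)y^2]
    reduces to 2*A*d*(x^2 + y^2) + const, i.e. a thin-lens phase; matching
    it to pi*r^2/(lam*f) gives |f| = pi / (2*A*lam*d), so f scales as 1/d.
    """
    x = np.linspace(-5e-4, 5e-4, 201)          # hypothetical 1 mm aperture
    xx, yy = np.meshgrid(x, x)
    phi = cubic_phase(xx + d, yy) - cubic_phase(xx - d, yy)
    # Recover the quadratic coefficient c in phi = c*(x^2 + y^2) + const.
    r2 = (xx**2 + yy**2).ravel()
    c = np.polyfit(r2, phi.ravel(), 1)[0]
    return np.pi / (lam * c)

f1 = combined_focal_length(3.0e-6)   # 3 um lateral shift
f2 = combined_focal_length(6.0e-6)   # doubling the shift halves f
```

Because the combined phase is exactly quadratic in r, the linear fit recovers the lens power directly, and doubling the displacement doubles the optical power, consistent with the large diopter change the abstract reports for a few micrometers of travel.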
This content will become publicly available on August 1, 2024
- Journal Name: ACM Transactions on Graphics
- Page Range / eLocation ID: 1 to 18
- Sponsoring Org: National Science Foundation
More Like this
Flat lenses with a tunable focal length can enable highly integrated imaging systems. This work uses machine learning to inverse-design a multifocal multilevel diffractive lens (MMDL) by wavelength multiplexing. The MMDL output is multiplexed across three color channels – red (650 nm), green (550 nm), and blue (450 nm) – to achieve focal lengths of 4 mm, 20 mm, and 40 mm at these channels, respectively. The focal lengths of the MMDL scale significantly with wavelength, in contrast to conventional diffractive lenses. The MMDL consists of concentric rings of equal width and varied height; the machine learning method optimizes the height of each ring to obtain the phase distribution that yields the wavelength-multiplexed focal lengths. The designed MMDL was fabricated with a direct-write laser lithography system using gray-scale exposure. The demonstrated singlet lens is miniature and polarization-insensitive, and can thus potentially be applied in integrated optical imaging systems to achieve zoom functions.
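The per-ring height optimization can be illustrated with a much simpler stand-in for the paper's machine-learning method: a greedy search over discrete height levels that minimizes the combined phase error at the three wavelengths. All parameters below (refractive index, ring width, ring count, height range) are hypothetical; only the wavelengths and target focal lengths come from the text.

```python
import numpy as np

# Toy MMDL design: choose each ring's height to best match the target lens
# phase at three wavelengths simultaneously (greedy search, not the paper's
# ML method; material and geometry values are illustrative).
n_index = 1.5                                  # refractive index (no dispersion)
waves   = np.array([450e-9, 550e-9, 650e-9])   # blue, green, red
focals  = np.array([40e-3, 20e-3, 4e-3])       # target focal length per color
ring_w  = 1e-6                                 # ring width (hypothetical)
n_rings = 500                                  # number of rings (hypothetical)
levels  = np.linspace(0, 3e-6, 256)            # candidate heights, 0-3 um

def target_phase(r, lam, f):
    """Ideal lens phase (mod 2*pi) at radius r for wavelength lam, focus f."""
    return (2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2)) % (2 * np.pi)

heights = np.empty(n_rings)
for i in range(n_rings):
    r = (i + 0.5) * ring_w
    # Phase each candidate height imparts at every wavelength: (256, 3).
    phi = (2 * np.pi * (n_index - 1) * levels[:, None]) / waves[None, :]
    phi %= 2 * np.pi
    tgt = np.array([target_phase(r, lam, f) for lam, f in zip(waves, focals)])
    # Circular phase error summed over the three colors; pick the best level.
    err = np.minimum(np.abs(phi - tgt), 2 * np.pi - np.abs(phi - tgt)).sum(axis=1)
    heights[i] = levels[np.argmin(err)]
```

A learned model can outperform this greedy per-ring pick by accounting for diffraction between rings, but the sketch shows why a single height profile can serve three focal lengths: each height maps to three different phases, one per wavelength.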
Many hardware approaches have been developed for implementing hyperspectral imaging on fluorescence microscope systems, each with tradeoffs in spectral sensitivity and in spectral, spatial, and temporal sampling. For example, tunable filter-based systems typically have limited wavelength-switching speeds and sensitivities that preclude high-speed spectral imaging. Here, we present a novel approach that combines multiple illumination wavelengths from solid-state LEDs in a two-mirror configuration similar to a Cassegrain reflector assembly. This approach provides spectral discrimination by scanning a range of fluorescence excitation wavelengths, which we have previously shown can improve spectral image acquisition time compared to traditional fluorescence emission-scanning hyperspectral imaging. In this work, the geometry of the LED and other optical components was optimized. A model of the spectral illuminator was built in TracePro ray-tracing software (Lambda Research Corp.), comprising an emitter, lens, spherical mirror, flat mirror, and liquid light guide input. A parametric sensitivity study was performed to optimize the optical throughput by varying the LED viewing angle, the properties of the spherical reflectors, and the lens configuration, focal length, and position. The LED viewing angle, lens position, and lens focal length significantly affected the optical throughput. Several configurations were evaluated, and an optimized lens and LED position were determined. Initial optimization results indicate that 10% optical transmission can be achieved for either a 16- or 32-wavelength system. Future work includes further optimization of the ray-trace model, prototyping, and experimental testing of the optimized configuration.
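The sensitivity of throughput to lens position can be seen in a toy geometric model (not the TracePro model above): for a Lambertian emitter such as a bare LED, the fraction of emitted power captured within a half-angle θ is sin²θ, so the collected fraction rises steeply as the lens is moved closer or made larger. The lens diameter and distances below are illustrative.

```python
import math

# Toy LED-to-lens coupling model: fraction of a Lambertian emitter's
# output collected by a lens of diameter D at distance z is sin^2(theta),
# where theta = atan(D / (2*z)). Values are illustrative, not from the
# paper's ray-trace model.
def lambertian_capture(D, z):
    theta = math.atan(D / (2.0 * z))
    return math.sin(theta) ** 2

# Sweep the lens distance for a hypothetical 12.7 mm (half-inch) lens.
fracs = {z_mm: lambertian_capture(12.7e-3, z_mm * 1e-3)
         for z_mm in (5.0, 10.0, 20.0)}
```

This first-order model ignores the mirrors and the light-guide acceptance, but it reproduces the qualitative finding of the parametric study: lens position (and effective collection angle) dominates the throughput budget.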
A lens performs an approximately one-to-one mapping from the object plane to the image plane. This mapping is maintained only within a depth of field (or depth of focus, if the object is at infinity), which necessitates refocusing when object distances differ by more than the depth of field. Such refocusing mechanisms can increase the cost, complexity, and weight of imaging systems. Here we show that by judicious design of a multi-level diffractive lens (MDL) it is possible to enhance the depth of focus by over four orders of magnitude. Using such a lens, we were able to maintain focus for objects separated by distances as large as the 5 mm to 1200 mm range covered in our experiments. Specifically, when illuminated by collimated light, the MDL produced a beam that remained in focus from 5 mm to 1200 mm. The measured full width at half-maximum of the focused beam varied from 6.6 µm (5 mm from the MDL) to 524 µm (1200 mm from the MDL). Since the side lobes were well suppressed and the main lobe was close to the diffraction limit, imaging with the full horizontal × vertical field of view was possible over the entire focal range. This demonstration opens a new direction for lens design: by treating the phase in the focal plane as a free parameter, extreme-depth-of-focus imaging becomes possible.
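The claimed depth-of-focus enhancement can be sanity-checked with a back-of-envelope calculation. The spot FWHM (6.6 µm) and in-focus range (5 mm to 1200 mm) come from the text; the operating wavelength is not stated in this excerpt, so a hypothetical near-infrared value is assumed below.

```python
import math

# Order-of-magnitude check of the extreme depth-of-focus claim.
lam = 0.85e-6             # assumed wavelength (hypothetical; not in the text)
fwhm = 6.6e-6             # measured focal-spot FWHM near the MDL (from text)
na = lam / (2.0 * fwhm)   # NA of a diffraction-limited spot of that FWHM
dof_classical = lam / na**2        # classical depth of focus ~ lam / NA^2
dof_measured = 1.200 - 0.005       # in-focus range 5 mm to 1200 mm, in metres
enhancement = dof_measured / dof_classical
```

With these assumptions the classical depth of focus is a fraction of a millimetre, while the measured in-focus range is over a metre, i.e. an enhancement of several thousand-fold; the exact order of magnitude depends on the wavelength and on which classical DOF convention is used.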
A conventional optical lens can enhance lateral resolution in optical coherence tomography (OCT) by focusing the input light onto the sample. However, the typically Gaussian beam profile of such a lens imposes a tradeoff between the depth of focus (DOF) and the lateral resolution, and the lateral resolution is often compromised to achieve a mm-scale DOF. We have experimentally shown that a cascade of an ultrasonic virtual tunable optical waveguide (UVTOW) and a short-focal-length lens can provide a large DOF without severely compromising the lateral resolution, compared to an external lens with the same effective focal length. In addition, leveraging the reconfigurability of UVTOW, we show that the focal length of the cascade system can be tuned without mechanical translation of the optical lens. We compare the performance of the cascade system with that of a conventional optical lens to demonstrate the enhanced DOF without compromised lateral resolution, as well as the reconfigurability of UVTOW, for OCT imaging.
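The Gaussian-beam tradeoff underlying this abstract is quantitative: the DOF equals twice the Rayleigh range, 2πw₀²/λ, while the lateral resolution scales with the waist w₀, so halving the spot size quarters the DOF. The wavelength below is illustrative (1.3 µm, a common OCT band, not stated in the text).

```python
import math

# Gaussian-beam DOF vs lateral resolution: DOF = 2*z_R = 2*pi*w0^2 / lam.
# Wavelength is an assumed, typical OCT value.
lam = 1.3e-6

def dof(w0):
    """Depth of focus (twice the Rayleigh range) for beam waist w0."""
    return 2.0 * math.pi * w0**2 / lam

d1 = dof(10e-6)   # 10 um waist
d2 = dof(5e-6)    # twice the lateral resolution, one quarter the DOF
```

This quadratic penalty is why a conventional lens must sacrifice lateral resolution for a mm-scale DOF, and why a waveguide-like element such as UVTOW, which decouples the two, is attractive.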