Title: Optical spatial filtering with plasmonic directional image sensors

Photonics provides a promising approach for image processing by spatial filtering, with the advantage of faster speeds and lower power consumption compared to electronic digital solutions. However, traditional optical spatial filters suffer from bulky form factors that limit their portability. Here we present a new approach based on pixel arrays of plasmonic directional image sensors, designed to selectively detect light incident along a small, geometrically tunable set of directions. The resulting imaging systems can function as optical spatial filters without any external filtering elements, leading to extreme size miniaturization. Furthermore, they offer the distinct capability to perform multiple filtering operations at the same time, through the use of sensor arrays partitioned into blocks of adjacent pixels with different angular responses. To establish the image processing capabilities of these devices, we present a rigorous theoretical model of their filter transfer function under both coherent and incoherent illumination. Next, we use the measured angle-resolved responsivity of prototype devices to demonstrate two examples of relevant functionalities: (1) the visualization of otherwise invisible phase objects and (2) spatial differentiation with incoherent light. These results are significant for a multitude of imaging applications ranging from microscopy in biomedicine to object recognition for computer vision.
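To make the filtering operation concrete, here is a minimal numerical sketch (not the authors' code) of coherent spatial filtering by an angle-selective pixel response: a hypothetical Gaussian-damped high-pass transfer function, standing in for a measured angular responsivity, is applied in the Fourier domain to a phase-only object and converts an otherwise invisible phase step into an intensity edge.

```python
import numpy as np

# Minimal sketch, assuming a hypothetical Gaussian-damped high-pass angular response
# H(kx, ky); this is NOT the measured responsivity of the fabricated sensors.
def angular_highpass(shape, cutoff=0.05):
    """Suppress near-normal incidence (low spatial frequencies), pass oblique ones."""
    ky, kx = np.meshgrid(np.fft.fftfreq(shape[0]), np.fft.fftfreq(shape[1]),
                         indexing="ij")
    k = np.hypot(kx, ky)
    return 1.0 - np.exp(-(k / cutoff) ** 2)

# A phase-only object: uniform amplitude with a phase step, invisible to a
# conventional intensity detector under coherent illumination.
n = 256
phase = np.zeros((n, n))
phase[:, n // 2:] = 0.5          # phase step of 0.5 rad
field = np.exp(1j * phase)

# Coherent spatial filtering: multiply the field spectrum by the transfer function.
H = angular_highpass(field.shape)
filtered = np.fft.ifft2(np.fft.fft2(field) * H)

print("intensity contrast without filter:", (np.abs(field) ** 2).std())
print("intensity contrast with angular filter:", (np.abs(filtered) ** 2).std())
```

Swapping the hypothetical transfer function for a device's measured angle-resolved responsivity would correspond to the coherent filtering case analyzed in the paper.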

 
Award ID(s): 1711156
PAR ID: 10375475
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optics Express
Volume: 30
Issue: 16
ISSN: 1094-4087; OPEXFF
Format(s): Medium: X
Size(s): Article No. 29074
Sponsoring Org: National Science Foundation
More Like this
  1. Time-frequency (TF) filtering of analog signals has played a crucial role in the development of radio-frequency communications and is now recognized as an essential capability for both classical and quantum communications in the optical frequency domain. How best to design optical TF filters to pass a targeted temporal mode (TM) while rejecting background (noise) photons in the TF detection window? The solution for ‘coherent’ TF filtering is known (the quantum pulse gate), whereas the conventional, more common method is implemented by a sequence of incoherent spectral filtering and temporal gating operations. To compare these two methods, we derive a general formalism for two-stage incoherent time-frequency filtering, finding expressions for the signal pulse transmission efficiency and for the ability to discriminate TMs, which allows unwanted background light to be blocked. We derive the tradeoff between efficiency and TM discrimination ability and find a remarkably concise relation between these two quantities and the time-bandwidth product of the combined filters. We apply the formalism to two examples, rectangular and Gaussian filters, both of which have known orthogonal-function decompositions. The formalism can be applied to any state of light occupying the input temporal mode, e.g., ‘classical’ coherent-state signals or pulsed single-photon states of light. In contrast to the radio-frequency domain, where coherent detection is standard and coherent matched filtering can be used to reject noise, in the optical domain direct detection is optimal in a number of scenarios where the signal flux is extremely small. Our analysis shows how the insertion loss and signal-to-noise ratio change when incoherent optical filters are used to reject background noise, followed by direct detection, e.g., photon counting. We point out implications for classical and quantum optical communications. As an example, we study quantum key distribution, where strong rejection of background noise is necessary to maintain a high quality of entanglement, while high signal transmission is needed to ensure a useful key generation rate.
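As a rough numerical illustration of the efficiency-versus-discrimination tradeoff described in this abstract, the sketch below (not taken from the paper; mode and filter widths are arbitrary assumptions) passes Hermite-Gaussian temporal modes through a Gaussian spectral filter followed by a Gaussian temporal gate and compares the transmitted energy of the target mode with the leakage of a background mode.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Illustrative sketch, not the paper's calculation: two-stage incoherent TF filtering
# (Gaussian spectral filter, then Gaussian temporal gate) applied to Hermite-Gaussian
# temporal modes. The filter widths below are arbitrary assumptions.

t = np.linspace(-20.0, 20.0, 4096)
dt = t[1] - t[0]
f = np.fft.fftfreq(t.size, d=dt)

def hg_mode(order, t0=1.0):
    """Normalized Hermite-Gaussian temporal mode of width t0."""
    c = np.zeros(order + 1)
    c[order] = 1.0
    psi = hermval(t / t0, c) * np.exp(-(t / t0) ** 2 / 2)
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dt)

def two_stage_filter(psi, spectral_width, gate_width):
    """Spectral filtering followed by temporal gating (both Gaussian in amplitude)."""
    spectrum = np.fft.fft(psi) * np.exp(-(f / spectral_width) ** 2 / 2)
    return np.fft.ifft(spectrum) * np.exp(-(t / gate_width) ** 2 / 2)

signal, background = hg_mode(0), hg_mode(1)   # target TM and one unwanted TM
for sw, gw in [(0.1, 1.5), (0.3, 4.0), (1.0, 12.0)]:
    eta_s = np.sum(np.abs(two_stage_filter(signal, sw, gw)) ** 2) * dt
    eta_b = np.sum(np.abs(two_stage_filter(background, sw, gw)) ** 2) * dt
    print(f"spectral width {sw}, gate width {gw}: "
          f"signal transmission {eta_s:.3f}, background leakage {eta_b:.3f}")
```

Tighter filters suppress more of the unwanted mode but also cut into the target mode's transmission, which is the tradeoff the abstract quantifies through the combined time-bandwidth product.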

     
  2. Angle-sensitive photodetectors are a promising device technology for many advanced imaging functionalities, including lensless compound-eye vision, lightfield sensing, optical spatial filtering, and phase imaging. Here we demonstrate the use of plasmonic gradient metasurfaces to tailor the angular response of generic planar photodetectors. The resulting devices rely on the phase-matched coupling of light incident at select geometrically tunable angles into guided plasmonic modes, which are then scattered and absorbed in the underlying photodetector active layer. This approach naturally introduces sharp peaks in the angular response, with smaller footprint and reduced guided-mode radiative losses (and therefore improved spatial resolution and sensitivity) compared to analogous devices based on diffractive coupling. More broadly, these results highlight a promising new application space of flat optics, where gradient metasurfaces are integrated within image sensors to enable unconventional capabilities with enhanced system miniaturization and design flexibility.
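A back-of-the-envelope sketch of the phase-matching picture behind this angle selectivity (material values and metasurface period below are assumptions, not the fabricated design): the periodic phase gradient supplies the in-plane momentum needed for light incident at a discrete set of angles to reach the surface-plasmon propagation constant.

```python
import numpy as np

# Illustrative phase-matching condition for grating-assisted coupling into a
# surface-plasmon mode of effective index n_spp:
#   k0*sin(theta) + m*(2*pi/Lambda) = s*k0*n_spp,   s = +/-1 (forward/backward mode)
# All numerical values below are assumptions chosen only to illustrate the idea.

wavelength = 1.55e-6                          # free-space wavelength [m], assumed
eps_metal, eps_diel = -116 + 11j, 1.0         # rough gold / air permittivities near 1550 nm
n_spp = np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel)).real
Lambda = 1.2e-6                               # metasurface period [m], assumed

for m in (-2, -1, 1, 2):
    for s in (+1, -1):
        sin_theta = s * n_spp - m * wavelength / Lambda
        if abs(sin_theta) <= 1:
            theta = np.degrees(np.arcsin(sin_theta))
            print(f"order m={m:+d}, branch s={s:+d}: coupling angle {theta:+.1f} deg")
```

Only a small set of angles satisfies the condition, which is why the angular response shows sharp, geometrically tunable peaks; changing the period shifts the coupling angles.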

     
  3. We use specially designed plasmonic photodetectors to develop a new method for image differentiation that can produce edge-enhanced images without external optical elements and under incoherent illumination, unlike traditional optical spatial filters. 
  4. Active imaging and structured illumination originated in “bulk” optical systems: free-space beams controlled with lenses, spatial light modulators, gratings, and mirrors to structure the optical diffraction and direct the beams onto the target. Recently, optical phased arrays have been developed with the goal of replacing traditional bulk active imaging systems with integrated optical systems. In this paper, we demonstrate the first array of optical phased arrays forming a composite aperture. This composite aperture is used to implement a Fourier-based structured-illumination imaging system, where moving fringe patterns are projected on a target and a single integrating detector is used to reconstruct the spatial structure of the target from the time variation of the back-scattered light. We experimentally demonstrate proof-of-concept Fourier-basis imaging in 1D using a six-element array of optical phased arrays, which interfere pairwise to sample up to 11 different spatial Fourier components, and reconstruct a 1D delta-function target. This concept addresses a key complexity constraint in scaling up integrated photonic apertures by requiring only N elements in a sparse array to produce an image with N² resolvable spots.
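The scaling argument can be made concrete with a short sketch (element positions below are assumptions, not the demonstrated layout): each pair of elements with a distinct separation samples one spatial Fourier component of the target, so N elements in a well-chosen sparse array can cover on the order of N(N-1)/2 fringe frequencies, and summing the measured components recovers a point target.

```python
import numpy as np
from itertools import combinations

# Illustrative sketch: distinct pairwise separations (baselines) of a sparse array of
# optical phased arrays, each sampling one spatial Fourier component of the target.
# The positions below are assumptions chosen only for illustration.

positions = [0, 1, 4, 10, 12, 17]                       # element positions (units of pitch)
baselines = sorted({abs(a - b) for a, b in combinations(positions, 2)})
print(f"{len(positions)} elements -> {len(baselines)} distinct baselines: {baselines}")

# Toy 1D reconstruction of a delta-function target at x0 from the sampled components:
# the measured Fourier coefficient at baseline b is exp(-2j*pi*b*x0).
x0 = 0.12
x = np.linspace(-0.5, 0.5, 1001)
image = np.ones_like(x)                                 # DC term
for b in baselines:
    coeff = np.exp(-2j * np.pi * b * x0)                # simulated fringe measurement
    image += 2 * np.real(coeff * np.exp(2j * np.pi * b * x))
print(f"true position {x0:.3f}, reconstructed peak at {x[np.argmax(image)]:.3f}")
```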

     
  5. Most modern commodity imaging systems we use directly for photography—or indirectly rely on for downstream applications—employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today’s compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss’ linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting, together with hardware ISP post-processing and detection, outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design and fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
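A toy sketch of the alternating end-to-end idea (purely illustrative; the paper's differentiable forward model covers compound lenses, a hardware ISP, and neural networks): here the "optic" is a single Gaussian blur width, the "processing" is a least-squares linear deconvolution kernel, and both are chosen against one end-to-end reconstruction loss.

```python
import numpy as np

# Illustrative alternating optimization: refit the processing for the current optic,
# then nudge the optical parameter against the same end-to-end loss. This is only a
# 1D stand-in for the compound-lens / ISP / neural-network pipeline in the paper.

rng = np.random.default_rng(0)
scene = rng.standard_normal(512)                      # synthetic 1D ground truth

def blur(signal, sigma, half=10):
    """Toy 'optic': Gaussian blur of width sigma."""
    t = np.arange(-half, half + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return np.convolve(signal, k / k.sum(), mode="same")

def fit_processing(measured, target, taps=21):
    """Toy 'ISP': best least-squares FIR kernel mapping measured -> target."""
    cols = [np.roll(measured, s) for s in range(-(taps // 2), taps // 2 + 1)]
    A = np.stack(cols, axis=1)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ w

def end_to_end_loss(sigma):
    measured = blur(scene, sigma)
    restored = fit_processing(measured, scene)        # processing refit per optic
    return np.mean((restored - scene) ** 2)

sigma = 4.0                                            # start from a deliberately poor optic
for _ in range(50):
    grad = (end_to_end_loss(sigma + 1e-3) - end_to_end_loss(sigma - 1e-3)) / 2e-3
    sigma = max(0.5, sigma - 2.0 * grad)               # finite-difference outer update

print(f"optimized blur width sigma = {sigma:.2f}, end-to-end loss = {end_to_end_loss(sigma):.4f}")
```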