Title: Plasmonic ommatidia for lensless compound-eye vision
Abstract

The vision system of arthropods such as insects and crustaceans is based on the compound-eye architecture, consisting of a dense array of individual imaging elements (ommatidia) pointing along different directions. This arrangement is particularly attractive for imaging applications requiring extreme size miniaturization, wide-angle fields of view, and high sensitivity to motion. However, the implementation of cameras directly mimicking the eyes of common arthropods is complicated by their curved geometry. Here, we describe a lensless planar architecture, where each pixel of a standard image-sensor array is coated with an ensemble of metallic plasmonic nanostructures that only transmits light incident along a small geometrically-tunable distribution of angles. A set of near-infrared devices providing directional photodetection peaked at different angles is designed, fabricated, and tested. Computational imaging techniques are then employed to demonstrate the ability of these devices to reconstruct high-quality images of relatively complex objects.
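The reconstruction step described above can be viewed as a linear inverse problem: each angle-sensitive pixel samples the scene's angular intensity distribution through its own transmission profile, and the object is recovered computationally from the full set of pixel readings. The following minimal sketch (not the authors' code) illustrates this idea with an assumed Gaussian angular response per pixel and a simple Tikhonov-regularized least-squares solver; all array sizes, angular widths, and noise levels are illustrative.

```python
# Minimal sketch (not the authors' code): image reconstruction from an array of
# angle-sensitive pixels, modeled as a linear inverse problem y = A @ x.
# The Gaussian angular responses and the Tikhonov solver are illustrative
# assumptions; the paper's actual forward model and reconstruction differ in detail.
import numpy as np

rng = np.random.default_rng(0)

n_angles = 200          # discretized object: intensity vs. incidence angle
n_pixels = 120          # pixels, each peaked at a different angle

angles = np.linspace(-60, 60, n_angles)          # degrees
peaks = np.linspace(-55, 55, n_pixels)           # per-pixel peak angles
width = 8.0                                      # assumed angular width (degrees)

# Forward model: each row is one pixel's angular sensitivity (assumed Gaussian).
A = np.exp(-0.5 * ((angles[None, :] - peaks[:, None]) / width) ** 2)

# Synthetic object and noisy measurements.
x_true = np.zeros(n_angles)
x_true[60:80] = 1.0
x_true[130:140] = 0.5
y = A @ x_true + 0.01 * rng.standard_normal(n_pixels)

# Tikhonov-regularized least squares: x_hat = argmin ||A x - y||^2 + lam ||x||^2.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_angles), A.T @ y)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In practice, the rows of A would come from calibration of the fabricated angle-sensitive pixels, and richer priors or iterative solvers can replace the closed-form Tikhonov step.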

 
Award ID(s): 1711156
NSF-PAR ID: 10153879
Author(s) / Creator(s): ; ; ; ; ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Nature Communications
Volume: 11
Issue: 1
ISSN: 2041-1723
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    The vision system of arthropods consists of a dense array of individual photodetecting elements distributed across a curvilinear surface. This compound-eye architecture is a useful model for optoelectronic sensing devices that require a large field of view and high sensitivity to motion. Strategies that mimic the compound-eye architecture typically integrate photodetector pixels with curved microlenses, but fabricating them on a curvilinear surface is difficult with standard microfabrication processes, which are designed for planar, rigid substrates (e.g., Si wafers). Here, a hemispherical photodetector array with a fractal web design is reported, in which an organic-dye-sensitized graphene hybrid composite serves as an effective photoactive component with enhanced light absorption. The device is first fabricated at the microscale on a planar Si wafer and then deterministically transferred onto transparent hemispherical domes of different curvatures. The fractal web design protects the device from damage by effectively tolerating various external loads. Comprehensive experimental and computational studies reveal the essential design features and optoelectronic properties of the device, and its utility is evaluated by measuring both the direction and intensity of incident light.
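    As a rough illustration of the direction-and-intensity measurement mentioned at the end of the abstract, the sketch below assumes an idealized cosine (Lambertian) angular response for each detector on the dome and recovers the source direction by least squares; the geometry, response model, and noise level are assumptions, not the reported device's calibrated behavior.

```python
# Illustrative sketch only: estimating the direction and intensity of a distant
# light source from a hemispherical array of detectors with an assumed cosine
# (Lambertian) angular response.
import numpy as np

rng = np.random.default_rng(1)

# Detector surface normals sampled on the upper hemisphere.
n_det = 64
phi = rng.uniform(0, 2 * np.pi, n_det)
theta = rng.uniform(0, np.pi / 2, n_det)
normals = np.stack([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], axis=1)

# True source direction (unit vector) and intensity.
s_true = np.array([0.3, -0.2, 0.93])
s_true /= np.linalg.norm(s_true)
I0 = 2.0

# Cosine-law readings with a little noise; detectors facing away read ~zero.
readings = I0 * np.clip(normals @ s_true, 0, None)
readings += 0.01 * rng.standard_normal(n_det)

# Least-squares estimate of the intensity-scaled direction from lit detectors.
lit = readings > 0.05
v, *_ = np.linalg.lstsq(normals[lit], readings[lit], rcond=None)
print("estimated intensity:", np.linalg.norm(v))
print("estimated direction:", v / np.linalg.norm(v))
```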

     
  2. BACKGROUND

    Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.

    ADVANCES

    Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.

    OUTLOOK

    As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the internet of things.

    Schematic of deep optical sensing. The n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛ^n, ℛ^m, and ℛ^n′.
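    The encode/decode picture described above can be made concrete with a small numerical sketch: an unknown property vector w is mapped to a photoresponse vector x by the sensor, and a decoder trained on calibration pairs recovers an estimate w′ ≈ w. In the sketch below the encoding is a random linear map and the decoder is ridge regression; both, along with the dimensions and noise level, are stand-in assumptions for a real reconfigurable sensor and a deep neural-network decoder.

```python
# A minimal numerical sketch of the "encode then decode" sensing scheme: an
# unknown property vector w (n-dim) is mapped by the sensor to a photoresponse
# vector x (m-dim), and a decoder trained on calibration data recovers w' ~ w.
# The random linear encoding E and the ridge-regression decoder D are
# illustrative assumptions, not a real device model.
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 32                       # dims of w and x (illustrative)

E = rng.standard_normal((m, n))    # sensor's encoding matrix (assumed linear)

def sense(w):
    """Map a property vector w to a noisy photoresponse vector x."""
    return E @ w + 0.01 * rng.standard_normal(m)

# Calibration: measure responses for many known inputs.
W_cal = rng.standard_normal((500, n))
X_cal = np.array([sense(w) for w in W_cal])

# Train a linear decoder D (ridge regression): w' = D @ x.
lam = 1e-3
D = np.linalg.solve(X_cal.T @ X_cal + lam * np.eye(m), X_cal.T @ W_cal).T

# Decode a new, unseen input.
w_new = rng.standard_normal(n)
w_rec = D @ sense(w_new)
print("reconstruction error:", np.linalg.norm(w_rec - w_new) / np.linalg.norm(w_new))
```

    For nonlinear or entangled mappings, the ridge decoder would be replaced by a trained neural network, which is the case the Review's "deep" terminology refers to.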
  3. Abstract

    Tectonic and seismogenic variations in subduction forearcs can be linked through various processes associated with subduction. Along the Cascadia forearc, significant variations between different geologic expressions of subduction appear to correlate, such as episodic tremor-and-slip (ETS) recurrence interval, intraslab seismicity, slab dip, uplift and exhumation rates, and topography, which allows for the systematic study of the plausible controlling mechanisms behind these variations. Even though the southern Cascadia forearc has the broadest topographic expression and shortest ETS recurrence intervals along the margin, it has been relatively underinstrumented with modern seismic equipment. Therefore, better seismic images are needed before robust comparisons with other portions of the forearc can be made. In March 2020, we deployed the Southern Cascadia Earthquake and Tectonics Array throughout the southern Cascadia forearc. This array consisted of 60 continuously recording three-component nodal seismometers with an average station spacing of ∼15 km, and stations recorded ∼38 days of data on average. We will analyze this newly collected nodal dataset to better image the structural characteristics and constrain the seismogenic behavior of the southern Cascadia forearc. The main goals of this project are to (1) constrain the precise location of the plate interface through seismic imaging and the analysis of seismicity, (2) characterize the lower crustal architecture of the overriding forearc crust to understand the role that this plays in enabling the high nonvolcanic tremor density and short episodic slow-slip recurrence intervals in the region, and (3) attempt to decouple the contributions of subduction versus San Andreas–related deformation to uplift along this particularly elevated portion of the Cascadia forearc. The results of this project will shed light on the controlling mechanisms behind heterogeneous ETS behavior and variable forearc surficial responses to subduction in Cascadia, with implications for other analogous subduction margins.
  4. Abstract

    Hemispherical image sensors simplify lens designs, reduce optical aberrations, and improve image resolution for compact wide-field-of-view cameras. Organic materials are promising candidates for hemispherical image sensors because of their tunable optoelectronic/spectral response and low-temperature, low-cost processing. Here, a photolithographic process is developed to prepare a hemispherical image sensor array based on organic thin-film photomemory transistors with a density of 308 pixels per square centimeter. This design uses a single photomemory transistor as each active pixel, in contrast to the conventional pixel architecture consisting of select/readout/reset transistors and a photodiode. The organic photomemory transistor, comprising a light-sensitive organic semiconductor and a charge-trapping dielectric, achieves a linear photoresponse over a light-intensity range of 1 to 50 W m⁻², along with a responsivity as high as 1.6 A W⁻¹ (wavelength = 465 nm) at a dark current density of 0.24 A m⁻² (drain voltage = −1.5 V). These values represent the best responsivity at comparable dark currents among the hemispherical image sensor arrays reported to date. A transfer method that does not damage the organic materials was further developed for hemispherical organic photomemory transistor arrays. These techniques are scalable and amenable to other high-resolution 3D organic semiconductor devices.
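    As a rough consistency check of the reported figures of merit (responsivity, dark current density, linear range, pixel density), the short calculation below estimates the photocurrent density and the photo-to-dark current ratio across the stated irradiance range; the per-pixel area and full fill factor are idealizing assumptions, not reported values.

```python
# A back-of-the-envelope check (assumptions flagged): with the reported
# responsivity and dark current density, estimate the photocurrent density and
# photo-to-dark current ratio across the stated linear irradiance range.
# The per-pixel area is derived from the reported 308 pixels/cm^2, assuming the
# full pixel area is photoactive, which is an idealization.
responsivity = 1.6            # A/W at 465 nm (reported)
dark_current_density = 0.24   # A/m^2 (reported)
pixel_density = 308           # pixels per cm^2 (reported)
pixel_area = 1e-4 / pixel_density   # m^2 per pixel, assuming full fill factor

for irradiance in (1.0, 50.0):                      # W/m^2, reported linear range
    photo_density = responsivity * irradiance       # A/m^2
    ratio = photo_density / dark_current_density
    per_pixel = photo_density * pixel_area          # A per pixel (idealized)
    print(f"{irradiance:5.1f} W/m^2 -> {photo_density:6.1f} A/m^2 "
          f"(photo/dark ~ {ratio:5.1f}, ~{per_pixel*1e6:.2f} uA/pixel)")
```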

     
  5. We report the development of a multichannel microscope for whole-slide multiplane, multispectral, and phase imaging. We use trinocular heads to split the beam path into 6 independent channels and employ a camera array for parallel data acquisition, achieving a maximum data throughput of approximately 1 gigapixel per second. To perform single-frame rapid autofocusing, we place 2 near-infrared light-emitting diodes (LEDs) at the back focal plane of the condenser lens to illuminate the sample from 2 different incident angles. A hot mirror directs the near-infrared light to an autofocusing camera. For multiplane whole-slide imaging (WSI), we acquire 6 different focal planes of a thick specimen simultaneously. For multispectral WSI, we relay the 6 independent image planes to the same focal position and simultaneously acquire information in 6 spectral bands. For whole-slide phase imaging, we acquire images at 3 focal positions simultaneously and use the transport-of-intensity equation to recover the phase information. We also provide an open-source design to further increase the number of channels from 6 to 15. The reported platform provides a simple solution for multiplexed fluorescence imaging and multimodal WSI. Acquiring an instant focal stack without z-scanning may also enable fast 3-dimensional dynamic tracking of various biological samples.
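    The phase-imaging mode relies on the transport-of-intensity equation (TIE), which relates the axial derivative of intensity to the transverse phase distribution. The sketch below shows a common FFT-based TIE solver under the uniform-intensity simplification ∇²φ = −(k/I₀) ∂I/∂z, applied to a three-plane focal stack; this simplification and all numerical parameters are assumptions and do not represent the authors' implementation.

```python
# A minimal FFT-based sketch of transport-of-intensity (TIE) phase retrieval from
# a three-plane focal stack. Uses the uniform-intensity simplification
# laplacian(phi) = -(k / I0) * dI/dz; parameters below are placeholders.
import numpy as np

def tie_phase(I_minus, I_0, I_plus, dz, wavelength, pixel_size, eps=1e-6):
    """Recover phase from intensities at z = -dz, 0, +dz (uniform-intensity TIE)."""
    k = 2 * np.pi / wavelength
    dI_dz = (I_plus - I_minus) / (2 * dz)           # axial intensity derivative

    ny, nx = I_0.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    lap = -4 * np.pi ** 2 * (FX ** 2 + FY ** 2)     # Fourier symbol of the Laplacian

    rhs = -k * dI_dz / np.maximum(I_0.mean(), eps)  # uniform-intensity approximation
    lap_safe = np.where(np.abs(lap) > eps, lap, 1.0)
    phi_hat = np.fft.fft2(rhs) / lap_safe
    phi_hat[np.abs(lap) <= eps] = 0.0               # undetermined DC term set to zero
    return np.real(np.fft.ifft2(phi_hat))

# Hypothetical usage with three registered focal-stack frames (values are placeholders):
# phi = tie_phase(I_minus, I_0, I_plus, dz=2e-6, wavelength=520e-9, pixel_size=0.35e-6)
```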

     