
Title: Programmable black phosphorus image sensor for broadband optoelectronic edge computing
Abstract

Image sensors with internal computing capability enable in-sensor computing, which can significantly reduce the communication latency and power consumption of machine vision in distributed systems and robotics. Two-dimensional semiconductors have many advantages for realizing such intelligent vision sensors because of their tunable electrical and optical properties and their amenability to heterogeneous integration. Here, we report a multifunctional infrared image sensor based on an array of black phosphorus programmable phototransistors (bP-PPT). By controlling the charges stored in the gate dielectric layers electrically and optically, the bP-PPT's electrical conductance and photoresponsivity can be locally or remotely programmed with 5-bit precision to implement an in-sensor convolutional neural network (CNN). The sensor array can receive optical images transmitted over a broad infrared spectral range and perform inference computation to process and recognize the images with 92% accuracy. The demonstrated bP image sensor array can be scaled up to build more complex vision-sensory neural networks, which will find many promising applications in distributed and remote multispectral sensing.
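
A minimal sketch of the in-sensor convolution described above, assuming a linear photoresponse I = R·P: kernel weights are quantized to 32 levels (5-bit) and mapped onto pixel photoresponsivities, so that summing photocurrents performs one multiply-accumulate of the CNN's first layer. The array sizes, the kernel, and the quantization mapping are illustrative assumptions, not details from the paper.

    import numpy as np

    def quantize_5bit(weights):
        """Snap kernel weights onto 32 discrete responsivity levels (5-bit)."""
        w_min, w_max = weights.min(), weights.max()
        levels = np.round((weights - w_min) / (w_max - w_min) * 31)
        return w_min + levels / 31 * (w_max - w_min)

    def in_sensor_conv(image, kernel):
        # Each pixel's photocurrent is responsivity * optical power; summing
        # the currents of a subarray performs one multiply-accumulate.
        k = quantize_5bit(kernel)
        kh, kw = k.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + kh, j:j + kw]   # optical power P per pixel
                out[i, j] = np.sum(k * patch)       # summed photocurrents
        return out

    img = np.random.rand(8, 8)                # toy infrared intensity map
    kernel = np.array([[1., 0., -1.]] * 3)    # hypothetical edge-detection kernel
    print(in_sensor_conv(img, kernel).shape)  # (6, 6)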

Authors:
Award ID(s):
2025489 1719797
Publication Date:
NSF-PAR ID:
10364090
Journal Name:
Nature Communications
Volume:
13
Issue:
1
ISSN:
2041-1723
Publisher:
Nature Publishing Group
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Objective. Patients with the photovoltaic subretinal implant PRIMA demonstrated letter acuity ∼0.1 logMAR worse than the sampling limit for 100 μm pixels (1.3 logMAR) and performed slower than healthy subjects tested with equivalently pixelated images. To explore the underlying differences between natural and prosthetic vision, we compare the fidelity of retinal responses to visual and subretinal electrical stimulation through single-cell modeling and ensemble decoding. Approach. Responses of retinal ganglion cells (RGCs) to optical or electrical white-noise stimulation in healthy and degenerate rat retinas were recorded via a multi-electrode array. Each RGC was fit with linear–nonlinear and convolutional neural network models. To characterize RGC noise, we compared statistics of spike-triggered averages (STAs) in RGCs responding to electrical or visual stimulation of healthy and degenerate retinas. At the population level, we constructed a linear decoder to determine the accuracy of the ensemble of RGCs on N-way discrimination tasks. Main results. Although computational models can match natural visual responses well (correlation ∼0.6), they fit the spike timings elicited by electrical stimulation of the healthy retina significantly worse (correlation ∼0.15), and responses to electrical stimulation of the degenerate retina fit equally poorly. The signal-to-noise ratio of electrical STAs in degenerate retinas matched that of the natural responses when 78 ± 6.5% of the spikes were replaced with random timing. However, the noise in RGC responses contributed minimally to errors in ensemble decoding; the determining factor in decoding accuracy was the number of responding cells. To compensate for fewer responding cells under electrical stimulation than in natural vision, more presentations of the same stimulus are required to deliver sufficient information for image decoding. Significance. Slower-than-natural pattern identification by patients with the PRIMA implant may be explained by the smaller number of electrically activated cells than in natural vision, which is compensated by a larger number of stimulus presentations.
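
    As a concrete illustration of the spike-triggered average (STA) used above, the sketch below recovers a model RGC's linear filter from white-noise stimulation. The stimulus dimensions, Poisson spiking, and filter shape are toy assumptions for illustration only, not the study's recordings.

        import numpy as np

        rng = np.random.default_rng(0)
        T, D = 10_000, 20                          # time bins, stimulus dimensions
        stimulus = rng.standard_normal((T, D))     # white-noise stimulus frames
        filt = np.sin(np.linspace(0, np.pi, D))    # hidden linear filter of a model RGC
        rate = np.clip(stimulus @ filt, 0, None)   # rectified linear drive
        spikes = rng.poisson(0.1 * rate)           # spike counts per bin

        # STA: spike-weighted average of the stimulus
        sta = (spikes[:, None] * stimulus).sum(axis=0) / spikes.sum()
        print(np.corrcoef(sta, filt)[0, 1])        # recovers the filter up to noise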

  2. Abstract

    As machine vision technology generates large amounts of sensor data, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a potential solution for reducing unnecessary data transfer and realizing fast, energy-efficient visual cognitive processing. However, they still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated one-photodiode–one-memristor (1P-1R) crossbar for in-sensor visual cognitive processing, emulating a mammalian image-encoding process to extract features from input images. Unlike other neuromorphic vision approaches, the trained weight values are applied as input voltages to the image-saved crossbar array instead of being stored in the memristors, realizing the in-sensor computing paradigm. We believe this heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time, data-intensive machine-vision applications via bio-stimulus domain reduction.
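
    A minimal model of the 1P-1R readout described above: with the image stored as memristor conductances G and the trained weights applied as row voltages V, each column current is the dot product sum_i V_i * G_ij by Ohm's and Kirchhoff's laws. All shapes and values below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_rows, n_cols = 16, 4
        G = rng.random((n_rows, n_cols)) * 1e-6   # stored image as conductances (S)
        V = rng.standard_normal(n_rows) * 0.1     # trained weights as row voltages (V)

        I_out = V @ G   # column currents = weighted sums (A), computed in-sensor
        print(I_out)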

  3. Abstract

    Wavefront sensing is the simultaneous measurement of the amplitude and phase of an incoming optical field. Traditional wavefront sensors such as the Shack–Hartmann wavefront sensor (SHWFS) suffer from a fundamental tradeoff between spatial resolution and phase estimation accuracy and consequently can only achieve a resolution of a few thousand pixels. To break this tradeoff, we present a novel computational-imaging-based technique, the Wavefront Imaging Sensor with High resolution (WISH). We replace the microlens array in the SHWFS with a spatial light modulator (SLM) and use a computational phase-retrieval algorithm to recover the incident wavefront. This wavefront sensor can measure highly varying optical fields at more than 10-megapixel resolution with fine phase estimation. To the best of our knowledge, this resolution is an order of magnitude higher than that of current non-interferometric wavefront sensors. To demonstrate the capability of WISH, we present three applications spanning a wide range of spatial scales. First, we produce diffraction-limited reconstructions for long-distance imaging by combining WISH with a large-aperture, low-quality Fresnel lens. Second, we show the recovery of high-resolution images of objects obscured by scattering. Third, we show that WISH can be used as a microscope without an objective lens. Our study suggests that the design principle of WISH, which combines optical modulators and computational algorithms to sense high-resolution optical fields, enables improved capabilities in many existing applications while revealing entirely new, hitherto unexplored application areas.
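
    The sketch below illustrates the general phase-retrieval principle behind WISH: record intensity-only measurements under several known SLM phase masks, then recover the complex field by alternating projections (Gerchberg–Saxton style). The far-field FFT propagator and the averaging update are simplifying assumptions, not the paper's exact algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        N, K = 64, 8                                               # field size, SLM masks
        field = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))     # unknown field
        masks = np.exp(1j * rng.uniform(0, 2 * np.pi, (K, N, N)))  # known SLM masks
        meas = np.abs(np.fft.fft2(field * masks))   # sensor records magnitudes only

        est = np.ones((N, N), dtype=complex)        # initial guess of the field
        for _ in range(200):
            updates = []
            for k in range(K):
                far = np.fft.fft2(est * masks[k])
                far = meas[k] * np.exp(1j * np.angle(far))    # enforce measured magnitude
                updates.append(np.fft.ifft2(far) / masks[k])  # undo propagation and mask
            est = np.mean(updates, axis=0)

        # normalized correlation with the true field approaches 1 on convergence
        print(np.abs(np.vdot(est, field)) / (np.linalg.norm(est) * np.linalg.norm(field)))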

  4. Abstract

    Conventional imaging and recognition systems require extensive data storage, pre-processing, and chip-to-chip communication, as well as aberration-proof light focusing with multiple lenses, to recognize an object from massive optical inputs. This is because separate chips (i.e., a flat image sensor array, a memory device, and a CPU), in conjunction with complicated optics, must capture, store, and process the image information independently. In contrast, human vision employs a highly efficient imaging and recognition process. Here, inspired by the human visual recognition system, we present a novel imaging device for efficient image acquisition and data pre-processing that confers a neuromorphic data-processing function on a curved image sensor array. The curved neuromorphic image sensor array is based on a heterostructure of MoS2 and poly(1,3,5-trimethyl-1,3,5-trivinyl cyclotrisiloxane). It features photon-triggered synaptic plasticity owing to its quasi-linear time-dependent photocurrent generation and prolonged photocurrent decay, originating from charge trapping in the MoS2-organic vertical stack. Integrated with a plano-convex lens, the curved neuromorphic image sensor array derives a pre-processed image from a set of noisy optical inputs without redundant data storage, processing, or communication and without complex optics. The proposed imaging device can substantially improve the efficiency of the image acquisition and recognition process, a step toward next-generation machine vision.
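
    A toy model of the photon-triggered synaptic plasticity described above: the photocurrent builds quasi-linearly while light is on and decays slowly afterwards, mimicking charge trapping in the MoS2-organic stack. The gain and time constant are illustrative assumptions, not measured device parameters.

        import numpy as np

        def synaptic_photocurrent(light, dt=1e-3, gain=1.0, tau_decay=0.5):
            """light: optical power over time at one pixel (1-D array)."""
            i, trace = 0.0, []
            for p in light:
                i += gain * p * dt               # quasi-linear generation under light
                i *= np.exp(-dt / tau_decay)     # prolonged decay from trapped charges
                trace.append(i)
            return np.array(trace)

        pulse = np.r_[np.ones(100), np.zeros(400)]   # 100 ms light pulse, then dark
        trace = synaptic_photocurrent(pulse)
        print(trace[99], trace[-1])   # the current persists well after the pulse ends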

  5. Abstract

    The vision system of arthropods such as insects and crustaceans is based on the compound-eye architecture: a dense array of individual imaging elements (ommatidia) pointing along different directions. This arrangement is particularly attractive for imaging applications requiring extreme size miniaturization, wide-angle fields of view, and high sensitivity to motion. However, the implementation of cameras directly mimicking the eyes of common arthropods is complicated by their curved geometry. Here, we describe a lensless planar architecture in which each pixel of a standard image-sensor array is coated with an ensemble of metallic plasmonic nanostructures that transmits light only along a small, geometrically tunable distribution of incidence angles. A set of near-infrared devices providing directional photodetection peaked at different angles is designed, fabricated, and tested. Computational imaging techniques are then employed to demonstrate the ability of these devices to reconstruct high-quality images of relatively complex objects.
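
    The sketch below illustrates the computational-imaging step implied above: each angle-selective pixel samples the scene through its angular transmission profile, giving a linear system y = A x that can be inverted, e.g. by least squares. The Gaussian angular profiles and noise level are illustrative assumptions, not the fabricated devices' responses.

        import numpy as np

        rng = np.random.default_rng(2)
        n_angles, n_pixels = 60, 90
        scene = rng.random(n_angles)               # radiance vs. incidence angle

        # each pixel's plasmonic coating passes a narrow, tunable band of angles
        centers = np.linspace(0, n_angles - 1, n_pixels)
        angles = np.arange(n_angles)
        A = np.exp(-0.5 * ((angles[None, :] - centers[:, None]) / 3.0) ** 2)

        y = A @ scene + 0.01 * rng.standard_normal(n_pixels)   # noisy measurements
        x_hat = np.linalg.lstsq(A, y, rcond=None)[0]           # reconstructed scene
        print(np.corrcoef(x_hat, scene)[0, 1])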