

Title: In-sensor image memorization and encoding via optical neurons for bio-stimulus domain reduction toward visual cognitive processing
Abstract

As machine vision technology generates large amounts of data from sensors, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a potential solution for reducing unnecessary data transfer and realizing fast and energy-efficient visual cognitive processing. However, they still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated 1-photodiode 1-memristor (1P-1R) crossbar for in-sensor visual cognitive processing, emulating a mammalian image encoding process to extract features from the input images. Unlike other neuromorphic vision processes, the trained weight values are applied as input voltages to the image-saved crossbar array instead of being stored in the memristors, realizing the in-sensor computing paradigm. We believe the heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time and data-intensive machine-vision applications via bio-stimulus domain reduction.
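
The read-out scheme described in the abstract can be illustrated with a minimal NumPy sketch (not the authors' implementation): the captured image is assumed to be stored as memristor conductances in the crossbar, the trained weights are applied as row voltages, and each column current is then the weight-image dot product, so only a reduced feature vector leaves the sensor. Array sizes, voltage and conductance ranges, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: an 8x8 image has already been captured and written into the
# crossbar as memristor conductances G (in siemens), one device per pixel.
image = rng.random((8, 8))                  # normalized stored image
g_min, g_max = 1e-6, 1e-4                   # assumed conductance range (S)
G = g_min + image * (g_max - g_min)         # pixel value -> conductance

# Assumption: a trained weight vector is encoded as read voltages on the rows
# instead of being written into the memristors.
weights = rng.standard_normal(8)
V = 0.2 * weights                           # weight -> row voltage (V), assumed scale

# Kirchhoff's current law: each column current sums V_i * G_ij, so the crossbar
# computes the weight-image dot product in place and only this reduced
# feature vector is read out of the sensor.
I_columns = V @ G                           # shape (8,), one current per column
print(I_columns)
```

In this picture, changing the applied weight vector re-uses the same stored image for different feature extractions, which is the in-sensor encoding step the abstract refers to.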

 
Award ID(s): 1942868
NSF-PAR ID: 10370717
Author(s) / Creator(s): ; ; ; ; ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Nature Communications
Volume: 13
Issue: 1
ISSN: 2041-1723
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Conventional imaging and recognition systems require an extensive amount of data storage, pre-processing, and chip-to-chip communications, as well as aberration-proof light focusing with multiple lenses, to recognize an object from massive optical inputs. This is because separate chips (i.e., a flat image sensor array, memory device, and CPU) in conjunction with complicated optics must capture, store, and process massive image information independently. In contrast, human vision employs a highly efficient imaging and recognition process. Here, inspired by the human visual recognition system, we present a novel imaging device for efficient image acquisition and data pre-processing by conferring the neuromorphic data processing function on a curved image sensor array. The curved neuromorphic image sensor array is based on a heterostructure of MoS2 and poly(1,3,5-trimethyl-1,3,5-trivinyl cyclotrisiloxane). The curved neuromorphic image sensor array features photon-triggered synaptic plasticity owing to its quasi-linear time-dependent photocurrent generation and prolonged photocurrent decay, originating from charge trapping in the MoS2-organic vertical stack. The curved neuromorphic image sensor array integrated with a plano-convex lens derives a pre-processed image from a set of noisy optical inputs without redundant data storage, processing, and communications, as well as without complex optics. The proposed imaging device can substantially improve the efficiency of the image acquisition and recognition process, a step toward next-generation machine vision.
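
As a rough illustration of the pre-processing idea in the abstract above, the toy model below (an assumption, not the reported device physics) treats each pixel as a synapse whose photocurrent builds up quasi-linearly under illumination and decays slowly between frames; presenting many noisy optical inputs then reinforces the persistent pattern while uncorrelated noise averages out. All parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

pattern = np.zeros((8, 8))
pattern[2:6, 3:5] = 1.0                     # assumed "true" optical input

state = np.zeros_like(pattern)              # synaptic photocurrent per pixel
gain, retention = 0.2, 0.95                 # assumed potentiation gain / decay factor

for _ in range(50):                         # a set of noisy optical inputs
    frame = np.clip(pattern + 0.5 * rng.standard_normal(pattern.shape), 0.0, 1.0)
    # Quasi-linear photocurrent build-up under light, prolonged decay in between.
    state = retention * state + gain * frame

# Pixels illuminated in every frame accumulate a large photocurrent, while
# uncorrelated noise averages out: a pre-processed (denoised) image.
print(np.round(state / state.max(), 2))
```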

     
  2. Abstract

    Image sensors with internal computing capability enable in-sensor computing, which can significantly reduce the communication latency and power consumption for machine vision in distributed systems and robotics. Two-dimensional semiconductors have many advantages in realizing such intelligent vision sensors because of their tunable electrical and optical properties and amenability to heterogeneous integration. Here, we report a multifunctional infrared image sensor based on an array of black phosphorus programmable phototransistors (bP-PPT). By controlling the stored charges in the gate dielectric layers electrically and optically, the bP-PPT's electrical conductance and photoresponsivity can be locally or remotely programmed with 5-bit precision to implement an in-sensor convolutional neural network (CNN). The sensor array can receive optical images transmitted over a broad spectral range in the infrared and perform inference computation to process and recognize the images with 92% accuracy. The demonstrated bP image sensor array can be scaled up to build a more complex vision-sensory neural network, which will find many promising applications for distributed and remote multispectral sensing.
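
A hedged sketch of the in-sensor CNN idea in the abstract above: kernel weights are quantized to the 32 levels implied by 5-bit programming and treated as pixel responsivities, so summing the photocurrents of an image patch yields one feature-map value. This is not the authors' code; the kernel, image, and quantization scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize_5bit(w):
    """Snap weights onto 32 discrete levels (assumed symmetric range),
    mimicking 5-bit programming of the phototransistor responsivity."""
    step = 2 * np.abs(w).max() / (2 ** 5 - 1)
    return np.round(w / step) * step

kernel = quantize_5bit(rng.standard_normal((3, 3)))   # programmed responsivities
image = rng.random((16, 16))                          # incident IR intensity map

# In-sensor convolution: each pixel's photocurrent is intensity x responsivity,
# and the summed current of a 3x3 patch gives one CNN feature-map value.
out = np.zeros((14, 14))
for i in range(14):
    for j in range(14):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(out.shape)   # (14, 14) feature map computed inside the sensor array
```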

     
  3. Abstract

    In-sensor computing is an emerging architectural paradigm that fuses data acquisition and processing within the sensory domain. The integration of multiple functions into a single domain reduces the system footprint while minimizing the energy and time spent on data transfer between sensory and computing units. However, it is challenging for a simple and compact image sensor array to achieve both sensing and computing in each pixel. Here, this work demonstrates a focal plane array with a heterogeneously integrated one-photodiode one-resistor (1P-1R)-based artificial optical neuron that emulates the sensing, computing, and memorization of a biological retina system. This work employs an InGaAs photodiode featuring high responsivity and a broad spectrum that covers near-infrared (NIR) signals, together with an HfO2 memristor as the artificial synapse to achieve computing/memorization in the analog domain. Using the fabricated focal plane array integrated with an artificial neural network, this work performs in-sensor image identification of finger veins under NIR light illumination (≈84% accuracy). The proposed in-sensor image computing architecture, which broadly covers the NIR spectrum, offers widespread applications of focal plane arrays in computer vision, neuromorphic computing, biomedical engineering, etc.

     
  4. Abstract

    In in-sensor image preprocessing, the sensed image undergoes low-level processing, such as denoising, at the sensor end, similar to the retina of the human eye. Optoelectronic synapse devices are potential contenders for this purpose, as well as for subsequent applications in artificial neural networks (ANNs). Optoelectronic synapses can offer image pre-processing functionalities at the pixel itself, termed in-pixel computing. Denoising is an important problem in image preprocessing, and several approaches have been used to denoise input images. While most of those approaches require external circuitry, others are efficient only when the noisy pixels have significantly lower intensity than the actual pattern pixels. In this work, we present the innate ability of an optoelectronic synapse array to perform denoising at the pixel itself once it is trained to memorize an image. The synapses consist of phototransistors with a bilayer MoS2 channel and a p-Si/PtTe2 buried gate electrode. Our 7 × 7 array shows excellent robustness to noise due to the interplay between long-term potentiation and short-term potentiation. This bio-inspired strategy enables denoising of noise with higher intensity than the memorized pattern, without the use of any external circuitry. Specifically, because these synapses respond distinctively to wavelengths from 300 nm in the ultraviolet to 2 µm in the infrared, the pixel array also denoises mixed-color interferences. The "self-denoising" capability of such an artificial visual array can eliminate the need for raw data transmission and thus reduce subsequent image processing steps for supervised learning.
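
One way to picture the LTP/STP interplay described above is the two-time-constant toy model below (an assumption, not the measured device behavior): the memorized pattern is held in a slowly decaying long-term component, a brighter noise exposure adds only a fast-decaying short-term component, and reading after a short delay therefore recovers the pattern. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

pattern = np.zeros((7, 7))
pattern[1:6, 3] = 1.0                        # assumed memorized pattern (a bar)

ltp = 0.6 * pattern                          # long-term weights from prior training
noise = (rng.random((7, 7)) < 0.3) * 1.5     # noise brighter than the pattern
stp = noise                                  # short-term response to the noisy flash

tau_stp, tau_ltp = 2.0, 200.0                # assumed decay constants (a.u.)
t_read = 10.0                                # read delay after the exposure

# The short-term component decays quickly while the long-term component barely
# changes, so the read-out recovers the memorized pattern despite brighter noise.
response = ltp * np.exp(-t_read / tau_ltp) + stp * np.exp(-t_read / tau_stp)
print(np.round(response, 2))
```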

     
  5. Diffractive optical neural networks (DONNs) are emerging as high-throughput and energy-efficient hardware platforms to perform all-optical machine learning (ML) in machine vision systems. However, the currently demonstrated applications of DONNs are largely image classification tasks, which undermines the prospect of developing and utilizing such hardware for other ML applications. Herein, the deployment of an all-optical reconfigurable DONNs system for scientific computing is demonstrated numerically and experimentally, including guiding two-dimensional quantum material synthesis, predicting the properties of two-dimensional quantum materials and small molecular cancer drugs, predicting the device response of nanopatterned integrated photonic power splitters, and the dynamic stabilization of an inverted pendulum with reinforcement learning. Despite a large variety of input data structures, a universal feature engineering approach is developed to convert categorical input features into images that can be processed in the DONNs system. The results open up new opportunities for employing DONNs systems in a broad range of ML applications.
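
The categorical-to-image feature engineering mentioned above could, for example, work along the lines of the sketch below, which one-hot encodes each categorical feature and renders it as a bright block in a 2D intensity image for the DONN input plane. The block size and layout are assumptions; the paper's exact encoding is not reproduced here.

```python
import numpy as np

def features_to_image(values, vocab_sizes, block=8):
    """One-hot encode each categorical feature and draw it as a bright block:
    one row of blocks per feature, one column of blocks per category."""
    n_feat, max_vocab = len(values), max(vocab_sizes)
    img = np.zeros((n_feat * block, max_vocab * block))
    for row, (v, size) in enumerate(zip(values, vocab_sizes)):
        assert 0 <= v < size, "category index out of range"
        img[row * block:(row + 1) * block, v * block:(v + 1) * block] = 1.0
    return img

# Example: three categorical features with vocabularies of 4, 3, and 5 symbols.
image = features_to_image(values=[2, 0, 4], vocab_sizes=[4, 3, 5])
print(image.shape)   # (24, 40) intensity pattern fed to the DONN input plane
```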

     