Neuromorphic computing systems promise high energy efficiency and low latency. In particular, when integrated with neuromorphic sensors, they can produce intelligent systems for a broad range of applications. An event-based camera is one such neuromorphic sensor, inspired by the sparse and asynchronous spike representation of the biological visual system. However, processing event data requires either expensive feature descriptors that transform spikes into frames, or spiking neural networks (SNNs) that are expensive to train. In this work, a neural network architecture, the reservoir nodes-enabled neuromorphic vision sensing network (RN-Net), is proposed based on dynamic temporal encoding by on-sensor reservoirs and simple deep neural network (DNN) blocks. The reservoir nodes enable efficient temporal processing of asynchronous events by leveraging the native dynamics of the node devices, while the DNN blocks enable spatial feature processing. By combining these blocks in a hierarchical structure, RN-Net offers efficient processing of both local and global spatiotemporal features. RN-Net executes dynamic vision tasks captured by event-based cameras at the highest accuracy reported to date, with a network one order of magnitude smaller. The use of simple DNN blocks and standard backpropagation-based training rules further reduces implementation and training costs.
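As a rough illustration of the architecture described above, the sketch below pairs a leaky-integrator stand-in for the on-sensor reservoir nodes with a small convolutional backbone trained by ordinary backpropagation. The time constant, layer sizes, and all names are illustrative assumptions, not the authors' design.

```python
# A minimal sketch of the RN-Net idea: fading-memory reservoir encoding of
# asynchronous events, followed by simple DNN blocks for spatial features.
# All parameters here are illustrative, not taken from the paper.
import torch
import torch.nn as nn

def reservoir_encode(events, tau=0.9):
    """Accumulate asynchronous events into frames via short-term memory decay.

    events: (steps, H, W) spike tensor; each step's state is the decayed
    previous state plus the new events, mimicking a fading-memory reservoir node.
    """
    state = torch.zeros_like(events[0])
    frames = []
    for t in range(events.shape[0]):
        state = tau * state + events[t]  # assumed leaky dynamics of the node device
        frames.append(state.clone())
    return torch.stack(frames)  # (steps, H, W) temporally encoded features

class TinyRNNet(nn.Module):
    """Hierarchy: reservoir-encoded input -> conv blocks for spatial features."""
    def __init__(self, steps=16, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(steps, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, encoded):            # encoded: (B, steps, H, W)
        z = self.backbone(encoded).flatten(1)
        return self.head(z)                # trainable with standard backprop
```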
Parallelizing analog in-sensor visual processing with arrays of gate-tunable silicon photodetectors
Abstract: In-sensor processing of dynamic and static information of visual objects avoids exchanging redundant data between physically separated sensing and computing units, holding promise for computer vision hardware. To this end, gate-tunable photodetectors, if built in a highly scalable array form, would lend themselves to large-scale in-sensor visual processing because of their potential for volume production and, hence, parallel operation. Here we present two scalable in-sensor visual processing arrays based on dual-gate silicon photodiodes, enabling parallelized event sensing and edge detection, respectively. Both arrays are built in CMOS-compatible processes and operate with zero static power. Furthermore, their bipolar analog output captures the amplitude of event-driven light changes and the spatial convolution of optical power densities at the device level, a feature that helps boost their performance in classifying dynamic motions and static images. Capable of processing both temporal and spatial visual information, these retinomorphic arrays suggest a path towards large-scale in-sensor visual processing systems for high-throughput computer vision.
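The two array functions described above can be illustrated numerically: event sensing as a bipolar frame-to-frame difference, and edge detection as a spatial convolution of intensities. The threshold and kernel below are assumptions for illustration; in the paper these operations are realized in the photodiode hardware itself.

```python
# Toy numerical sketch of the two in-sensor operations described above.
# Threshold and kernel are illustrative; the arrays compute these at the
# device level rather than in software.
import numpy as np
from scipy.signal import convolve2d

def event_frame(prev, curr, threshold=0.05):
    """Bipolar events: +1 for brightening, -1 for dimming, 0 otherwise."""
    diff = curr - prev
    return np.sign(diff) * (np.abs(diff) > threshold)

def edge_map(image):
    """Edge detection via a Laplacian-like spatial convolution of intensities."""
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    return convolve2d(image, kernel, mode="same", boundary="symm")
```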
- PAR ID: 10592078
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Nature Communications
- Volume: 16
- Issue: 1
- ISSN: 2041-1723
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Photonics provides a promising approach for image processing by spatial filtering, with the advantage of faster speeds and lower power consumption compared to electronic digital solutions. However, traditional optical spatial filters suffer from bulky form factors that limit their portability. Here we present a new approach based on pixel arrays of plasmonic directional image sensors, designed to selectively detect light incident along a small, geometrically tunable set of directions. The resulting imaging systems can function as optical spatial filters without any external filtering elements, leading to extreme size miniaturization. Furthermore, they offer the distinct capability to perform multiple filtering operations at the same time, through the use of sensor arrays partitioned into blocks of adjacent pixels with different angular responses. To establish the image processing capabilities of these devices, we present a rigorous theoretical model of their filter transfer function under both coherent and incoherent illumination. Next, we use the measured angle-resolved responsivity of prototype devices to demonstrate two examples of relevant functionalities: (1) the visualization of otherwise invisible phase objects and (2) spatial differentiation with incoherent light. These results are significant for a multitude of imaging applications ranging from microscopy in biomedicine to object recognition for computer vision.
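For intuition, spatial filtering by angle-selective pixels can be modeled in the Fourier domain, where spatial frequencies of the incident field map to incidence angles; a response that rejects near-normal incidence then acts as a high-pass filter, i.e., a spatial differentiator. The transfer function below is a generic stand-in, not the measured device response.

```python
# Hedged sketch of angle-selective detection as Fourier-domain spatial
# filtering. The Gaussian-notch transfer function is an assumption for
# illustration, not the devices' measured angular responsivity.
import numpy as np

def directional_filter(image, cutoff=0.05):
    F = np.fft.fft2(image)
    fy, fx = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    H = 1.0 - np.exp(-(fx**2 + fy**2) / (2 * cutoff**2))  # suppress near-normal light
    return np.real(np.fft.ifft2(H * F))  # edges and phase gradients are enhanced
```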
- Neuromorphic computing, exemplified by breakthroughs in machine vision through concepts like address-event representation and send-on-delta sampling, has revolutionised sensor technology, enabling low-latency and high dynamic range perception with minimal bandwidth. While these advancements are prominent in vision and auditory perception, their potential in machine olfaction remains under-explored, particularly in the context of fast sensing. Here, we outline the perspectives for neuromorphic principles in machine olfaction. Considering the physical characteristics of turbulent odour environments, we argue that event-driven signal processing is optimally suited to the inherent properties of olfactory signals. We highlight the lack of bandwidth limitation due to turbulent dispersal processes, the characteristic temporal and chemical sparsity, as well as the high information density of the odour landscape. Further, we critically review and discuss the literature on neuromorphic olfaction, particularly focusing on neuromorphic principles such as event generation algorithms, information encoding mechanisms, event processing schemes (spiking neural networks), and learning. We discuss how the application of neuromorphic principles may significantly enhance response time and task performance in robotic olfaction, enabling autonomous systems to perform complex tasks in turbulent environments such as environmental monitoring, odour-guided search and rescue operations, and hazard detection.
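Send-on-delta sampling, named above as one of the event generation schemes, can be sketched in a few lines: an event is emitted only when the signal departs from its last reference value by a fixed delta. The threshold and polarity encoding here are generic illustrations rather than any specific algorithm from the cited literature.

```python
# Minimal send-on-delta event generator applied to a (hypothetical) gas-sensor
# time series; delta and the (index, polarity) encoding are assumptions.
def send_on_delta(samples, delta=0.1):
    """Emit (index, +1/-1) events whenever the signal moves by >= delta."""
    events, reference = [], samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - reference) >= delta:
            events.append((i, 1 if x > reference else -1))
            reference = x  # re-anchor the reference on every emitted event
    return events
```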
- Intelligent systems commonly employ vision sensors like cameras to analyze a scene. Recent work has proposed a wireless sensing technique, wireless vibrometry, to enrich the scene analysis generated by vision sensors. Wireless vibrometry employs wireless signals to sense subtle vibrations of objects and infer their internal states. However, it is difficult for pure radio-frequency (RF) sensing systems to obtain objects' visual appearances (e.g., object types and locations), especially when an object is inactive. Thus, most existing wireless vibrometry systems assume that the number and types of objects in the scene are known. The key to removing these presumptions is to build a connection between wireless sensor time series and vision sensor images. We present Capricorn, a vision-guided wireless vibrometry system. In Capricorn, the object type information from vision sensors guides the wireless vibrometry system to select the most appropriate signal processing pipeline. The object tracking capability in computer vision also helps wireless systems efficiently detect and separate vibrations from multiple objects in real time.
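The vision-guided selection step can be pictured as a simple dispatch from detected object class to a vibration-processing routine; the class names and routines below are placeholders, not Capricorn's actual pipelines.

```python
# Schematic sketch of vision-guided pipeline selection. Classes, routines,
# and parameters are hypothetical stand-ins for the system's real pipelines.
import numpy as np

def pipeline_for(object_class):
    pipelines = {
        "speaker": lambda v: np.abs(np.fft.rfft(v)),         # audio-band spectrum
        "engine":  lambda v: np.convolve(v, np.ones(8) / 8),  # low-rate smoothing
    }
    return pipelines.get(object_class, lambda v: v)  # fall back to the raw signal

vibration = np.random.randn(256)            # stand-in for an RF vibrometry trace
features = pipeline_for("speaker")(vibration)
```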
- Event cameras, which feature pixels that independently respond to changes in brightness, are becoming increasingly popular in high-speed applications due to their lower latency, reduced bandwidth requirements, and enhanced dynamic range compared to traditional frame-based cameras. Numerous imaging and vision techniques have leveraged event cameras for high-speed scene understanding by capturing high-framerate, high-dynamic-range videos, primarily utilizing the temporal advantages inherent to event cameras. Additionally, imaging and vision techniques have utilized the light field, a complementary dimension to temporal information, for enhanced scene understanding. In this work, we propose "Event Fields", a new approach that utilizes innovative optical designs for event cameras to capture light fields at high speed. We develop the underlying mathematical framework for Event Fields and introduce two foundational frameworks to capture them practically: spatial multiplexing to capture temporal derivatives and temporal multiplexing to capture angular derivatives. To realize these, we design two complementary optical setups: one using a kaleidoscope for spatial multiplexing and another using a galvanometer for temporal multiplexing. We evaluate the performance of both designs using a custom-built simulator and real hardware prototypes, showcasing their distinct benefits. Our event fields unlock the full advantages of typical light fields, like post-capture refocusing and depth estimation, now supercharged for high-speed and high-dynamic-range scenes. This novel light-sensing paradigm opens doors to new applications in photography, robotics, and AR/VR, and presents fresh challenges in rendering and machine learning.
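For readers unfamiliar with the underlying sensor, the event-pixel behavior these optical designs build on can be modeled as follows: each pixel fires a signed event whenever its log-intensity changes by a contrast threshold. The optics themselves (kaleidoscope and galvanometer multiplexing) are not modeled in this toy sketch.

```python
# Toy model of a single event-camera pixel: threshold crossings in
# log-intensity emit signed events. Contrast threshold is an assumption.
import numpy as np

def pixel_events(intensity, contrast=0.2):
    """intensity: 1-D array of one pixel's brightness over time."""
    log_i = np.log(np.maximum(intensity, 1e-6))
    events, ref = [], log_i[0]
    for t in range(1, len(log_i)):
        while abs(log_i[t] - ref) >= contrast:  # possibly several events per step
            polarity = 1 if log_i[t] > ref else -1
            events.append((t, polarity))
            ref += polarity * contrast          # advance reference by one threshold
    return events
```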