Title: Non-Line-of-Sight Detection Based on Neuromorphic Time-of-Flight Sensing
Non-line-of-sight (NLOS) detection and ranging aim to identify hidden objects by sensing indirect light reflections. Although numerous computational methods have been proposed for NLOS detection and imaging, the post-signal processing required by peripheral circuits remains complex. One possible solution for simplifying NLOS detection and ranging involves the use of neuromorphic devices, such as memristors, which have intrinsic resistive-switching capabilities and can store spatiotemporal information. In this study, we employed the memristive spike-timing-dependent plasticity learning rule to program the time-of-flight (ToF) depth information directly into a memristor medium. By coupling the transmitted signal from the source with the photocurrent from the target object into a single memristor unit, we induced a tunable programming pulse based on the time interval between the two superimposed signals. Here, this neuromorphic ToF principle is employed to detect and range NLOS objects without requiring complex peripheral circuitry to process raw signals. We experimentally demonstrated the effectiveness of the neuromorphic ToF principle by integrating an HfO2 memristor and an avalanche photodiode to detect NLOS objects in multiple directions. This technology has potential applications in various fields, such as automotive navigation, machine learning, and biomedical engineering.
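
The time-interval-to-conductance mapping described in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' implementation: the exponential STDP-like window, the time constant TAU_STDP, the conductance values, and the function names are all illustrative assumptions, chosen only to show how the delay between a transmitted pulse and the returned photocurrent could be written into, and read back from, a single memristor without peripheral post-processing.

```python
# Minimal sketch (assumed model, not the paper's device physics) of the
# neuromorphic ToF idea: the delay between the transmitted pulse and the
# returned photocurrent sets the overlap of two superimposed programming
# signals on one memristor, and the resulting conductance change encodes depth.

import math

C_LIGHT = 3.0e8          # speed of light (m/s)
TAU_STDP = 20e-9         # assumed plasticity time constant (s)
DG_MAX = 1e-4            # assumed maximum conductance increment (S)

def conductance_update(delta_t: float) -> float:
    """Conductance increment for a transmit/return delay delta_t (s).

    Shorter delays (nearer objects) give larger overlap between the two
    superimposed pulses and hence a larger weight change, mimicking an
    STDP-like timing window.
    """
    return DG_MAX * math.exp(-delta_t / TAU_STDP)

def write_depth(g_initial: float, emit_t: float, detect_t: float) -> float:
    """Program one memristor with the ToF of a single echo."""
    delta_t = detect_t - emit_t
    return g_initial + conductance_update(delta_t)

def read_depth(g_programmed: float, g_initial: float) -> float:
    """Invert the assumed update rule to recover range from stored conductance."""
    delta_g = g_programmed - g_initial
    delta_t = -TAU_STDP * math.log(delta_g / DG_MAX)
    return 0.5 * C_LIGHT * delta_t   # round-trip time -> one-way distance

if __name__ == "__main__":
    g0 = 1e-5                                            # assumed initial conductance (S)
    g1 = write_depth(g0, emit_t=0.0, detect_t=20e-9)     # 20 ns echo, i.e. ~3 m away
    print(f"recovered range: {read_depth(g1, g0):.2f} m")
```
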
Award ID(s):
1942868
NSF-PAR ID:
10488895
Author(s) / Creator(s):
Publisher / Repository:
ACS Publications
Date Published:
Journal Name:
ACS Photonics
Volume:
10
Issue:
8
ISSN:
2330-4022
Page Range / eLocation ID:
2739 to 2745
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Video scene analysis is a well-investigated area where researchers have devoted efforts to detecting and classifying people and objects in the scene. However, real-life scenes are more complex: the intrinsic states of objects (e.g., machine operating states or human vital signs) are often overlooked by vision-based scene analysis. Recent work has proposed a radio frequency (RF) sensing technique, wireless vibrometry, that employs wireless signals to sense subtle vibrations from objects and infer their internal states. We envision that the combination of video scene analysis with wireless vibrometry forms a more comprehensive understanding of the scene, namely "rich scene analysis". However, the RF sensors used in wireless vibrometry provide only time series, and it is challenging to associate these time series data with multiple real-world objects. We propose a real-time RF-vision sensor fusion system, Capricorn, that efficiently builds a cross-modal correspondence between visual pixels and RF time series to better understand the complex nature of a scene. The vision sensors in Capricorn model the surrounding environment in 3D and obtain the distances of different objects. In the RF domain, distance is proportional to the signal time of flight (ToF), and we can leverage the ToF to separate the RF time series corresponding to each object (a sketch of this separation step appears after this list). The RF-vision sensor fusion in Capricorn brings multiple benefits. The vision sensors provide environmental context to guide the processing of RF data, which helps us select the most appropriate algorithms and models. Meanwhile, the RF sensor yields additional information that is originally invisible to vision sensors, providing insight into objects' intrinsic states. Our extensive evaluations show that Capricorn monitors the operating status of multiple appliances in real time with an accuracy above 97% and recovers vital signs such as respiration from multiple people. A video (https://youtu.be/b-5nav3Fi78) demonstrates the capability of Capricorn.
  2. The pervasive operation of consumer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about their privacy threats to the public. In recent years, privacy invasion events caused by consumer drones have been frequently reported. Given this fact, timely detection of invading drones has become an emerging task. Existing solutions using active radar, video, or acoustic sensors are usually too costly (especially for individuals) or exhibit various constraints (e.g., requiring visual line of sight). Recent research on drone detection with passive RF signals provides an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which makes these approaches work only under very constrained conditions. Support for the more common scenario, i.e., non-line-of-sight (NLOS), is still missing from low-cost solutions. In this paper, we propose a novel detection system for privacy invasion caused by consumer drones. Our system features accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motions and RF signals under the NLOS condition, we find that the RF signatures of drones are somewhat "amplified" by multipath in NLOS. Based on this observation, we design a two-step solution that first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones (a sketch of this pipeline appears after this list). Our experimental results show that LOS and NLOS signals can be identified with accuracies of 98.4% and 96%, respectively. Our drone detection rate under the NLOS condition is above 97% with a system implemented on a Raspberry Pi 3 B+.
  3. Associative memory is a widespread self-learning mechanism in living organisms that enables the nervous system to remember the relationship between two concurrent events. The significance of rebuilding associative memory at the behavioral level is not only to reveal a way of designing a brain-like self-learning neuromorphic system but also to explore a method of understanding the learning mechanism of a nervous system. In this paper, associative memory learning at the behavioral level is realized that successfully associates concurrent visual and auditory information (the pronunciation and image of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) with one another instead of relating multiple analog signals. In this way, the information carried and preprocessed by these ANNs can be associated. A neuron, named the signal intensity encoding neuron (SIEN), has been designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals (a sketch of this encoding appears after this list). The spiking signals are then correlated by an associative neural network implemented with a three-dimensional (3-D) memristor array. Furthermore, the selector devices that limit the design area in traditional memristor cells are avoided by our novel memristor weight-updating scheme. With the novel SIENs, the 3-D memristive synapse, and the proposed memristor weight-updating scheme, simulation results demonstrate that our associative memory learning method and the corresponding circuit implementations successfully associate the pronunciation and image of digits, mimicking human-like associative memory learning behavior.
  4. This paper presents a cost-effective, non-intrusive, and easy-to-deploy traffic count data collection method using two-dimensional light detection and ranging (LiDAR) technology. The proposed method integrates a LiDAR sensor, the continuous wavelet transform (CWT), and a support vector machine (SVM) into a single framework for traffic counting (a sketch of this pipeline appears after this list). LiDAR is adopted because the technology is economical and easily accessible. Moreover, its 360° visibility and accurate distance information make it more reliable than radar, which uses electromagnetic waves instead of light rays. The obtained distance data are converted into signals. The CWT is employed to detect deviations in the distance profile because of its efficiency in detecting modest changes over time. The SVM, a supervised machine learning tool for data classification and regression, is applied to classify the distance data points obtained from the sensor into detection and non-detection cases, a task that is highly complex. A proof-of-concept (POC) test was conducted at three different sites in Newark, New Jersey, to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance in vehicle counting, with 83–94% accuracy. It was found that the accuracy of the proposed method is affected by the color of the exterior surface of a vehicle.
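
For the Capricorn system in item 1, the following minimal sketch illustrates how vision-derived distances can be used to separate per-object RF time series: each distance maps to a range bin (equivalently, a time of flight), and that bin's slow-time samples become the object's vibration signal. The range-profile layout, the bin resolution, and the NumPy-based processing are assumptions, not the paper's code.

```python
# Minimal sketch, assuming a (num_frames, num_range_bins) radar range-profile
# matrix: pick out the bin whose time of flight matches each camera-measured
# object distance and return that bin's slow-time series for vibration analysis.

import numpy as np

def distance_to_bin(distance_m: float, range_resolution_m: float) -> int:
    """Map a vision-estimated distance to an RF range-bin index.

    The round-trip time of flight is tof = 2 * distance / c; with a fixed
    range resolution, the same information reduces to a bin index.
    """
    return int(round(distance_m / range_resolution_m))

def separate_object_series(range_profile: np.ndarray,
                           object_distances: dict,
                           range_resolution_m: float) -> dict:
    """Return one slow-time series per labeled object, taken from the range
    bin that corresponds to its camera-measured distance."""
    series = {}
    for label, dist in object_distances.items():
        b = distance_to_bin(dist, range_resolution_m)
        series[label] = range_profile[:, b]   # vibration signal of this object
    return series

if __name__ == "__main__":
    frames = np.abs(np.random.randn(1000, 256))      # placeholder radar data
    objects = {"fan": 1.2, "person": 2.9}            # metres, from the vision pipeline
    per_object = separate_object_series(frames, objects, range_resolution_m=0.05)
    print({name: ts.shape for name, ts in per_object.items()})
```
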
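
For the drone-detection system in item 2, the sketch below mirrors the described two-step pipeline: a first classifier labels RSS windows as LOS or NLOS, and a second, condition-specific classifier then decides whether a drone is present. The statistical features and the scikit-learn random forests stand in for the paper's deep learning models and are assumptions.

```python
# Minimal sketch of the two-step RSS pipeline (random forests substituted for
# the paper's deep model). Training data must be supplied by the caller.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(rss_window: np.ndarray) -> np.ndarray:
    """Simple statistical features of one RSS window (dBm samples)."""
    return np.array([rss_window.mean(),
                     rss_window.std(),
                     rss_window.max() - rss_window.min()])

class TwoStepDroneDetector:
    def __init__(self):
        self.channel_clf = RandomForestClassifier(n_estimators=50)
        self.drone_clf = {c: RandomForestClassifier(n_estimators=100)
                          for c in ("LOS", "NLOS")}

    def fit(self, windows, channel_labels, drone_labels):
        X = np.stack([window_features(w) for w in windows])
        channel_labels = np.asarray(channel_labels)
        drone_labels = np.asarray(drone_labels)
        self.channel_clf.fit(X, channel_labels)          # step 1: LOS vs. NLOS
        for cond in ("LOS", "NLOS"):                     # step 2: per-condition detector
            m = channel_labels == cond
            self.drone_clf[cond].fit(X[m], drone_labels[m])
        return self

    def predict(self, window):
        x = window_features(window).reshape(1, -1)
        cond = self.channel_clf.predict(x)[0]            # route by channel condition
        return cond, self.drone_clf[cond].predict(x)[0]
```

Routing to a condition-specific detector reflects the abstract's observation that multipath reshapes ("amplifies") the drone signature under NLOS, so LOS and NLOS windows are best judged by separate models.
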
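
For the associative memory system in item 3, the sketch below illustrates the signal-intensity encoding idea: a scalar ANN output is turned into a spike train whose amplitude and rate both grow with the value, and two concurrent trains strengthen a simulated memristive weight in a Hebbian fashion. All scaling constants and the weight-update rule are illustrative assumptions, not the paper's circuit.

```python
# Minimal sketch of SIEN-style encoding plus a Hebbian association on a
# simulated memristive weight (assumed constants, not the paper's hardware).

import numpy as np

def encode_spikes(value: float, duration_s: float = 0.1, dt: float = 1e-3,
                  max_rate_hz: float = 100.0, max_amp_v: float = 1.0) -> np.ndarray:
    """Encode a value in [0, 1] as an analog spike train (volts per time step).

    Both the firing rate and the spike amplitude scale with the value,
    mirroring the magnitude-and-frequency encoding described above.
    """
    n_steps = int(duration_s / dt)
    rate_hz = value * max_rate_hz
    spikes = (np.random.rand(n_steps) < rate_hz * dt).astype(float)
    return spikes * value * max_amp_v

def associate(weight: float, pre: np.ndarray, post: np.ndarray,
              learning_rate: float = 1e-3) -> float:
    """Strengthen a simulated memristive weight when two spike trains co-occur."""
    return weight + learning_rate * float(np.dot(pre, post))

if __name__ == "__main__":
    w = 0.0
    # Repeated co-presentation of, e.g., an image-ANN confidence for digit "3"
    # and an audio-ANN confidence for spoken "three" builds the association.
    for _ in range(20):
        w = associate(w, encode_spikes(0.9), encode_spikes(0.8))
    print(f"associative weight after training: {w:.4f}")
```
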
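
For the traffic-counting method in item 4, the sketch below strings together the described steps: a Ricker (Mexican-hat) wavelet is convolved with the one-dimensional distance signal at several scales (a basic CWT) to highlight deviations, an SVM labels each sample as detection or non-detection, and vehicles are counted on non-detection-to-detection transitions. The scales, the counting rule, and the synthetic demo data are assumptions, not the authors' implementation.

```python
# Minimal sketch of the LiDAR distance -> CWT features -> SVM labels -> count
# pipeline, with hand-rolled Ricker wavelets and synthetic placeholder data.

import numpy as np
from sklearn.svm import SVC

def ricker(points: int, a: float) -> np.ndarray:
    """Ricker (Mexican-hat) wavelet with width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_features(distance: np.ndarray, scales=(2, 4, 8, 16)) -> np.ndarray:
    """Per-sample feature vector of |CWT coefficients| at several scales."""
    coeffs = [np.abs(np.convolve(distance, ricker(10 * a, a), mode="same"))
              for a in scales]
    return np.stack(coeffs, axis=1)            # shape: (num_samples, num_scales)

def count_vehicles(labels: np.ndarray) -> int:
    """Count rising edges of the detection label sequence (assumed counting rule)."""
    return int(np.sum((labels[1:] == 1) & (labels[:-1] == 0)))

if __name__ == "__main__":
    # Placeholder training data: a noisy background distance plus two "vehicles"
    # that pass in front of the sensor, with hand-labeled detection samples.
    rng = np.random.default_rng(0)
    distance = 10.0 + 0.05 * rng.standard_normal(2000)
    distance[500:560] -= 6.0
    distance[1200:1270] -= 5.0
    truth = np.zeros(2000, dtype=int)
    truth[500:560] = 1
    truth[1200:1270] = 1

    X = cwt_features(distance)
    clf = SVC(kernel="rbf").fit(X, truth)      # detection vs. non-detection
    pred = clf.predict(X)
    print("vehicles counted:", count_vehicles(pred))
```
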