

Title: Machine Learning in Sensors for Collision Avoidance
Abstract— Currently available automotive radars are designed to stream real-time 2D image data over high-speed links to a central ADAS (Advanced Driver-Assistance System) computer for object recognition, which contributes considerably to the system's power consumption and complexity. This paper presents preliminary work on a new in-sensor computing architecture that extracts representative features from raw sensor data to detect and identify objects with radar signals. The new architecture significantly reduces the data transferred between sensors and the central ADAS computer, yielding substantial energy savings and latency reductions while maintaining sufficient accuracy and preserving image detail. An experimental prototype was built using the Texas Instruments AWR1243 Frequency-Modulated Continuous Wave (FMCW) radar board. We carried out experiments with the prototype to collect radar images, preprocess raw data, and transfer feature vectors to the central ADAS computer for classification and object detection. Two approaches are presented. First, a vanilla autoencoder demonstrates the possibility of data reduction on radar signals. Second, a convolutional neural network based cross-domain deep learning architecture, evaluated on a sample dataset, shows the feasibility of computing Range-Angle heatmaps directly on the sensor board, eliminating the need for raw-data preprocessing on the central ADAS computer. We show that Range-Angle heatmaps can be reconstructed with very high accuracy by leveraging deep learning architectures. Implementing such an architecture on the sensor board can reduce the amount of data transferred from sensors to the central ADAS computer, indicating great potential for energy-efficient deep learning in such environments.
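The abstract does not give network details, so the PyTorch sketch below only illustrates the vanilla-autoencoder idea: compress a radar frame in-sensor and send the low-dimensional code to the central ADAS computer. The layer sizes, the 1024-sample frame length, and the 64-dimensional code are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class RadarAutoencoder(nn.Module):
    """Vanilla autoencoder: the encoder would run in-sensor, and only the
    low-dimensional code would cross the link to the ADAS computer."""
    def __init__(self, n_inputs=1024, n_code=64):   # sizes are assumptions
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 256), nn.ReLU(),
            nn.Linear(256, n_code),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_code, 256), nn.ReLU(),
            nn.Linear(256, n_inputs),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Toy reconstruction training on random stand-in data; real inputs would
# be flattened FMCW frames from the AWR1243.
model = RadarAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(128, 1024)
for _ in range(10):
    recon, _ = model(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
```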
Award ID(s):
2106750 2027069
PAR ID:
10511649
Author(s) / Creator(s):
; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
Proceedings of International Conference on Computing, Networking and Communications (ICNC 2024)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Vehicle detection with visual sensors such as lidar and camera is one of the critical functions enabling autonomous driving. While these sensors generate fine-grained point clouds or high-resolution images with rich information in good weather, they fail in adverse weather (e.g., fog), where opaque particles distort light and significantly reduce visibility. Existing methods relying on lidar or camera therefore suffer significant performance degradation in rare but critical adverse weather conditions. To remedy this, we exploit complementary radar, which is less affected by adverse weather and is becoming prevalent on vehicles. In this paper, we present the Multimodal Vehicle Detection Network (MVDNet), a two-stage deep fusion detector that first generates proposals from the two sensors and then fuses region-wise features between the multimodal sensor streams to improve the final detection results. To evaluate MVDNet, we create a procedurally generated training dataset from raw lidar and radar signals collected in the open-source Oxford Radar RobotCar dataset. We show that the proposed MVDNet surpasses other state-of-the-art methods, notably in terms of Average Precision (AP), especially in adverse weather conditions. The code and data are available at https://github.com/qiank10/MVDNet.
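MVDNet's actual fusion also involves attention over temporal sensor streams, which is omitted here; the minimal sketch below, with assumed channel counts and a 7×7 ROI size, only shows the general pattern of fusing region-wise features pooled from a lidar branch and a radar branch.

```python
import torch
import torch.nn as nn

class RegionFusion(nn.Module):
    """Fuse region-wise (per-proposal) features from a lidar branch and a
    radar branch by channel concatenation plus convolution. Shapes are
    illustrative assumptions, not MVDNet's published configuration."""
    def __init__(self, c_lidar=256, c_radar=256, c_out=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_lidar + c_radar, c_out, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1),
        )

    def forward(self, f_lidar, f_radar):
        # f_lidar, f_radar: (num_rois, C, 7, 7) pooled proposal features
        return self.fuse(torch.cat([f_lidar, f_radar], dim=1))

rois_lidar = torch.randn(8, 256, 7, 7)
rois_radar = torch.randn(8, 256, 7, 7)
fused = RegionFusion()(rois_lidar, rois_radar)   # -> (8, 256, 7, 7)
```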
  2. Autonomous vehicles (AVs) suffer reduced maneuverability and performance when sensor performance degrades in fog. Such degradation can cause significant object detection errors in AVs' safety-critical conditions. For instance, YOLOv5 performs well under favorable weather but suffers mis-detections and false positives due to atmospheric scattering caused by fog particles. Existing deep object detection techniques often achieve high accuracy but detect objects in fog slowly; conversely, deep learning methods with fast detection speeds sacrifice accuracy. This lack of balance between detection speed and accuracy in fog persists. This paper presents an improved YOLOv5-based multi-sensor fusion network that combines radar object detection with camera image bounding boxes. We transform radar detections by mapping them into two-dimensional image coordinates and project the resulting radar image onto the camera image. Using an attention mechanism, we emphasize and improve the important feature representations used for object detection while reducing high-level feature information loss. We trained and tested our multi-sensor fusion network on clear- and multi-fog-weather datasets obtained from the CARLA simulator. Our results show that the proposed method significantly enhances the detection of small and distant objects. Our small CR-YOLOnet model best strikes a balance between accuracy and speed, achieving an accuracy of 0.849 at 69 fps.
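The paper's calibration parameters are not given; the NumPy sketch below shows the standard pinhole projection one would use to map 3-D radar detections into 2-D camera image coordinates. The extrinsics T_cam_radar and intrinsics K here are placeholders, not the authors' values.

```python
import numpy as np

def project_radar_to_image(points_radar, T_cam_radar, K):
    """Project 3-D radar detections (N, 3) into pixel coordinates (N, 2)
    using a pinhole camera model."""
    n = points_radar.shape[0]
    pts_h = np.hstack([points_radar, np.ones((n, 1))])  # homogeneous coords
    pts_cam = (T_cam_radar @ pts_h.T)[:3]               # radar -> camera frame
    pix = K @ pts_cam                                   # camera -> image plane
    return (pix[:2] / pix[2]).T                         # perspective divide

# Placeholder calibration: 800 px focal length, 1280x720 principal point.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam_radar = np.eye(4)                                 # identity for illustration
uv = project_radar_to_image(np.array([[2.0, 0.5, 30.0]]), T_cam_radar, K)
```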
  3. Kehtarnavaz, Nasser; Shirvaikar, Mukul V. (Ed.)
    The Internet of Things (IoT) uses cloud-enabled data sharing to connect physical objects to sensors, processing software, and other technologies via the Internet, enabling a vast network of communication among these physical objects and their corresponding data. This study investigates the use of an IoT development board for real-time communication and processing of sensor data, specifically images from a camera. The IoT development board and camera are programmed to capture images for object detection and analysis. Data processing is performed on board, which includes the microcontroller and wireless communication with the sensor. IoT connectivity and simulated test results verifying real-time signal communication and processing are presented.
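The abstract does not name the board's camera API or detection model, so the sketch below uses OpenCV and a stock Haar cascade as stand-ins for an on-board capture-and-detect loop that transmits only detections rather than raw frames.

```python
import cv2

# Illustrative on-board capture-and-detect loop; OpenCV and a stock Haar
# cascade stand in for the actual on-board pipeline.
cap = cv2.VideoCapture(0)                       # the board's camera
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for _ in range(100):                            # bounded demo loop
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    # Send only the detections, not the raw frame, over the wireless
    # link (transport code omitted).
    print([tuple(map(int, b)) for b in boxes])

cap.release()
```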
    Recent advances in retinal neuroscience have fueled various hardware and algorithmic efforts to develop retina-inspired solutions for computer vision tasks. In this work, we focus on a fundamental visual feature of the mammalian retina, Object Motion Sensitivity (OMS). Using DVS data from the EV-IMO dataset, we analyze the performance of an algorithmic implementation of OMS circuitry for motion segmentation in the presence of ego-motion. This holistic analysis considers the underlying constraints arising from the hardware circuit implementation. We present novel CMOS circuits that implement OMS functionality inside image sensors while providing run-time reconfigurability of key algorithmic parameters. In-sensor technologies for dynamic environment adaptation are crucial for ensuring high system performance. Finally, we verify the functionality and reconfigurability of the proposed CMOS circuit designs through Cadence simulations in 180 nm technology. In summary, the presented work lays a foundation for hardware-algorithm re-engineering of known biological circuits to suit application needs.
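As a rough algorithmic reading of OMS (not the paper's CMOS circuit), the sketch below flags a pixel as object motion when event activity in a small center window departs from the larger surround average that tracks global ego-motion; the window sizes and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def oms_response(event_frame, center=3, surround=15, thresh=0.2):
    """Center-surround comparison in the spirit of OMS: the surround
    average approximates global (ego) motion, so pixels whose local
    activity exceeds it are candidate object motion."""
    c = uniform_filter(event_frame.astype(float), size=center)
    s = uniform_filter(event_frame.astype(float), size=surround)
    return (c - s) > thresh      # boolean motion-segmentation mask

# Toy event-count frame with a small moving object.
frame = np.zeros((64, 64))
frame[20:24, 30:34] = 1.0
mask = oms_response(frame)
```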
  5. Capacitive sensing technology is widely applied in ubiquitous sensing, and its low power consumption enables a wide variety of Industry 4.0 applications. Capacitive sensors can be combined into arrays (CSAs) with mutual capacitive sensing to reduce external wiring requirements. For instance, the Texas Instruments (TI) MSP430FR2676 can capture and process data from an 8×8 capacitive sensor grid, but it supports only 64 sensors. We propose a design that daisy-chains CSAs via the I2C serial protocol to support 256 sensors, and we demonstrate a rapid prototyping implementation with 128 sensors. Planned future work is to implement the prototype on custom printed circuit boards (PCBs) and to maximize the data update frequency. This architecture is relevant to industries such as manufacturing and farming, enhancing precision in the interaction between robots and humans/objects.
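A minimal host-side sketch of polling daisy-chained CSA controllers over I2C follows, assuming hypothetical device addresses and a 64-byte frame register map; the real layout would be defined by the MSP430FR2676 firmware.

```python
from smbus2 import SMBus

# Hypothetical map: each daisy-chained CSA controller exposes a 64-byte
# frame (one byte per pad of its 8x8 grid) starting at register 0x00.
CSA_ADDRESSES = [0x40, 0x41, 0x42, 0x43]        # 4 x 64 = 256 sensors
FRAME_REG = 0x00

def read_all_frames(bus_id=1):
    """Read one full frame from every CSA controller on the bus."""
    frames = {}
    with SMBus(bus_id) as bus:
        for addr in CSA_ADDRESSES:
            data = []
            for offset in (0, 32):              # SMBus block reads max 32 bytes
                data += bus.read_i2c_block_data(addr, FRAME_REG + offset, 32)
            frames[addr] = data
    return frames
```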