Title: Battery-Free Camera Occupancy Detection System
Occupancy detection systems are commonly equipped with high-quality cameras and processors with high computational power to run detection algorithms. This paper presents a human occupancy detection system that uses battery-free cameras and a deep learning model implemented on a low-cost hub to detect human presence. Our low-resolution camera harvests energy from ambient light and transmits data to the hub using backscatter communication. We implement the state-of-the-art YOLOv5 detection network, which offers high detection accuracy and fast inference speed, on a Raspberry Pi 4 Model B. We achieve an inference speed of ∼100 ms per image and an overall detection accuracy of >90% with only 2 GB of CPU RAM on the Raspberry Pi. Our experimental results also demonstrate that the detection is robust to noise, illuminance, occlusion, and angle of depression.
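The final occupancy decision from a detector's output can be sketched as a simple post-processing step. This is a minimal illustration, not the paper's implementation: the detection tuples, the `person` class label, and the 0.5 confidence threshold are all assumptions about what a YOLOv5-style detector would emit.

```python
# Hypothetical post-processing for occupancy detection: given YOLOv5-style
# detections as (class_name, confidence, bbox) tuples, decide whether a
# human is present. The threshold value of 0.5 is illustrative.

def detect_occupancy(detections, conf_threshold=0.5):
    """Return True if any 'person' detection exceeds the confidence threshold."""
    return any(cls == "person" and conf >= conf_threshold
               for cls, conf, _bbox in detections)

detections = [
    ("person", 0.91, (10, 20, 60, 120)),
    ("chair", 0.80, (100, 40, 160, 140)),
]
print(detect_occupancy(detections))  # True: a confident person box exists
```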
Award ID(s):
1823148
NSF-PAR ID:
10303866
Journal Name:
EMDL 2021: 5th International Workshop on Embedded and Mobile Deep Learning
Sponsoring Org:
National Science Foundation
More Like this
  1. Skateboarding as a method of transportation has become prevalent, which has increased the occurrence and likelihood of pedestrian–skateboarder collisions and near-collision scenarios in shared-use roadway areas. Collisions between pedestrians and skateboarders can result in significant injury. New approaches are needed to evaluate shared-use areas prone to hazardous pedestrian–skateboarder interactions, and to perform real-time, in situ (e.g., on-device) predictions of pedestrian–skateboarder collisions as road conditions vary due to changes in land usage and construction. A mechanism called Surrogate Safety Measures for skateboarder–pedestrian interaction can be computed to evaluate high-risk conditions on roads and sidewalks using deep learning object detection models. In this paper, we present the first ever skateboarder–pedestrian safety study leveraging deep learning architectures. We review and analyze state-of-the-art deep learning architectures, namely the Faster R-CNN and two variants of the Single Shot Multibox Detector (SSD) model, to select the model that best suits each of two different tasks: automated calculation of Post Encroachment Time (PET) and finding hazardous conflict zones in real-time. We also contribute a new annotated dataset of skateboarder–pedestrian interactions collected for this study. Both our selected models can detect and classify pedestrians and skateboarders correctly and efficiently. However, due to differences in their architectures and based on the advantages and disadvantages of each model, the two models were individually used to perform two different sets of tasks. Due to its improved accuracy, the Faster R-CNN model was used to automate the calculation of post encroachment time, whereas, due to its extremely fast inference rate, the Single Shot Multibox MobileNet V1 model was used to determine hazardous regions in real-time.
An outcome of this work is a model that can be deployed on low-cost, small-footprint mobile and IoT devices at traffic intersections with existing cameras to perform on-device inferencing for in situ Surrogate Safety Measurement (SSM), such as Time-To-Collision (TTC) and Post Encroachment Time (PET). SSM values that exceed a hazard threshold can be published to a Message Queuing Telemetry Transport (MQTT) broker, where messages are received by an intersection traffic signal controller for real-time signal adjustment, thus contributing to state-of-the-art vehicle and pedestrian safety at hazard-prone intersections.
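The PET metric described above can be sketched in a few lines: PET is the time gap between the first road user leaving a conflict zone and the second entering it. This is an illustrative sketch only; the timestamps and the 1.5-second hazard threshold are assumptions, not values from the study.

```python
# Illustrative Post Encroachment Time (PET) computation. PET is the time gap
# between the first road user exiting a conflict zone and the second user
# entering it; smaller values indicate a nearer miss. The 1.5 s hazard
# threshold below is a hypothetical choice for demonstration.

def post_encroachment_time(first_exit_ts, second_entry_ts):
    """PET in seconds, from two zone-event timestamps."""
    return second_entry_ts - first_exit_ts

def is_hazardous(pet, threshold=1.5):
    """Flag an interaction whose PET falls below the hazard threshold."""
    return pet < threshold

pet = post_encroachment_time(12.0, 12.8)
print(round(pet, 2), is_hazardous(pet))  # 0.8 True
```

A value like this, once flagged, is what would be published to the MQTT broker for the signal controller to act on.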
  2. In recent years, robotic technologies, e.g., drones or autonomous cars, have been applied to the agricultural sector to improve the efficiency of typical agricultural operations. Some agricultural tasks that are ideal for robotic automation are yield estimation and robotic harvesting. For these applications, an accurate and reliable image-based detection system is critically important. In this work, we present a low-cost strawberry detection system based on convolutional neural networks. Ablation studies are presented to validate the choice of hyperparameters, framework, and network structure. Additional modifications to both the training data and network structure that improve precision and execution speed, e.g., input compression, image tiling, color masking, and network compression, are discussed. Finally, we present a final network implementation on a Raspberry Pi 3B that demonstrates a detection speed of 1.63 frames per second and an average precision of 0.842.
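The image-tiling idea mentioned above can be illustrated with a short sketch: splitting a frame into fixed-size tiles so a small detector runs on each tile separately. The frame dimensions and the 2×2 grid are illustrative assumptions, not details from the paper.

```python
# Sketch of non-overlapping image tiling: compute the (row, col) grid index
# and pixel origin of each tile. A 640x480 frame split into 320x240 tiles
# yields a 2x2 grid; all sizes here are illustrative.

def tile_origins(height, width, tile_h, tile_w):
    """Return (row, col, y0, x0) for each non-overlapping tile origin."""
    tiles = []
    for r, y0 in enumerate(range(0, height, tile_h)):
        for c, x0 in enumerate(range(0, width, tile_w)):
            tiles.append((r, c, y0, x0))
    return tiles

print(tile_origins(480, 640, 240, 320))
# [(0, 0, 0, 0), (0, 1, 0, 320), (1, 0, 240, 0), (1, 1, 240, 320)]
```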
  3. In this paper, we present the design and implementation of a smart irrigation system using Internet of Things (IoT) technology, which can be used for automating the irrigation process in agricultural fields. It is expected that this system will create a better opportunity for farmers to irrigate their fields efficiently, as well as eliminate under-watering of the field, which could stress the plants. The developed system is organized into three parts: sensing side, cloud side, and user side. We used Microsoft Azure IoT Hub as an underlying infrastructure to coordinate the interaction between the three sides. The sensing side uses a Raspberry Pi 3 device, a low-cost, credit-card-sized computer, to monitor in near real time the soil moisture, air temperature, relative humidity, and other weather parameters of the field of interest. Sensor readings are logged and transmitted to the cloud side. At the cloud side, the received sensing data is used by the irrigation scheduling model to determine when and for how long the water pump should be turned on, based on a user-predefined threshold. The user side is developed as an Android mobile app, which is used to control the operations of the water pump with voice recognition capabilities. Finally, this system was evaluated using various performance metrics, such as latency and scalability.
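The threshold-based scheduling step described above can be sketched as a small decision function. This is a hedged illustration of the general idea; the 30% moisture threshold and the fixed run time are hypothetical values, not parameters from the system.

```python
# Minimal sketch of a threshold-based irrigation decision: turn the pump on
# when soil moisture drops below a user-predefined threshold. The threshold
# (30%) and per-cycle run time (15 minutes) are illustrative assumptions.

def irrigation_decision(soil_moisture_pct, threshold_pct=30.0,
                        minutes_per_cycle=15):
    """Return (pump_on, run_minutes) from the latest soil moisture reading."""
    if soil_moisture_pct < threshold_pct:
        return True, minutes_per_cycle
    return False, 0

print(irrigation_decision(22.5))  # (True, 15): soil too dry, run the pump
print(irrigation_decision(41.0))  # (False, 0): moisture above threshold
```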
  4. The pervasive operation of customer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about their privacy threats to the public. In recent years, privacy invasion events caused by customer drones have been frequently reported. Given such a fact, timely detection of invading drones has become an emerging task. Existing solutions using active radar, video, or acoustic sensors are usually too costly (especially for individuals) or exhibit various constraints (e.g., requiring visual line of sight). Recent research on drone detection with passive RF signals provides an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which makes it work only under very constrained conditions. Support for more common scenarios, i.e., non-line-of-sight (NLOS), is still missing from low-cost solutions. In this paper, we propose a novel detection system for privacy invasion caused by customer drones. Our system features accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motions and RF signals under the NLOS condition, we find that RF signatures of drones are somewhat “amplified” by multipaths in NLOS. Based on this observation, we design a two-step solution which first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones. Our experimental results show that LOS and NLOS signals can be identified at accuracy rates of 98.4% and 96%, respectively. Our drone detection rate for the NLOS condition is above 97% with a system implemented on a Raspberry Pi 3 B+.
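The first step of the two-step pipeline above can be sketched as a channel classifier over a window of RSS measurements. This is a simplified stand-in for the paper's classifier: using RSS variance as the sole feature and the threshold value are illustrative choices, motivated only by the observation that multipath in NLOS tends to make the signal fluctuate more.

```python
# Hypothetical LOS/NLOS pre-classification over a window of RSS readings
# (in dBm). NLOS multipath tends to inflate RSS variance, while a LOS
# channel stays flatter; the variance threshold of 4.0 is illustrative.
from statistics import pvariance

def classify_channel(rss_window, variance_threshold=4.0):
    """Return 'NLOS' when the RSS variance exceeds the threshold, else 'LOS'."""
    return "NLOS" if pvariance(rss_window) > variance_threshold else "LOS"

print(classify_channel([-40, -41, -40, -39, -40]))  # LOS: flat signal
print(classify_channel([-40, -55, -35, -60, -38]))  # NLOS: heavy fluctuation
```

In the paper's design, only after this split does the deep learning stage extract drone signatures from the classified measurements.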
  5. Detecting flash floods in real-time and taking rapid action are of utmost importance to save human lives and prevent the loss of infrastructure and personal property in a smart city. In this paper, we develop a low-cost, low-power cyber-physical system prototype using a Raspberry Pi camera to detect the rising water level. We deployed the system in the real world and collected data in different environmental conditions (early morning in the presence of fog, sunny afternoon, late afternoon with the sun setting). We employ image processing and text recognition techniques to detect the rising water level and articulate several challenges in deploying such a system in a real environment. We envision this prototype design will pave the way for mass deployment of flash flood detection systems with minimal human intervention.
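The alerting logic downstream of the image processing above can be sketched as a check over successive gauge readings. This is an illustrative sketch only; the reading values and the 2.0 m flood threshold are assumptions, and the readings stand in for whatever the text recognition stage extracts from the camera frames.

```python
# Illustrative rising-water alert: compare successive water-level readings
# (e.g., recognized from camera frames of a staff gauge) against a flood
# threshold. The 2.0 m threshold and sample readings are assumptions.

def flood_alert(readings_m, threshold_m=2.0):
    """Alert when the level crosses the threshold while still rising."""
    for prev, curr in zip(readings_m, readings_m[1:]):
        if curr >= threshold_m and curr > prev:
            return True
    return False

print(flood_alert([1.2, 1.6, 2.1]))  # True: level rising past 2.0 m
print(flood_alert([1.2, 1.1, 0.9]))  # False: level receding
```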