With as many as 26 million streetlights in the United States, their greenhouse gas emissions are equivalent to those of 2.6 million cars. The proposed IoT controller integrates sensors to turn these streetlights into hubs for smart environment monitoring with efficient energy usage. Energy conservation is a major concern in the modern era, and energy from the sun can be utilized efficiently by a smart streetlight management system in place of conventional streetlight management techniques. Additionally, because streetlights are present throughout a city, they offer the opportunity to collect city-wide weather data. To this end, a solar-powered IoT-based smart street lighting and environmental monitoring system is proposed. The proposed energy-efficient IoT-based system uses a microcontroller to control light-emitting diode (LED) streetlights depending on lighting conditions and vehicle detection, ensuring that the streetlights are turned on only when needed.
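The control rule described is simple enough to sketch. The following is a minimal illustration, not the authors' implementation: the LED stays off in daylight, idles dimmed after dark, and goes to full brightness on vehicle detection. The sensor names, thresholds, and duty cycles are assumptions.

```python
# Minimal sketch (not the authors' implementation) of the control rule the
# abstract describes: the LED is driven only when ambient light is low, at
# full brightness when a vehicle is detected and dimmed otherwise.

DARK_LUX_THRESHOLD = 50.0    # assumed ambient-light cutoff, in lux
DIM_DUTY, FULL_DUTY = 0.3, 1.0

def led_duty_cycle(ambient_lux: float, vehicle_detected: bool) -> float:
    """Return the PWM duty cycle for the LED streetlight."""
    if ambient_lux >= DARK_LUX_THRESHOLD:
        return 0.0                       # daylight: light stays off
    return FULL_DUTY if vehicle_detected else DIM_DUTY

# Example: dusk with no traffic -> light idles at 30% power.
assert led_duty_cycle(ambient_lux=12.0, vehicle_detected=False) == DIM_DUTY
```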
An Attack-Resilient and Energy-Adaptive Monitoring System for Smart Farms
In this work, we propose an energy-adaptive monitoring system for a solar sensor-based smart animal farm (e.g., cattle). The proposed smart farm system aims to maintain high-quality monitoring services using solar sensors with limited and fluctuating energy, in the face of cyberattack behaviors including false data injection, message dropping, and protocol non-compliance. We leverage Subjective Logic (SL) as the belief model to consider different types of uncertainty in opinions about sensed data. We develop two Deep Reinforcement Learning (DRL) schemes that leverage the design concept of uncertainty maximization in SL for DRL agents running on gateways to collect high-quality sensed data with low uncertainty and high freshness. We assess the performance of the proposed energy-adaptive smart farm system in terms of accumulated reward, monitoring error, system overload, and battery maintenance level. We compare the two developed DRL schemes, multi-agent deep Q-learning (MADQN) and multi-agent proximal policy optimization (MAPPO), with greedy and random baseline schemes in choosing which sensed data to update, so as to collect high-quality sensed data and achieve resilience against attacks. Our experiments demonstrate that MAPPO with the uncertainty maximization technique outperforms its counterparts.
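For readers unfamiliar with SL, the standard binomial-opinion mapping shows where the uncertainty mass that the agents reason about comes from: belief b = r/(r+s+W), disbelief d = s/(r+s+W), uncertainty u = W/(r+s+W) for positive evidence r and negative evidence s. The sketch below follows that common formula; the variable names and the ranking step are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the Subjective Logic (SL) binomial opinion used to
# score sensed data, via the standard evidence-to-opinion mapping.
# The freshness-style ranking at the end is an illustrative assumption.

W = 2.0  # non-informative prior weight in SL

def sl_opinion(r: float, s: float) -> tuple[float, float, float]:
    """Map positive evidence r and negative evidence s to (b, d, u)."""
    total = r + s + W
    return r / total, s / total, W / total

# A gateway agent preferring low-uncertainty readings might rank candidate
# sensors by the uncertainty mass u before scheduling an update:
readings = {"sensor_a": (8, 1), "sensor_b": (1, 0)}
ranked = sorted(readings, key=lambda k: sl_opinion(*readings[k])[2])
print(ranked)  # sensor_a has more evidence, hence lower uncertainty u
```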
- Award ID(s): 2107450
- PAR ID: 10447131
- Date Published:
- Journal Name: 2022 IEEE Global Communications Conference
- Page Range / eLocation ID: 2776 to 2781
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
While Deep Reinforcement Learning (DRL) has emerged as a de facto approach to many complex experience-driven networking problems, it remains challenging to deploy DRL into real systems. Due to random exploration or half-trained deep neural networks during the online training process, the DRL agent may make unexpected decisions, which may lead to system performance degradation or even system crashes. In this paper, we propose PnP-DRL, an offline-trained, plug-and-play DRL solution that leverages the batch reinforcement learning approach to learn the best control policy from pre-collected transition samples without interacting with the system. After being trained without interaction with systems, our plug-and-play DRL agent starts working seamlessly, without additional exploration or possible disruption of the running systems. We implement and evaluate our PnP-DRL solution on a prevalent experience-driven networking problem, Dynamic Adaptive Streaming over HTTP (DASH). Extensive experimental results show that 1) the existing batch reinforcement learning method has its limits; 2) our PnP-DRL approach significantly outperforms classical adaptive bitrate algorithms in average user Quality of Experience (QoE); and 3) PnP-DRL, unlike state-of-the-art online DRL methods, can be off and running without learning gaps, while achieving comparable performance.
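The core batch-RL idea is learning from a fixed set of transitions with no environment interaction. Below is a hedged, tabular sketch of fitted Q-iteration in that spirit; it stands in for, and is far simpler than, the deep networks PnP-DRL actually uses, and the action set is an assumed stand-in for DASH bitrate levels.

```python
# Sketch of offline (batch) Q-learning: the policy is fit purely from
# pre-collected transitions, then deployed greedily with no exploration.

import random
from collections import defaultdict

GAMMA, ALPHA, ACTIONS = 0.9, 0.1, [0, 1, 2]  # e.g., assumed bitrate levels

def train_offline(transitions, epochs=50):
    """transitions: list of (state, action, reward, next_state) tuples."""
    q = defaultdict(float)
    for _ in range(epochs):
        random.shuffle(transitions)          # replay the fixed batch
        for s, a, r, s_next in transitions:
            target = r + GAMMA * max(q[(s_next, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
    return q

# Deployment is then plug-and-play: act greedily, with no exploration.
def act(q, s):
    return max(ACTIONS, key=lambda a: q[(s, a)])
```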
-
The increasing penetration of renewable energy resources in distribution systems necessitates high-speed monitoring and control of voltage to ensure reliable system operation. However, existing voltage control algorithms often make simplifying assumptions in their formulation, such as real-time availability of smart meter measurements (for monitoring) or real-time knowledge of every power injection (for control). This paper leverages recent advances in high-speed state estimation for real-time unobservable distribution systems to formulate a deep reinforcement learning (DRL)-based control algorithm that utilizes the state estimates alone to control the voltage of the entire system. The results obtained for a modified (renewable-rich) IEEE 34-node distribution feeder indicate that the proposed approach excels in monitoring and controlling the voltage of active distribution systems.
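Conceptually, the pipeline feeds state estimates, rather than raw measurements, to the trained policy. A minimal sketch follows, in which estimate_states, policy, and apply_setpoints are hypothetical placeholders for the estimator, the DRL agent, and the grid interface; the voltage band and reward shaping are likewise assumptions, not the paper's formulation.

```python
# Hedged sketch of the monitoring-to-control loop: the DRL policy acts on
# estimated bus voltages alone, never on raw feeder measurements.

V_MIN, V_MAX = 0.95, 1.05   # assumed per-unit voltage limits

def control_step(measurements, policy, estimate_states, apply_setpoints):
    v_est = estimate_states(measurements)   # high-speed state estimation
    action = policy(v_est)                  # DRL agent sees estimates only
    apply_setpoints(action)                 # e.g., DER reactive setpoints
    # Illustrative reward shaping: penalize per-unit voltage excursions.
    return -sum(max(0.0, V_MIN - v, v - V_MAX) for v in v_est)
```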
-
This paper explores the application of deep reinforcement learning (DRL) to create a coordinating mechanism between synchronous generators (SGs) and distributed energy resources (DERs) for improved primary frequency regulation. Renewable energy sources, such as wind and solar, may be used to aid in frequency regulation of the grid. Without proper coordination between the sources, however, their participation merely delays the SG governor response and increases frequency deviation. The proposed DRL application uses a deep deterministic policy gradient (DDPG) agent to create a generalized coordinating signal for DERs. The coordinating signal communicates the degree of distributed participation to the SG governor, resolving the delayed governor response and reducing the system's rate of change of frequency (ROCOF). The validity of the coordinating signal is demonstrated on a single-machine finite bus system, and the use of DRL for signal creation is explored in an under-frequency event. While further exploration is needed for validation in large systems, this concept shows promising results toward increased power grid stabilization.
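To make the idea concrete: a DDPG actor is a deterministic policy over a continuous action, so the coordinating signal can be pictured as a bounded scalar computed from the frequency state. The linear "actor," its inputs, and its weights below are illustrative assumptions standing in for the paper's trained neural network.

```python
# Hedged sketch of the coordinating signal as a deterministic policy:
# map the frequency state to a degree of DER participation in [0, 1].

import math

def actor(state, weights, bias):
    """Deterministic policy: squash a linear score into [0, 1]."""
    score = sum(w * x for w, x in zip(weights, state)) + bias
    return 1.0 / (1.0 + math.exp(-score))   # sigmoid keeps signal bounded

# Assumed state: (frequency deviation in Hz, ROCOF in Hz/s) during an
# under-frequency event; larger deviations call for more DER support.
signal = actor(state=(-0.3, -0.8), weights=(-4.0, -2.0), bias=-1.0)
print(f"DER participation: {signal:.2f}")  # ~0.86 -> heavy DER support
```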
-
Emerging unmanned aerial vehicles (UAVs) such as quadcopters offer a reliable, controllable, and flexible way of ferrying information from energy-harvesting-powered IoT devices in remote areas to IoT edge servers. Nonetheless, the employment of UAVs faces a major challenge: a limited flight range due to the necessity of recharging, especially when the charging stations are situated at considerable distances from the monitoring area, resulting in inefficient energy usage. To mitigate these challenges, we propose placing multiple charging stations in the field, each equipped with a powerful energy harvester and acting as a cluster head that collects data from the sensor nodes under its jurisdiction. In this way, the UAV can remain in the field continuously and collect the data while charging. However, the intermittent and unpredictable nature of energy harvesting can render the information stored at cluster heads stale or even obsolete. To tackle this issue, we propose a Deep Reinforcement Learning (DRL)-based path-planning scheme for UAVs. The DRL agent gathers global information from the UAV to update its environmental state inputs and outputs the location of the next stop so as to optimize the overall age of information of the whole network. The experiments show that the proposed double deep Q-network (DDQN) can reliably reduce the age of information (AoI) by 3.7% compared with baseline techniques.
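Two textbook ingredients underlie this scheme: the per-cluster-head AoI dynamics and the double-DQN target. The sketch below shows both under stated assumptions; q_online and q_target are hypothetical placeholders for the paper's networks, and the step-wise AoI update is the standard formulation, not necessarily the paper's exact one.

```python
# Minimal sketch of AoI bookkeeping plus the double-DQN (DDQN) target
# used to score the UAV's choice of next stop among cluster heads.

GAMMA = 0.95

def aoi_step(age, visited):
    """AoI grows by one step everywhere; visiting a head resets its age."""
    return [0 if i == visited else a + 1 for i, a in enumerate(age)]

def ddqn_target(r, s_next, q_online, q_target, actions):
    """DDQN: the online net selects the action, the target net evaluates it."""
    best = max(actions, key=lambda a: q_online(s_next, a))
    return r + GAMMA * q_target(s_next, best)

# Example: after visiting head 1, its age resets while the others grow.
print(aoi_step([3, 5, 2], visited=1))  # -> [4, 0, 3]
```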