Title: An Attack-Resilient and Energy-Adaptive Monitoring System for Smart Farms
In this work, we propose an energy-adaptive monitoring system for a solar sensor-based smart animal farm (e.g., cattle). The proposed smart farm system aims to maintain high-quality monitoring services using solar sensors with limited and fluctuating energy, in the presence of cyberattacks including false data injection, message dropping, and protocol non-compliance. We leverage Subjective Logic (SL) as the belief model to consider different types of uncertainties in opinions about sensed data. We develop two Deep Reinforcement Learning (DRL) schemes that leverage the design concept of uncertainty maximization in SL; the DRL agents run on gateways to collect high-quality sensed data with low uncertainty and high freshness. We assess the performance of the proposed energy-adaptive smart farm system in terms of accumulated reward, monitoring error, system overload, and battery maintenance level. We compare the two developed DRL schemes (i.e., multi-agent deep Q-learning, MADQN, and multi-agent proximal policy optimization, MAPPO) with greedy and random baseline schemes in choosing which sensed data to update, aiming to collect high-quality sensed data and achieve resilience against attacks. Our experiments demonstrate that MAPPO with the uncertainty maximization technique outperforms its counterparts.
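The sketch below illustrates the Subjective Logic building block described above: a binomial opinion with belief, disbelief, and uncertainty mass, plus an uncertainty-driven heuristic for deciding which sensors a gateway should poll first. The evidence-to-opinion mapping with prior weight W = 2 follows standard SL; the class name, sensor IDs, and ranking helper are illustrative assumptions rather than the paper's exact scheme.

```python
# A binomial Subjective Logic opinion plus an uncertainty-driven selection
# heuristic. The evidence mapping (prior weight W = 2) follows standard SL;
# the class, sensor IDs, and ranking helper are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float            # b: evidence that the sensed data is correct
    disbelief: float         # d: evidence that it is not
    uncertainty: float       # u: lack of evidence; b + d + u = 1
    base_rate: float = 0.5   # a: prior probability absent any evidence

    @classmethod
    def from_evidence(cls, r: float, s: float, W: float = 2.0) -> "Opinion":
        """Map positive (r) and negative (s) evidence counts to an opinion."""
        total = r + s + W
        return cls(r / total, s / total, W / total)

    def expected(self) -> float:
        """Projected probability P = b + a * u."""
        return self.belief + self.base_rate * self.uncertainty

def rank_for_update(opinions: dict) -> list:
    """Poll the most uncertain sensors first, so each energy-limited
    update buys the largest reduction in opinion uncertainty."""
    return sorted(opinions, key=lambda sid: opinions[sid].uncertainty, reverse=True)

ops = {"cow-17": Opinion.from_evidence(8, 1),   # well-observed sensor
       "cow-42": Opinion.from_evidence(1, 0)}   # sparse evidence -> high u
print(rank_for_update(ops))                     # ['cow-42', 'cow-17']
```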
Award ID(s):
2107450
PAR ID:
10447131
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 IEEE Global Communications Conference
Page Range / eLocation ID:
2776 to 2781
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    While Deep Reinforcement Learning (DRL) has emerged as a de facto approach to many complex experience-driven networking problems, it remains challenging to deploy DRL into real systems. Due to random exploration or half-trained deep neural networks during the online training process, the DRL agent may make unexpected decisions, which may degrade system performance or even crash the system. In this paper, we propose PnP-DRL, an offline-trained, plug-and-play DRL solution, which leverages the batch reinforcement learning approach to learn the best control policy from pre-collected transition samples without interacting with the system. After being trained without interacting with the system, our plug-and-play DRL agent starts working seamlessly, without additional exploration or possible disruption of the running system. We implement and evaluate our PnP-DRL solution on a prevalent experience-driven networking problem, Dynamic Adaptive Streaming over HTTP (DASH). Extensive experimental results show that 1) the existing batch reinforcement learning method has its limits; 2) our approach, PnP-DRL, significantly outperforms classical adaptive bitrate algorithms in average user Quality of Experience (QoE); and 3) PnP-DRL, unlike state-of-the-art online DRL methods, can be up and running without learning gaps, while achieving comparable performance.
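A minimal sketch of the batch (offline) reinforcement-learning idea behind PnP-DRL: the value function is fitted entirely from pre-collected transitions, with no interaction with the running system, and the resulting greedy policy is then deployed as-is. The coarse (buffer level, throughput bucket) state encoding, the toy transition batch, and the three-bitrate action space are illustrative assumptions, not the paper's actual DASH setup.

```python
# Fit a Q-table purely from a fixed batch of logged transitions (no system
# interaction), then deploy the greedy policy "plug and play". States,
# rewards, and the action space here are illustrative assumptions.
import random
from collections import defaultdict

GAMMA, ALPHA, EPOCHS = 0.95, 0.1, 200
N_BITRATES = 3                        # actions: low / medium / high bitrate

# Pre-collected transitions: (state, action, reward, next_state).
dataset = [((2, 1), 1, 0.8, (3, 1)), ((3, 1), 2, 1.0, (2, 0)),
           ((2, 0), 2, -0.5, (0, 0)), ((0, 0), 0, 0.2, (1, 0)),
           ((1, 0), 1, 0.6, (2, 1))]

Q = defaultdict(float)                # Q[(state, action)] -> value

for _ in range(EPOCHS):               # sweep the fixed batch repeatedly
    for s, a, r, s2 in random.sample(dataset, len(dataset)):
        target = r + GAMMA * max(Q[(s2, b)] for b in range(N_BITRATES))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])   # offline TD update

def policy(state) -> int:
    """Greedy bitrate choice from the offline-trained values."""
    return max(range(N_BITRATES), key=lambda a: Q[(state, a)])

print(policy((2, 1)))                 # deployable without further exploration
```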
  2. Optimal sensor placement is critical for enhancing the effectiveness of monitoring dynamical systems. Deterministic solutions do not reflect the effects of input and parameter uncertainty on the sensor placement. Using a Markov decision process (MDP) and a sensor placement agent, this study proposes a stochastic approach to maximize the gain from placing a fixed number of sensors within the system. Utilizing Deep Reinforcement Learning (DRL), the agent is trained by collecting interactive samples within the environment, which uses an information-theoretic reward function that is a measure, based on Shannon entropy, of the identifiability of the model parameters. The goal of the agent is to maximize its expected future reward by selecting, at each step, the action (placing a sensor) that provides the most information. This framework is validated using a synthetic model of a base-isolated structure. To consider the existing uncertainty in the parameters, a prior probability distribution is chosen (e.g., based on expert judgement or a preliminary study) for each model parameter. Further, a probabilistic model for the input is used to reflect input variability. In a Deep Q-network, a type of DRL algorithm, the agent learns a mapping from states (i.e., sensor configurations) to the "quality" of each action at that state, called "Q-values". This network is trained using samples of state, action, and reward by interacting with the environment. The modular property of the framework and the function approximation used in this study make it scalable to complex real-world applications of sensor placement problems in the presence of uncertainties.
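A minimal sketch of the information-theoretic reward described above: a candidate sensor location is scored by the expected reduction in Shannon entropy of a discretized model-parameter belief after one simulated noisy measurement. The scalar measurement model, parameter grid, and noise levels are illustrative assumptions.

```python
# Score candidate sensor locations by expected Shannon-entropy reduction over
# a discretized model parameter after one simulated measurement. The model
# y = theta + noise, the grid, and the noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta_grid = np.linspace(0.5, 2.0, 16)               # discretized parameter
prior = np.full(theta_grid.size, 1.0 / theta_grid.size)

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def expected_info_gain(sensor_noise: float, draws: int = 50) -> float:
    """Monte-Carlo estimate of the expected entropy drop from one reading."""
    gains = []
    for _ in range(draws):
        theta_true = rng.choice(theta_grid, p=prior)  # sample from the prior
        y = theta_true + rng.normal(0.0, sensor_noise)
        likelihood = np.exp(-0.5 * ((y - theta_grid) / sensor_noise) ** 2)
        posterior = prior * likelihood
        posterior /= posterior.sum()
        gains.append(entropy(prior) - entropy(posterior))
    return float(np.mean(gains))

# A low-noise placement is more informative, hence earns a higher DRL reward.
print(expected_info_gain(sensor_noise=0.1), expected_info_gain(sensor_noise=1.0))
```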
  3. The increasing penetration of renewable energy resources in distribution systems necessitates high-speed monitoring and control of voltage for ensuring reliable system operation. However, existing voltage control algorithms often make simplifying assumptions in their formulation, such as real-time availability of smart meter measurements (for monitoring), or real-time knowledge of all power injections (for control). This paper leverages the recent advances made in high-speed state estimation for real-time unobservable distribution systems to formulate a deep reinforcement learning (DRL)-based control algorithm that utilizes the state estimates alone to control the voltage of the entire system. The results obtained for a modified (renewable-rich) IEEE 34-node distribution feeder indicate that the proposed approach excels in monitoring and controlling the voltage of active distribution systems.
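A minimal sketch of the control-loop shape described above: the controller acts only on state estimates, never on raw measurements, and the reward penalizes estimated voltage deviation from 1.0 per unit. The linearized dV/dQ response, the noise level, and the proportional policy standing in for the trained DRL agent are illustrative assumptions.

```python
# Estimate-then-control loop: the agent sees only state estimates and is
# rewarded for keeping bus voltages near 1.0 p.u. The linearized response
# and the proportional stand-in policy are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_BUS = 5
V_NOM, V_BAND = 1.0, 0.05            # nominal voltage and +/-5% band (assumed)
SENS = 0.02 * np.eye(N_BUS)          # assumed diagonal dV/dQ sensitivity

def estimate_state(v_true: np.ndarray) -> np.ndarray:
    """Stand-in for the high-speed state estimator: truth plus noise."""
    return v_true + rng.normal(0.0, 0.005, size=N_BUS)

def reward(v_est: np.ndarray) -> float:
    """Penalize deviation from nominal, with a surcharge outside the band."""
    dev = np.abs(v_est - V_NOM)
    return float(-dev.sum() - 10.0 * np.sum(dev > V_BAND))

v = rng.uniform(0.93, 1.07, size=N_BUS)   # initial bus voltages (p.u.)
for step in range(3):
    v_est = estimate_state(v)             # the controller never sees v directly
    q_cmd = np.clip(V_NOM - v_est, -0.5, 0.5) / 0.02   # proportional stand-in
    v = v + SENS @ q_cmd                  # linearized voltage response
    print(f"step {step}: reward = {reward(estimate_state(v)):.3f}")
```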
  4. Mostafa Sahraei-Ardakani; Mingxi Liu (Ed.)
    This paper explores the application of deep reinforcement learning (DRL) to create a coordinating mechanism between synchronous generators (SGs) and distributed energy resources (DERs) for improved primary frequency regulation. Renewable energy sources, such as wind and solar, may be used to aid in frequency regulation of the grid. Without proper coordination between the sources, however, their participation only results in a delayed SG governor response and frequency deviation. The proposed DRL application uses a deep deterministic policy gradient (DDPG) agent to create a generalized coordinating signal for DERs. The coordinating signal communicates the degree of distributed participation to the SG governor, resolving the delayed governor response and reducing the system rate of change of frequency (ROCOF). The validity of the coordinating signal is demonstrated on a single-machine infinite-bus system. The use of DRL for signal creation is explored in an under-frequency event. While further exploration is needed for validation in large systems, the development of this concept shows promising results toward increased power grid stability.
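A minimal sketch of the under-frequency scenario described above, using a one-machine swing-equation model: fast DER droop support shallows the frequency nadir, which is the kind of effect the DDPG-generated coordinating signal is designed to manage. All constants are illustrative assumptions, and a fixed DER gain stands in for the learned coordinating signal.

```python
# Toy one-machine frequency model for an under-frequency event: swing
# equation with a lagged SG governor and instantaneous DER droop support.
# Constants are assumptions; a fixed DER gain replaces the DDPG signal.
H, D = 5.0, 1.0            # inertia constant and load damping (p.u., assumed)
R_SG, T_GOV = 0.05, 0.5    # SG droop and governor time constant (assumed)
DT, STEPS = 0.01, 2000
DP_LOAD = 0.1              # step load increase triggering the event

def simulate(der_gain: float):
    """Integrate df/dt = (P_gov + P_der - dP_load - D*f) / (2H);
    returns (frequency nadir, final deviation), both in p.u."""
    f, p_gov, nadir = 0.0, 0.0, 0.0
    for _ in range(STEPS):
        p_der = -der_gain * f                         # fast DER droop response
        p_gov += DT * ((-f / R_SG) - p_gov) / T_GOV   # lagged governor response
        f += DT * (p_gov + p_der - DP_LOAD - D * f) / (2 * H)
        nadir = min(nadir, f)
    return nadir, f

# DER support shallows the nadir; the very first instants of ROCOF remain
# inertia-limited, which is why governor/DER coordination matters.
print("no DER support      :", simulate(0.0))
print("with DER droop (g=5):", simulate(5.0))
```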
  5. With as many as 26 million streetlights in the United States, their greenhouse gas emissions are equivalent to those of 2.6 million cars. The proposed IoT controller integrates sensors to turn these streetlights into hubs for smart environment monitoring with effective energy usage. Energy conservation is one of the main concerns of the modern era, and solar energy can be utilized efficiently by a smart streetlight management system in place of conventional streetlight management techniques. Additionally, because streetlights are present throughout a city, they offer an opportunity to collect city-wide weather data. To this end, a solar-powered IoT-based smart street lighting and environmental monitoring system is proposed. The proposed energy-efficient IoT-based system uses a microcontroller to control light-emitting diode (LED) streetlights depending on lighting conditions and vehicle detection, ensuring that the streetlights are turned on only when needed.
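A minimal sketch of the controller logic described above: the LED duty cycle is chosen from ambient light and vehicle detection, dimming over an empty road to conserve solar-charged battery energy. The thresholds, the 1 Hz loop rate, and the simulated sensor/driver stubs are illustrative assumptions standing in for real microcontroller peripherals.

```python
# LED duty cycle driven by ambient light and vehicle detection; dims on an
# empty road to conserve solar-charged battery energy. Thresholds and the
# sensor/driver stubs are illustrative assumptions.
import random
import time

DARK_LUX = 30            # below this ambient level it is night (assumed)
DIM, FULL = 20, 100      # LED duty cycle in percent

def read_ambient_lux() -> float:
    return random.uniform(0.0, 100.0)      # stand-in for a lux sensor read

def vehicle_detected() -> bool:
    return random.random() < 0.2           # stand-in for a PIR/radar sensor

def set_led_duty(percent: int) -> None:
    print(f"LED duty -> {percent}%")       # stand-in for a PWM driver call

def control_step() -> int:
    """One control iteration; returns the duty cycle applied."""
    if read_ambient_lux() >= DARK_LUX:
        duty = 0         # daylight: light off, battery charging from solar
    elif vehicle_detected():
        duty = FULL      # night with traffic: full brightness
    else:
        duty = DIM       # night, empty road: dim to save energy
    set_led_duty(duty)
    return duty

for _ in range(5):       # a few iterations of the assumed 1 Hz control loop
    control_step()
    time.sleep(1.0)
```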