Title: Optimizing Irrigation Efficiency using Deep Reinforcement Learning in the Field
Agricultural irrigation is a significant contributor to freshwater consumption. However, the irrigation systems currently used in the field are not efficient. They rely mainly on soil moisture sensors and the experience of growers, but do not account for future soil moisture loss. Predicting soil moisture loss is challenging because it is influenced by numerous factors, including soil texture, weather conditions, and plant characteristics. This article proposes a solution to improve irrigation efficiency, called DRLIC (deep reinforcement learning for irrigation control). DRLIC is an irrigation system that uses deep reinforcement learning (DRL) to optimize its performance. The system employs a neural network, known as the DRL control agent, which learns an optimal control policy that considers both the current soil moisture measurement and the future soil moisture loss. We introduce an irrigation reward function that enables our control agent to learn from previous experiences. However, there may be instances in which the output of our DRL control agent is unsafe, such as irrigating too much or too little. To avoid damaging the health of the plants, we implement a safety mechanism that employs a soil moisture predictor to estimate the outcome of each action. If the predicted outcome is deemed unsafe, we perform a relatively conservative action instead. To demonstrate the real-world applicability of our approach, we develop an irrigation system that comprises sprinklers, sensing and control nodes, and a wireless network. We evaluate the performance of DRLIC by deploying it in a testbed consisting of six almond trees. During a 15-day in-field experiment, we compare the water consumption of DRLIC with that of a widely used irrigation scheme. Our results indicate that DRLIC outperforms the traditional irrigation method, achieving water savings of up to 9.52%.
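As a concrete illustration of the safety mechanism described above, the sketch below shows the check-then-fallback logic in Python. All names (the agent, the predictor, and the moisture bounds) are hypothetical placeholders, not DRLIC's actual implementation.

```python
# Minimal sketch of a DRLIC-style safety check. The agent, predictor, and
# moisture bounds are hypothetical placeholders, not the paper's code.

SAFE_LOW, SAFE_HIGH = 0.20, 0.35   # assumed acceptable soil moisture band

def safe_irrigation_action(agent, predictor, state, conservative_action):
    """Use the DRL agent's action unless the soil moisture predictor
    deems its outcome unsafe; otherwise fall back to a conservative one."""
    action = agent.act(state)                     # proposed irrigation amount
    predicted = predictor.predict(state, action)  # estimated resulting moisture
    if predicted < SAFE_LOW or predicted > SAFE_HIGH:
        return conservative_action                # avoid over- or under-watering
    return action
```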
Award ID(s):
2239458 2008837
PAR ID:
10536198
Author(s) / Creator(s):
;
Publisher / Repository:
ACM
Date Published:
Journal Name:
ACM Transactions on Sensor Networks
Volume:
20
Issue:
4
ISSN:
1550-4859
Page Range / eLocation ID:
1 to 34
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    While Deep Reinforcement Learning has emerged as a de facto approach to many complex experience-driven networking problems, it remains challenging to deploy DRL into real systems. Due to random exploration or half-trained deep neural networks during the online training process, the DRL agent may make unexpected decisions, which may lead to system performance degradation or even a system crash. In this paper, we propose PnP-DRL, an offline-trained, plug-and-play DRL solution that leverages the batch reinforcement learning approach to learn the best control policy from pre-collected transition samples without interacting with the system. After being trained without interaction with the system, our plug-and-play DRL agent starts working seamlessly, without additional exploration or possible disruption of the running systems. We implement and evaluate our PnP-DRL solution on a prevalent experience-driven networking problem, Dynamic Adaptive Streaming over HTTP (DASH). Extensive experimental results show that 1) the existing batch reinforcement learning method has its limits; 2) our approach, PnP-DRL, significantly outperforms classical adaptive bitrate algorithms in average user Quality of Experience (QoE); and 3) PnP-DRL, unlike state-of-the-art online DRL methods, can be off and running without learning gaps, while achieving comparable performance.
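    The abstract does not spell out PnP-DRL's exact batch algorithm; as a hedged illustration of the underlying idea, learning only from pre-collected transitions, the sketch below shows a generic offline Q-learning update in PyTorch. All names are illustrative.

```python
import torch

# Generic batch (offline) Q-learning update over a fixed dataset of
# transitions (s, a, r, s', done) -- no interaction with the live system.
# This illustrates batch RL in general, not PnP-DRL's exact algorithm.

def batch_q_update(q_net, target_net, batch, optimizer, gamma=0.99):
    s, a, r, s_next, done = batch   # tensors drawn from the pre-collected set
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = torch.nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```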
  2. Bogomolov, S.; Parker, D. (Ed.)
    Safety is a critical concern for the next generation of autonomy that is likely to rely heavily on deep neural networks for perception and control. This paper proposes a method to repair unsafe ReLU DNNs in safety-critical systems using reachability analysis. Our repair method uses reachability analysis to calculate the unsafe reachable domain of a DNN, and then uses a novel loss function to construct its distance to the safe domain during the retraining process. Since subtle changes of the DNN parameters can cause unexpected performance degradation, we also present a minimal repair approach where the DNN deviation is minimized. Furthermore, we explore applications of our method to repair DNN agents in deep reinforcement learning (DRL) with seamless integration with learning algorithms. Our method is evaluated on the ACAS Xu benchmark and a rocket lander system against the state-of-the-art method ART. Experimental results show that our repair approach can generate provably safe DNNs on multiple safety specifications with negligible performance degradation, even in the absence of training data (Code is available online at https://github.com/Shaddadi/veritex.git). 
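    The paper's loss is computed over reachable sets; as a simplified, per-sample stand-in for the idea of penalizing a network's distance from the safe domain during retraining, one might add a term like the following (the interval bounds and names are assumptions):

```python
import torch

# Illustrative per-sample repair loss: outputs falling inside an assumed
# unsafe interval are penalized by their distance to the nearest safe
# boundary. The paper computes this over reachable domains, not samples.

def repair_loss(outputs, task_loss, unsafe_low, unsafe_high, alpha=1.0):
    in_unsafe = (outputs > unsafe_low) & (outputs < unsafe_high)
    dist = torch.minimum(outputs - unsafe_low, unsafe_high - outputs)
    safety_penalty = (dist * in_unsafe).sum()   # zero for safe outputs
    return task_loss + alpha * safety_penalty
```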
  3. Deep Reinforcement Learning (DRL) has shown promise for voltage control in power systems due to its speed and model-free nature. However, learning optimal control policies through trial and error on a real grid is infeasible due to the mission-critical nature of power systems. Instead, DRL agents are typically trained on a simulator, which may not accurately represent the real grid. This discrepancy can lead to suboptimal control policies and raises concerns for power system operators. In this paper, we revisit the problem of RL-based voltage control and investigate how model inaccuracies affect the performance of the DRL agent. Extensive numerical experiments are conducted to quantify the impact of model inaccuracies on learning outcomes. Specifically, we focus on techniques that enable the DRL agent to learn robust policies that still perform well in the presence of model errors. Furthermore, the impact of the agent's decisions on the overall system loss is analyzed to provide additional insight into the control problem. This work aims to address the concerns of power system operators and make DRL-based voltage control more practical and reliable.
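    The abstract does not name the robustness techniques it studies; one common way to train a policy that tolerates model errors is domain randomization, sketched below with hypothetical environment and agent interfaces. This is an assumption for illustration, not the paper's method.

```python
import random

# Hypothetical domain-randomized training episode: simulator parameters
# (e.g., line impedances, load scaling) are resampled within assumed error
# bounds each episode so the agent learns a policy robust to model error.

def randomized_episode(make_env, agent, param_ranges):
    params = {k: random.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
    env = make_env(**params)
    state, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = agent.act(state)
        state, reward, done = env.step(action)
        agent.observe(state, action, reward, done)  # learning update hook
        total_reward += reward
    return total_reward
```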
  4. Narodytska, Nina; Ruemmer, Philipp (Ed.)
    Deep reinforcement learning (DRL) is a powerful machine learning paradigm for generating agents that control autonomous systems. However, the “black box” nature of DRL agents limits their deployment in real-world safety-critical applications. A promising approach for providing strong guarantees on an agent's behavior is to use Neural Lyapunov Barrier (NLB) certificates, which are learned functions over the system whose properties indirectly imply that an agent behaves as desired. However, NLB-based certificates are typically difficult to learn and even more difficult to verify, especially for complex systems. In this work, we present a novel method for training and verifying NLB-based certificates for discrete-time systems. Specifically, we introduce a technique for certificate composition, which simplifies the verification of highly complex systems by strategically designing a sequence of certificates. When jointly verified with neural network verification engines, these certificates provide a formal guarantee that a DRL agent both achieves its goals and avoids unsafe behavior. Furthermore, we introduce a technique for certificate filtering, which significantly simplifies the process of producing formally verified certificates. We demonstrate the merits of our approach with a case study on providing safety and liveness guarantees for a DRL-controlled spacecraft.
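    As a hedged sketch of what an NLB-style certificate asserts, the spot check below samples states and tests two standard barrier conditions: the certificate value does not increase along closed-loop steps, and unsafe states sit above a level that reachable states stay below. Real guarantees come from neural network verification over the whole domain, not sampling; all names here are assumptions.

```python
# Sampling-based sanity check for a candidate certificate V. This only
# spot-checks sampled states; formal verification engines are needed to
# prove the conditions over the entire domain.

def spot_check_certificate(V, step, states, unsafe_states, level):
    next_states = step(states)  # one closed-loop transition per sampled state
    decrease_ok = (V(next_states) <= V(states)).all()   # no increase along steps
    separation_ok = (V(unsafe_states) > level).all()    # unsafe set above threshold
    return bool(decrease_ok and separation_ok)
```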
    We propose a deep reinforcement learning (DRL) methodology for the tracking, obstacle avoidance, and formation control of nonholonomic robots. By separating vision-based control into a perception module and a controller module, we can train a DRL agent without sophisticated physics or 3D modeling. In addition, the modular framework avoids costly retraining of an image-to-action end-to-end neural network and provides flexibility in transferring the controller to different robots. First, we train a convolutional neural network (CNN) to accurately localize in an indoor setting with a dynamic foreground/background. Then, we design a new DRL algorithm named Momentum Policy Gradient (MPG) for continuous control tasks and prove its convergence. We also show that MPG is robust at tracking varying leader movements and can naturally be extended to problems of formation control. Leveraging reward shaping, features such as collision and obstacle avoidance can be easily integrated into a DRL controller.
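    The abstract names MPG but does not give its update rule; purely as an illustration of combining momentum with a policy-gradient estimate, a hypothetical update might smooth successive gradients before applying them:

```python
import torch

# Hypothetical momentum-smoothed policy-gradient step. This illustrates the
# general idea only; it is not the paper's actual MPG update rule.

def mpg_step(policy, log_probs, returns, momentum_buf, beta=0.9, lr=1e-3):
    loss = -(log_probs * returns).mean()          # REINFORCE-style surrogate loss
    params = list(policy.parameters())
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g, m in zip(params, grads, momentum_buf):
            m.mul_(beta).add_(g, alpha=1 - beta)  # exponential moving average
            p.add_(m, alpha=-lr)                  # descend along smoothed gradient
```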