Document Type: Research Paper
Authors
1 Department of Water Engineering, Faculty of Agriculture, University of Zanjan, Zanjan, Iran.
2 Water Sciences and Engineering Department, Faculty of Agriculture, Bu-Ali Sina University, Hamedan, Iran.
3 Department of Water Engineering, Faculty of Agriculture, University of Zanjan, Zanjan, Iran.
EXTENDED ABSTRACT
Many people worldwide experience water scarcity in their daily lives, making water resource management a serious challenge for communities. A significant share of the water consumed in a basin goes to agriculture, and irrigation and drainage networks, much like the veins of the human body, serve as the main arteries for conveying and distributing water across an agricultural area. Improper operation of these networks leads to significant water losses, so their efficient management is crucial for substantially reducing those losses.
This research develops an inverse reinforcement learning algorithm integrated with a hydrodynamic canal simulation model to address this issue.
To achieve this objective, the inverse reinforcement learning algorithm learns a reward function from the actions taken by the learning agent (water control structures such as weirs and intakes) in the various states recorded from an expert operator's experience. Using this learned reward function, the actions corresponding to different states are then extracted. Based on operational data and a set of defined scenarios, operational patterns were extracted for the E1-R1 canal of the Dez network.
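The extended abstract does not give the algorithmic details of the inverse reinforcement learning step; the sketch below is only an illustrative, minimal implementation of one common variant (tabular maximum-entropy IRL with a linear reward over state-action features). The discretization of canal states and gate actions, and all names such as `P`, `phi`, `expert_sa`, and `maxent_irl`, are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def soft_value_iteration(P, reward, gamma=0.95, n_iter=200):
    """Soft (maximum-entropy) value iteration; returns a stochastic policy pi[s, a]."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        Q = reward + gamma * P @ V                      # Q[s, a] under current value estimate
        Q_max = Q.max(axis=1, keepdims=True)
        V = (Q_max + np.log(np.exp(Q - Q_max).sum(axis=1, keepdims=True))).ravel()
    pi = np.exp(Q - V[:, None])                         # Boltzmann (soft-max) policy
    return pi / pi.sum(axis=1, keepdims=True)

def visitation_frequencies(P, pi, p0, horizon=50):
    """Average state-action visitation frequencies under policy pi over a finite horizon."""
    d = p0.copy()
    D = np.zeros_like(pi)
    for _ in range(horizon):
        D += d[:, None] * pi
        d = np.einsum("s,sa,sat->t", d, pi, P)          # propagate the state distribution
    return D / horizon

def maxent_irl(P, phi, expert_sa, p0, lr=0.1, epochs=100):
    """Fit linear reward weights so the induced soft-optimal policy matches the expert.

    P         : transition probabilities, shape (n_states, n_actions, n_states)
    phi       : state-action features,    shape (n_states, n_actions, n_features)
    expert_sa : list of (state, action) pairs demonstrated by the expert operator
    p0        : initial state distribution, shape (n_states,)
    """
    mu_expert = np.mean([phi[s, a] for s, a in expert_sa], axis=0)  # expert feature counts
    w = np.zeros(phi.shape[-1])
    for _ in range(epochs):
        reward = phi @ w                                 # linear reward r(s, a) = w . phi(s, a)
        pi = soft_value_iteration(P, reward)
        D = visitation_frequencies(P, pi, p0)
        mu_policy = np.einsum("sa,saf->f", D, phi)       # model's expected feature counts
        w += lr * (mu_expert - mu_policy)                # max-ent log-likelihood gradient step
    return w, soft_value_iteration(P, phi @ w)           # learned reward weights and policy
```

The returned policy gives, for each discretized state, the actions the control structures would take, mirroring the two-step procedure described above (learn the reward, then extract the state-to-action mapping).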
The findings of this research showed that in most scenarios, the calculated values of the delivery efficiency index were close to one. The key parameters affecting this index are the number of off-takes, the required (requested) discharge, and the actual delivered discharge. According to the standard ranges of the Molden and Gates performance evaluation indices, delivery efficiency values between 0.85 and 1 indicate that water is delivered to consumers efficiently, minimizing losses and excess delivery. In this study, the minimum and maximum delivery efficiencies were 0.97 and 1, respectively, which fall within the standard range and indicate favorable performance. The efficiency values for Off-takes No. 1, 2, 5, and 6 were ideal across all scenarios, and the average delivery efficiencies for Off-takes No. 1 to 6 were 1, 1, 0.98, 0.98, 1, and 1, respectively.
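As a point of reference, the sketch below computes a delivery efficiency indicator in the Molden and Gates style, assuming the commonly cited formulation in which the per-step efficiency equals Q_required / Q_delivered wherever more water is delivered than required (and 1 otherwise), averaged over time steps and then over off-takes. The array layout and the example values are hypothetical, not data from the E1-R1 canal.

```python
import numpy as np

def delivery_efficiency(q_required, q_delivered):
    """Molden and Gates style delivery efficiency.

    q_required, q_delivered : arrays of shape (n_timesteps, n_offtakes)
    Returns (efficiency per off-take, overall time- and space-averaged efficiency).
    """
    q_required = np.asarray(q_required, dtype=float)
    q_delivered = np.asarray(q_delivered, dtype=float)
    # p_F = Q_required / Q_delivered where more water is delivered than required, else 1
    with np.errstate(divide="ignore", invalid="ignore"):
        p_f = np.where(q_delivered > q_required, q_required / q_delivered, 1.0)
    per_offtake = p_f.mean(axis=0)       # time average for each off-take
    return per_offtake, per_offtake.mean()

# Hypothetical example: 3 time steps and 2 off-takes (discharges in m^3/s)
q_req = [[0.50, 0.30], [0.50, 0.30], [0.40, 0.30]]
q_del = [[0.52, 0.30], [0.50, 0.31], [0.40, 0.30]]
print(delivery_efficiency(q_req, q_del))
```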
The cumulative absolute error values all remained below 10%. For the first set of results, the minimum and maximum errors were 0.5% and 2.4%, with an average of 1.65% across all scenarios; for the second set, they were 1% and 9.4%, with an average of 4.34%. Both sets of results fall within the acceptable performance range and indicate good performance.
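The exact definition of the cumulative absolute error used in the study is not given in the extended abstract; the sketch below assumes one common formulation, namely the time-averaged absolute deviation of a controlled variable (e.g., water depth or delivered discharge) from its target, expressed as a percentage of that target. The function name and example values are illustrative only.

```python
import numpy as np

def cumulative_absolute_error(observed, target, dt=1.0):
    """Time-averaged absolute deviation from a target, as a percentage.

    observed : time series of the simulated variable (e.g., depth or discharge)
    target   : set-point for that variable (scalar or series of the same length)
    dt       : time-step weight; the sum is normalized by the total simulated time
    """
    observed = np.asarray(observed, dtype=float)
    target = np.broadcast_to(np.asarray(target, dtype=float), observed.shape)
    total_time = dt * observed.size
    return 100.0 * (dt * np.abs(observed - target) / target).sum() / total_time

# Hypothetical example: simulated depths (m) around a 1.20 m target depth
depths = [1.21, 1.19, 1.22, 1.20, 1.18]
print(f"{cumulative_absolute_error(depths, 1.20):.2f} %")
```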
Based on the various defined scenarios and the results of simulations and learning, all evaluation indices fell within the "good" performance class. This confirms the successful performance of the inverse reinforcement learning algorithm and validates the learned reward function.
The evaluation of water depth under the inverse reinforcement learning method showed that, in all simulated scenarios, water level variations remained within the permissible depth range and the average depth fluctuations were centered on the target depth.
All authors contributed equally to the conceptualization of the article and writing of the original and subsequent drafts.
Data available on request from the corresponding author.
The authors declare no conflict of interest.