Article Type: Research Paper
Authors
1 Department of Water Science and Engineering, Faculty of Agriculture and Natural Resources, Imam Khomeini International University, Qazvin, Iran
2 Department of Water Science and Engineering, Faculty of Agriculture and Natural Resources, Imam Khomeini International University, Qazvin, Iran
Abstract
Keywords
Subjects
Article Title [English]
Assessment of canopy cover fraction in sugar beet field using unmanned aerial vehicle imagery and different image segmentation methods
Authors [English]
Abstract [English]
Canopy cover fraction is one of the most important criteria for assessing crop growth and yield and is an input to most crop models. It can be measured more easily from visible-spectrum imagery than with methods that depend on field observations or on image processing beyond the visible spectrum. In this study, drone images of a sugar beet field at the Lindau center of plant sciences research, Switzerland, acquired in the 2015-2016 cropping season on four dates from late May to late June, were used. Six plant discrimination indices and three distinct thresholding algorithms were combined to segment sugar beet vegetation. Among the 18 resulting methods, the best six were then evaluated by comparing their estimates with ground truth values in 30 different regions of the field on four dates, from the beginning of the four-leaf stage to the end of the six-leaf stage. Results showed that the ExG, GLI, and RGBVI indices, in combination with the Otsu and Ridler-Calvard thresholding algorithms, performed best in vegetation segmentation. For the most accurate method, ExG&Otsu, the NRMSE and R2 statistics were 5.13% and 0.96, respectively. Conversely, the RGBVI&RC method was the least accurate of these six in the initial evaluation, with NRMSE and R2 values of 8.18% and 0.87, respectively. Comparative analysis of the statistical indicators showed that the ExG&Otsu and ExG&RC methods performed similarly and displayed the highest correlation with the ground truth, while the GLI&Otsu method consistently showed the lowest error relative to the ground truth.
Keywords [English]
EXTENDED ABSTRACT
Canopy cover fraction (CCF) is the fraction of the ground surface covered by the vertical projection of crop canopies. CCF is one of the most important criteria for assessing crop growth and yield and is an input to most crop models. Unlike measurement methods that rely on field observations or on image processing beyond the visible spectrum, CCF can be estimated conveniently from visible-spectrum imagery. CCF can be applied to monitoring plant growth conditions, identifying leaf diseases, tracking the status of essential nutrients, and detecting plant stress symptoms such as drought stress, nutrient deficiency, and weed pressure. Segmentation of digital images now plays an important role in agricultural image processing. Segmentation here mainly means discriminating leaf pixels (the green canopy as foreground) from background pixels. Different techniques have been employed for this purpose; one widely used approach combines canopy cover discrimination indices with thresholding algorithms. In this study, 18 such methods, formed from six indices and three thresholding algorithms, were applied across four dates and 30 regions within UAV images of a sugar beet field. Analyzing discrete regions enables a thorough examination of factors affecting canopy cover estimation, such as variations in light intensity and other influencing phenomena.
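A minimal sketch of this index-plus-threshold workflow is given below, assuming a Python environment with NumPy and scikit-image and a hypothetical image file name; it is not the authors' exact pipeline, but it illustrates how a greenness index (here ExG) combined with Otsu thresholding yields a CCF estimate.

# Minimal, hypothetical example (not the authors' pipeline): compute the ExG
# greenness index, binarize it with Otsu's threshold, and take the fraction of
# foreground pixels as the canopy cover fraction (CCF).
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

def excess_green(rgb):
    """ExG = 2g - r - b on per-pixel normalized chromatic coordinates."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9                 # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

image = io.imread("sugar_beet_plot.png")[..., :3]  # hypothetical file name
exg = excess_green(image)
mask = exg > threshold_otsu(exg)                   # True = vegetation pixel
ccf = mask.mean()                                  # canopy cover fraction in [0, 1]
print(f"Canopy cover fraction: {ccf:.3f}")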
A dataset of drone images of the sugar beet field, captured by the University of Bonn during the 2015-16 cropping season, was used. The images were acquired with a DJI MATRICE 100 drone at dimensions of 4000 x 2000 pixels over the field of the Lindau Plant Research Institute in Switzerland (47.45°N, 8.68°E). Canopy cover segmentation was performed using the ExG, ExGR, ExGB, GLI, VARI, and RGBVI indices combined with the Otsu, Ridler-Calvard, and Two-Peaks thresholding algorithms.
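For reference, commonly cited formulations of these six indices are sketched below; the exact definitions used in the study may differ, and the ExGB form (ExG minus an excess-blue term) is an assumption here.

# Commonly cited formulations of the six greenness indices; rgb is an H x W x 3
# array of visible-band values. The ExGB form below is an assumed definition,
# and the study's exact formulas may differ.
import numpy as np

def rgb_indices(rgb, eps=1e-9):
    rgb = rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + eps
    r, g, b = R / total, G / total, B / total      # chromatic coordinates
    exg = 2.0 * g - r - b                          # Excess Green
    return {
        "ExG": exg,
        "ExGR": exg - (1.4 * r - g),               # ExG minus Excess Red
        "ExGB": exg - (1.4 * b - g),               # ExG minus Excess Blue (assumed)
        "GLI": (2.0 * G - R - B) / (2.0 * G + R + B + eps),
        "VARI": (G - R) / (G + R - B + eps),
        "RGBVI": (G**2 - R * B) / (G**2 + R * B + eps),
    }

For the thresholding step, Otsu's method and the Ridler-Calvard iterative selection method are available in common libraries (for example, threshold_otsu and threshold_isodata in scikit-image), while the Two-Peaks approach typically places the threshold in the valley between the two dominant histogram peaks.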
Different segmentation methods were assessed for accuracy by comparing them with ground truth images produced using Envi 5.6 software. Initially, all 18 methods were evaluated across all growth stages and in 30 regions of the image using the NRMSE and R2 statistics. The top six methods (ExG&Otsu, ExG&Ridler-Calvard, GLI&Otsu, GLI&Ridler-Calvard, RGBVI&Otsu, and RGBVI&Ridler-Calvard) were then selected for detailed analysis on each of the four dates. The study revealed that the choice of index has a greater impact on accuracy than the choice of thresholding algorithm. This is due to the limitations of some indices under very high light intensity (such as specular reflection) or very low light intensity (such as shadows). Among the indices, ExG (NRMSE=5.13, R2=0.96), GLI (NRMSE=6.74, R2=0.92), and RGBVI (NRMSE=8.15, R2=0.87) performed better than ExGR (NRMSE=16.89, R2=0.76), ExGB (NRMSE=10.74, R2=0.77), and VARI (NRMSE=12.87, R2=0.89).
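A sketch of the two evaluation statistics, comparing estimated CCF against ground-truth CCF over the sampled regions, is given below; NRMSE is normalized here by the mean of the ground-truth values, which is an assumption, since the normalization used in the study is not stated in this abstract.

# Sketch of the evaluation statistics used to rank the segmentation methods.
# "estimated" and "observed" are arrays of CCF values for the sampled regions.
import numpy as np

def nrmse_percent(estimated, observed):
    """RMSE normalized by the mean observation (assumed normalization), in %."""
    estimated = np.asarray(estimated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((estimated - observed) ** 2))
    return 100.0 * rmse / observed.mean()

def r_squared(estimated, observed):
    """Coefficient of determination; the study may instead report squared Pearson r."""
    estimated = np.asarray(estimated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    ss_res = np.sum((observed - estimated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot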
Such an accurate, fast, and automated method for estimating CCF from digital images is potentially beneficial for many applications, including crop modelling. Unlike direct field methods, indirect methods such as segmentation-based image processing are non-destructive, save time and resources, and are less expensive. Selecting a suitable greenness discrimination index for segmentation is crucial, and both the strengths and the limitations of the chosen index should be carefully considered in future research.