Title: Crop Field Boundary Delineation using Historical Crop Rotation Pattern
A GIS data layer of crop field boundaries has many applications in agricultural research, ecosystem studies, crop monitoring, and land management. Mapping crop field boundaries through field surveys is neither time- nor cost-effective for vast agricultural areas, and on-screen digitization of fine-resolution satellite imagery is likewise labor-intensive and error-prone. Recent developments in image segmentation based on spectral characteristics are promising for cropland boundary detection, but processing large volumes of multi-band satellite imagery often requires high-performance computing systems. This study instead uses crop rotation information to delineate field boundaries. Crop field boundaries for Iowa in the United States are extracted from multi-year (2007-2018) Cropland Data Layer (CDL) data. The process is simple compared with boundary extraction from multi-date remote sensing data. Although it is unable to distinguish some adjacent fields, the overall accuracy is promising, and applying more advanced geoprocessing algorithms and tools for polygon correction may improve the results significantly. The extracted field boundaries are validated by superimposing them on fine-resolution Google Earth imagery. The results show that crop field boundaries can be extracted easily and with reasonable accuracy from crop rotation information.
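To make the rotation-based idea concrete, the following is a minimal sketch, not the paper's exact workflow, of grouping pixels by their multi-year CDL crop sequence and polygonizing each group as a candidate field; the file names, year range, and use of rasterio/NumPy are assumptions for illustration.

```python
# Minimal sketch (not the authors' exact workflow): group pixels that share the
# same multi-year CDL crop sequence and polygonize each group as a candidate field.
# File names and the year range are illustrative assumptions.
import numpy as np
import rasterio
from rasterio import features

years = range(2007, 2019)
paths = [f"CDL_IA_{y}.tif" for y in years]  # hypothetical co-registered CDL rasters

with rasterio.open(paths[0]) as src:
    transform, shape = src.transform, (src.height, src.width)

stack = np.stack([rasterio.open(p).read(1) for p in paths])  # (n_years, rows, cols)

# Encode each pixel's rotation sequence as a single integer label so that
# pixels with identical crop histories receive identical labels.
_, rotation_id = np.unique(stack.reshape(len(paths), -1).T, axis=0, return_inverse=True)
rotation_id = rotation_id.reshape(shape).astype(np.int32)

# Vectorize connected regions of constant rotation label into candidate field polygons.
polygons = [
    {"geometry": geom, "rotation_id": int(val)}
    for geom, val in features.shapes(rotation_id, transform=transform)
]
```

Adjacent fields that happen to share an identical rotation history collapse into one polygon in this sketch, which mirrors the limitation noted in the abstract; the polygon correction step mentioned there would follow this stage.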
Award ID(s):
1739705
PAR ID:
10193695
Journal Name:
2019 8th International Conference on Agro-Geoinformatics (Agro-Geoinformatics)
Page Range / eLocation ID:
1 to 5
Sponsoring Org:
National Science Foundation
More Like this
1. Crop type information at the field level is vital for many types of research and applications. The United States Department of Agriculture (USDA) provides crop type information for US cropland as the Cropland Data Layer (CDL). However, the CDL is only available at the end of the year, after the crop growing season, so it cannot support in-season research and decision-making on crop loss estimation, yield estimation, and grain pricing. The USDA relies mostly on field surveys and farmers' reports for the ground truth used to train image classification models, which is one of the major reasons for the delayed release of the CDL. This research aims to use trusted pixels, pixels that follow a specific crop rotation pattern, as ground truth to train image classification models that classify in-season Landsat images and identify major crop types. Six classification algorithms are investigated and tested to select the best algorithm for this study, and the Random Forest algorithm stands out among them. This study classified Landsat scenes between May and mid-August for Iowa. The overall agreement of the classification results with the 2017 CDL is 84%, 94%, and 96% for May, June, and July, respectively. The classification accuracies were assessed against 683 ground truth data points collected in the field. The overall accuracies of single-date multi-band image classification are 84%, 89%, and 92% for May, June, and July, respectively. The results also show that higher accuracy (94–95%) can be achieved through multi-date image classification than through single-date classification.
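As a rough illustration of the trusted-pixel idea, the sketch below treats pixels that followed a strict corn–soybean alternation in past CDLs as trusted training labels for the current season and fits a Random Forest to in-season Landsat bands; the array names, CDL class codes, and rotation rule are illustrative assumptions, not the paper's exact criteria.

```python
# Minimal sketch, not the paper's exact pipeline: pixels with a strict corn–soybean
# alternation in past CDLs supply "trusted" labels for the current season, which are
# then used to train a Random Forest on in-season Landsat bands.
# Array names, class codes (1 = corn, 5 = soybeans in CDL), and shapes are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

cdl_history = np.load("cdl_history.npy")   # (n_years, rows, cols), past CDL labels
landsat = np.load("landsat_june.npy")      # (n_bands, rows, cols), in-season image

corn, soy = 1, 5
alternating = np.all(
    (cdl_history[:-1] != cdl_history[1:]) &
    np.isin(cdl_history[:-1], [corn, soy]) &
    np.isin(cdl_history[1:], [corn, soy]),
    axis=0,
)
# If last year was soybeans, the trusted in-season label is corn, and vice versa.
expected = np.where(cdl_history[-1] == soy, corn, soy)

X = landsat.reshape(landsat.shape[0], -1).T   # one row of band values per pixel
y = expected.ravel()
mask = alternating.ravel()

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
clf.fit(X[mask], y[mask])
in_season_map = clf.predict(X).reshape(landsat.shape[1:])
```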
2. Mapping crop types and land cover in smallholder farming systems in sub-Saharan Africa remains a challenge because of data costs, high cloud cover, and the poor temporal resolution of satellite data. With improvements in satellite technology and image-processing techniques, there is potential to integrate data from sensors with different spectral characteristics and temporal resolutions to map crop types and land cover effectively. In our Malawi study area, it is common for no cloud-free images to be available for the entire crop growing season. The goal of this experiment is to produce detailed crop type and land cover maps of agricultural landscapes using Sentinel-1 (S-1) radar data, Sentinel-2 (S-2) optical data, fused S-2 and PlanetScope data, and the S-1 C2 covariance matrix and H/α polarimetric decomposition. We evaluated the ability to combine these data to map crop types and land cover at two smallholder farming locations. The random forest algorithm, trained with crop and land cover type data collected in the field and complemented with samples digitized from Google Earth Pro and DigitalGlobe, was used for the classification experiments. The results show that the fusion of the S-2 and PlanetScope fused image, the S-1 covariance (C2) matrix, and the H/α polarimetric decomposition (an entropy-based decomposition method) outperformed all other image combinations, producing higher overall accuracies (OAs) (>85%) and Kappa coefficients (>0.80). These OAs represent a 13.53% and 11.7% improvement over the Sentinel-2-only experiment (OAs < 80%) for Thimalala and Edundu, respectively. The experiment also provided accurate insights into the distribution of crop and land cover types in the area. The findings suggest that in cloud-dense and resource-poor locations, fusing high-temporal-resolution radar data with available optical data presents an opportunity for operational mapping of crop types and land cover to support food security and environmental management decision-making.
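The multi-sensor input described above can be pictured as a simple per-pixel feature stack; the short sketch below, with hypothetical file names and band counts, just concatenates the fused optical bands, the S-1 C2 covariance terms, and the H/α decomposition channels before they are handed to a classifier such as the random forest mentioned in the abstract.

```python
# Illustrative sketch only: stack co-registered per-pixel features from the fused
# optical image, the S-1 C2 covariance terms, and the H/alpha decomposition into a
# single feature cube for classification. All file names are hypothetical.
import numpy as np

fused_optical = np.load("s2_planetscope_fused.npy")  # (b_opt, rows, cols)
c2_terms = np.load("s1_c2_matrix.npy")               # (b_c2, rows, cols): C11, C22, |C12|, ...
h_alpha = np.load("s1_h_alpha.npy")                  # (2, rows, cols): entropy, alpha angle

feature_cube = np.concatenate([fused_optical, c2_terms, h_alpha], axis=0)
X = feature_cube.reshape(feature_cube.shape[0], -1).T  # (n_pixels, n_features) for the classifier
```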
3. Pixel-level fusion of satellite images from multiple sensors improves the quality of the acquired data both spatially and spectrally. In particular, multispectral and hyperspectral images have been fused to generate images with high spatial and spectral resolution. Several approaches to this task exist in the literature; nonetheless, those techniques still lose relevant spatial information during the fusion process. This work presents a multi-scale deep learning model to fuse multispectral data with high-spatial-and-low-spectral resolution (HSaLS) and hyperspectral data with low-spatial-and-high-spectral resolution (LSaHS), producing a high-spatial-and-spectral-resolution image (HSaHS) as the result of the fusion scheme. To accomplish this, we developed a new scalable high-spatial-resolution process in which the model learns how to transition from low spatial resolution to an intermediate spatial resolution level and finally to the high-spatial-and-spectral-resolution image. This step-by-step process significantly reduces the loss of spatial information. The results of our approach show better performance in terms of both the structural similarity index and the signal-to-noise ratio.
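The step-by-step transition from low to intermediate to high spatial resolution can be pictured as a small progressive fusion module; the PyTorch example below is only a schematic of that idea, not the authors' architecture, and the band counts and 4x scale factor are illustrative assumptions.

```python
# A minimal PyTorch sketch of the progressive fusion idea (not the authors' model):
# the hyperspectral input is upsampled in two stages, and a multispectral guide at the
# matching scale is injected at each stage. Band counts and the 4x factor are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveFusion(nn.Module):
    def __init__(self, hs_bands=100, ms_bands=4):
        super().__init__()
        self.stage1 = nn.Conv2d(hs_bands + ms_bands, hs_bands, kernel_size=3, padding=1)
        self.stage2 = nn.Conv2d(hs_bands + ms_bands, hs_bands, kernel_size=3, padding=1)

    def forward(self, hs_low, ms_high):
        # Low -> intermediate spatial resolution (x2), guided by a downsampled MS image.
        x = F.interpolate(hs_low, scale_factor=2, mode="bilinear", align_corners=False)
        ms_mid = F.interpolate(ms_high, size=x.shape[-2:], mode="bilinear", align_corners=False)
        x = F.relu(self.stage1(torch.cat([x, ms_mid], dim=1)))
        # Intermediate -> full resolution (x2 again), guided by the full-resolution MS image.
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        x = F.relu(self.stage2(torch.cat([x, ms_high], dim=1)))
        return x  # high-spatial, high-spectral estimate

hs = torch.randn(1, 100, 16, 16)     # low-spatial hyperspectral patch
ms = torch.randn(1, 4, 64, 64)       # high-spatial multispectral patch
fused = ProgressiveFusion()(hs, ms)  # -> (1, 100, 64, 64)
```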
4. Messinger, David W.; Velez-Reyes, Miguel (Eds.)
Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images with improved spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a high-spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion in multiple steps that transition from low to high spatial resolution, using an intermediate step that reduces spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images from four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy using several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and a CNN classifier. The classification results show better performance, in both overall and average accuracy, for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
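For reference, the two headline metrics compared here, overall accuracy and average (per-class) accuracy, can be computed directly from a confusion matrix as in the short sketch below; the example matrices are placeholders, not results from the paper.

```python
# Sketch of the two accuracy measures used to compare the fusion models; the
# confusion matrices themselves are placeholders, not results from the paper.
import numpy as np

def overall_and_average_accuracy(cm):
    """cm[i, j] = pixels of true class i predicted as class j."""
    overall = np.trace(cm) / cm.sum()          # fraction of all pixels correctly classified
    per_class = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy for each class
    return overall, per_class.mean()

cm_lstm = np.array([[95, 5], [8, 92]])
cm_cnn = np.array([[90, 10], [15, 85]])
print(overall_and_average_accuracy(cm_lstm), overall_and_average_accuracy(cm_cnn))
```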
5. Existing image-to-image transformation approaches primarily focus on synthesizing visually pleasing data. Generating images with correct identity labels is challenging yet much less explored. It is even more challenging to handle transformation tasks with large deformations in pose, viewpoint, or scale while preserving identity, such as face rotation and object viewpoint morphing. In this paper, we aim to transform an image of a fine-grained category to synthesize new images that preserve the identity of the input image, which can thereby benefit subsequent fine-grained image recognition and few-shot learning tasks. The generated images, transformed with large geometric deformation, do not necessarily need to be of high visual quality but are required to maintain as much identity information as possible. To this end, we adopt a model based on generative adversarial networks to disentangle the identity-related and identity-unrelated factors of an image. To preserve the fine-grained contextual details of the input image during the deformable transformation, a constrained nonalignment connection method is proposed to construct learnable highways between intermediate convolution blocks in the generator. Moreover, an adaptive identity modulation mechanism is proposed to transfer the identity information into the output image effectively. Extensive experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than state-of-the-art image-to-image transformation models and, as a result, significantly boosts visual recognition performance in fine-grained few-shot learning.
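The disentanglement idea, keeping an identity code from the source image while taking the identity-unrelated (pose or viewpoint) code from elsewhere, can be sketched schematically as below; this is not the paper's model and omits the constrained nonalignment connections and adaptive identity modulation it proposes, and all layer sizes are assumptions.

```python
# Schematic PyTorch sketch of identity/attribute disentanglement only; it is not the
# paper's architecture. All layer sizes and the toy generator are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)

identity_enc = Encoder(128)  # identity-related factors (to preserve)
pose_enc = Encoder(32)       # identity-unrelated factors (to change)
generator = nn.Sequential(nn.Linear(128 + 32, 3 * 64 * 64), nn.Tanh())

src, ref = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
z = torch.cat([identity_enc(src), pose_enc(ref)], dim=1)
out = generator(z).view(1, 3, 64, 64)  # source identity rendered with the reference pose
```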