The monumental-scale agricultural infrastructure systems built by Andean peoples during pre-Hispanic times have enabled intensive agriculture in the high-relief, arid/semi-arid landscape of the Southern Peruvian Andes. Large tracts of these labor-intensive systems have been abandoned, however, owing in large measure to a range of demographic, economic, and political crises precipitated by the Spanish invasion of the 16th century CE. This research seeks to better understand the dynamics of agricultural intensification and deintensification in the Andes by inventorying active and abandoned agricultural fields through semantic segmentation of satellite imagery across approximately 77,000 km² of the Southern Peruvian Highlands. While manual digitization of agricultural fields in satellite imagery is time-consuming and labor-intensive, deep learning-based semantic segmentation makes it possible to map and classify Andean agricultural infrastructure en masse. Using high-resolution satellite imagery, training and validation data were manually produced in distributed sample areas and used to transfer-train a convolutional neural network for semantic segmentation. The resulting dataset was compared to manual surveys of the region, and the results suggest that deep learning can generate larger and more accurate datasets than those produced by hand.
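As a rough illustration of the transfer-training step described above, the sketch below fine-tunes a U-Net with an ImageNet-pretrained encoder on a handful of labeled tiles. The paper does not name its architecture, framework, or class scheme, so the segmentation_models_pytorch library, the three classes (background, active field, abandoned field), and all hyperparameters here are assumptions for illustration only.

```python
# Minimal sketch of transfer-training a CNN for field segmentation; not the
# paper's actual pipeline. Classes and hyperparameters are illustrative.
import torch
import segmentation_models_pytorch as smp

# U-Net with an ImageNet-pretrained ResNet encoder (the "transfer" step).
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=3, classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# Stand-in for manually digitized tiles from the distributed sample areas.
images = torch.rand(4, 3, 256, 256)          # RGB satellite tiles
masks = torch.randint(0, 3, (4, 256, 256))   # per-pixel class labels

model.train()
for epoch in range(2):                       # real training would run much longer
    optimizer.zero_grad()
    logits = model(images)                   # (N, 3, H, W) class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```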
GRID: A Python Package for Field Plot Phenotyping Using Aerial Images
Aerial imagery has the potential to advance high-throughput phenotyping for agricultural field experiments. This potential is currently limited by the difficulty of identifying pixels of interest (POI) and performing plot segmentation, both of which require intensive manual operations. We developed a Python package, GRID (GReenfield Image Decoder), to overcome this limitation. With pixel-wise K-means cluster analysis, users can specify the number of clusters and choose the clusters representing POI. The plot grid pattern is automatically recognized from the POI distribution. The local optima of POI are initialized as the plot centers, which can also be manually modified for deletion, addition, or relocation. The segmentation of POI around the plot centers is initialized by automated, intelligent agents that define plot boundaries. Each plot's agent negotiates with neighboring agents based on plot size and POI distributions, and the negotiation can be refined by weighting plot size or POI density more heavily. All adjustments are operated in a graphical user interface with real-time previews of outcomes, so users can refine segmentation results based on their knowledge of the fields. The final results are saved in text and image files. The text files include plot rows and columns, plot size, and total plot POI. The image files include displays of clusters, POI, and segmented plots. With GRID, users are completely liberated from the labor-intensive task of manually drawing plot lines or polygons. The supervised automation with GRID is expected to enhance the efficiency of agricultural field experiments.
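A minimal sketch of the pixel-wise K-means step described above, assuming scikit-learn: cluster every pixel by color, then treat a chosen cluster as POI. This is not GRID's own code, and the automatic "greenest cluster" selection below stands in for the interactive cluster choice GRID offers.

```python
# Sketch of pixel-wise K-means to derive a POI mask from an aerial tile.
# Library choices and the green-based cluster pick are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rgb = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # stand-in aerial tile
pixels = rgb.reshape(-1, 3).astype(float)

k = 3                                   # user-specified number of clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
labels = labels.reshape(rgb.shape[:2])

# Pick the cluster whose mean color is most "green" as the POI cluster; in GRID
# the user selects POI clusters interactively instead.
means = np.array([rgb.reshape(-1, 3)[labels.ravel() == i].mean(axis=0)
                  for i in range(k)])
poi_cluster = int(np.argmax(means[:, 1] - means[:, [0, 2]].mean(axis=1)))
poi_mask = labels == poi_cluster        # boolean POI map used for plot detection

print("POI pixels:", int(poi_mask.sum()))
```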
- Award ID(s): 1661348
- PAR ID: 10176190
- Date Published:
- Journal Name: Remote Sensing
- Volume: 12
- Issue: 11
- ISSN: 2072-4292
- Page Range / eLocation ID: 1697
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Introduction: Computer vision and deep learning (DL) techniques have succeeded in a wide range of diverse fields. Recently, these techniques have been successfully deployed in plant science applications to address food security, productivity, and environmental sustainability problems for a growing global population. However, training these DL models often necessitates large-scale manual annotation of data, which frequently becomes a tedious and time- and resource-intensive process. Recent advances in self-supervised learning (SSL) methods have proven instrumental in overcoming these obstacles, using purely unlabeled datasets to pre-train DL models. Methods: Here, we implement the popular self-supervised contrastive learning methods NNCLR (Nearest neighbor Contrastive Learning of visual Representations) and SimCLR (Simple framework for Contrastive Learning of visual Representations) for the classification of spatial orientation and segmentation of embryos of maize kernels. Maize kernels are imaged using a commercial high-throughput imaging system. This image data is often used in multiple downstream applications across both production and breeding, for instance, sorting for oil content by segmenting and quantifying the scutellum's size, and classifying haploid and diploid kernels. Results and discussion: We show that in both classification and segmentation problems, SSL techniques outperform their purely supervised transfer learning-based counterparts and are significantly more annotation efficient. Additionally, we show that a single SSL pre-trained model can be efficiently finetuned for both classification and segmentation, indicating good transferability across multiple downstream applications. Segmentation models with SSL-pretrained backbones produce Dice similarity coefficients of 0.81, higher than the 0.78 and 0.73 of those with ImageNet-pretrained and randomly initialized backbones, respectively. We observe that finetuning classification and segmentation models on as little as 1% of the annotations produces competitive results. These results show that SSL provides a meaningful step forward in data efficiency for agricultural deep learning and computer vision.
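The contrastive pre-training idea can be sketched roughly as follows with a SimCLR-style NT-Xent loss in PyTorch. The encoder, augmentations, batch size, and temperature are illustrative assumptions; the paper's actual models, kernel images, and training setup are not reproduced here.

```python
# Sketch of SimCLR-style contrastive pre-training on unlabeled images.
# All components (ResNet-18 encoder, augmentations, temperature) are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(96),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
])

encoder = models.resnet18(num_classes=128)     # 128-d projection output

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired embeddings (SimCLR-style)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))  # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

images = torch.rand(8, 3, 96, 96)              # stand-in unlabeled kernel crops
view1, view2 = augment(images), augment(images)  # two random views of the batch
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()                                # one unsupervised pre-training step
print(f"contrastive loss: {loss.item():.3f}")
```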
This paper presents a tool-pose-informed variable-center morphological polar transform to enhance segmentation of endoscopic images. The representation, while not lossless, transforms rigid tool shapes into morphologies that are consistently more rectangular and may be more amenable to image segmentation networks. The proposed method was evaluated using the U-Net convolutional neural network, with the input endoscopic images represented in one of four coordinate formats: (1) the original rectangular image representation, (2) the morphological polar coordinate transform, (3) the proposed variable-center transform about the tool-tip pixel, and (4) the proposed variable-center transform about the tool vanishing-point pixel. Previous work relied on the observations that endoscopic images typically exhibit unused border regions with content in the shape of a circle (since the image sensor is designed to be larger than the image circle to maximize available visual information in the constrained environment) and that the region of interest (ROI) was ideally located near the endoscopic image center. That work sought an intelligent method for carefully selecting, for a given input image, between methods (1) and (2) for the best image segmentation prediction. In this extension, the image-center reference constraint for the polar transformation in method (2) is relaxed via the development of a variable-center morphological transformation. Transform-center selection leads to different spatial distributions of image loss, and the transform-center location can be informed by the robot kinematic model and endoscopic image data. In particular, this work examines using the tool tip and the tool vanishing point on the image plane as candidate centers. The experiments were conducted for each of the four image representations using a dataset of 8360 endoscopic images from real sinus surgery. Segmentation performance was evaluated with standard metrics, and some insight into the effects of loss and tool location on performance is provided. Overall, the results are promising, showing that selecting a transform center based on tool shape features using the proposed method can improve segmentation performance.
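For intuition, the sketch below unwraps an image into polar coordinates about an arbitrary center (for example, a tool-tip pixel) using OpenCV's warpPolar. It only illustrates the re-centering idea; the paper's variable-center morphological transform and its treatment of image loss are not reproduced, and the tool-tip coordinates are hypothetical.

```python
# Sketch of a polar transform about an arbitrary center using OpenCV.
# The stand-in frame and the tool-tip location are illustrative assumptions.
import numpy as np
import cv2

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in endoscopic frame
h, w = frame.shape[:2]

def polar_about(img, center):
    """Unwrap the image into (radius, angle) coordinates about `center`."""
    # Radius large enough to cover the whole frame from the chosen center.
    max_radius = max(np.hypot(cx, cy)
                     for cx in (center[0], w - center[0])
                     for cy in (center[1], h - center[1]))
    return cv2.warpPolar(img, (w, h), center, max_radius, cv2.WARP_POLAR_LINEAR)

image_center = (w / 2, h / 2)    # method (2): classic image-center transform
tool_tip = (400.0, 310.0)        # method (3): hypothetical tool-tip pixel
polar_centered = polar_about(frame, image_center)
polar_tool_tip = polar_about(frame, tool_tip)
print(polar_centered.shape, polar_tool_tip.shape)
```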
A connectivity graph of neurons at the resolution of single synapses provides scientists with a tool for understanding the nervous system in health and disease. Recent advances in automatic image segmentation and synapse prediction in electron microscopy (EM) datasets of the brain have made reconstructions of neurons possible at the nanometer scale. However, automatic segmentation sometimes struggles to segment large neurons correctly, requiring human effort to proofread its output. General proofreading involves inspecting large volumes to correct segmentation errors at the pixel level, a visually intensive and time-consuming process. This paper presents the design and implementation of an analytics framework that streamlines proofreading, focusing on connectivity-related errors. We accomplish this with automated likely-error detection and synapse clustering that drives the proofreading effort with highly interactive 3D visualizations. In particular, our strategy centers on proofreading the local circuit of a single cell to ensure a basic level of completeness. We demonstrate our framework’s utility with a user study and report quantitative and subjective feedback from our users. Overall, users find the framework more efficient for proofreading, understanding evolving graphs, and sharing error correction strategies.
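As a loose sketch of one ingredient mentioned above, synapse clustering, the snippet below groups stand-in synapse coordinates attached to a single cell with DBSCAN so that potential connectivity errors could be reviewed cluster by cluster. DBSCAN, its parameters, and the coordinates are assumptions; the framework's actual detection and clustering pipeline is not shown.

```python
# Sketch of clustering synapse locations on one cell for grouped review.
# Algorithm choice, parameters, and coordinates are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

# Stand-in 3D synapse coordinates (nanometers) for one reconstructed cell.
synapses = np.random.rand(200, 3) * 10_000

clusters = DBSCAN(eps=800, min_samples=5).fit_predict(synapses)
for label in sorted(set(clusters)):
    name = "unclustered" if label == -1 else f"cluster {label}"
    print(name, int((clusters == label).sum()), "synapses")
```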
Current scientific experiments frequently involve the control of specialized instruments (e.g., scanning electron microscopes), image data collection from those instruments, and transfer of the data for processing at simulation centers. This process requires a “human-in-the-loop” to perform those tasks manually, which, besides requiring substantial effort and time, can lead to inconsistencies or errors. Thus, it is essential to have an automated system capable of performing remote instrumentation to intelligently control and collect data from the scientific instruments. In this paper, we propose a Remote Instrumentation Science Environment (RISE) for intelligent image analytics that provides the infrastructure to securely capture images, determine process parameters via machine learning, and provide experimental control actions via automation, under the premise of “human-on-the-loop”. The machine learning in RISE aids an iterative discovery process that assists researchers in tuning instrument settings to improve the outcomes of experiments. Driven by two scientific use cases of image analytics pipelines, one in material science and another in biomedical science, we show how RISE automation leverages a cutting-edge integration of cloud computing, an on-premise HPC cluster, and a Python programming interface available on a microscope. Using web services, we implement RISE to perform automated image data collection and analysis, guided by an intelligent agent that provides real-time feedback control of the microscope using the image analytics outputs. Our evaluation results show the benefits of RISE: researchers obtain higher image analytics accuracy and save precious time otherwise spent manually controlling the microscopes, while errors in operating the instruments are reduced.
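A highly simplified sketch of the “human-on-the-loop” feedback idea described above: capture an image, run analytics, adjust an instrument setting, and flag poor results for the researcher. Every function here (acquire_image, analyze, set_focus) is a hypothetical placeholder; RISE's real microscope interface, cloud/HPC services, and machine learning models are not shown.

```python
# Sketch of a human-on-the-loop feedback cycle with placeholder functions only.
import random

def acquire_image(focus):
    """Placeholder for capturing a micrograph at the current focus setting."""
    return {"focus": focus,
            "sharpness": 1.0 - abs(focus - 0.6) + random.uniform(-0.05, 0.05)}

def analyze(image):
    """Placeholder for the ML-based image analytics step."""
    return image["sharpness"]

def set_focus(current, score):
    """Placeholder control action: nudge focus while the analytics score is low."""
    return current + 0.05 if score < 0.9 else current

focus = 0.3
for step in range(10):
    score = analyze(acquire_image(focus))
    if score < 0.5:
        print(f"step {step}: low score {score:.2f}, flag for researcher review")
    focus = set_focus(focus, score)
    print(f"step {step}: score {score:.2f}, next focus {focus:.2f}")
```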