Graph convolutional neural network architectures combine feature extraction and convolutional layers for hyperspectral image classification. An adaptive neighborhood aggregation method based on statistical variance, which integrates spatial information with the spectral signature of each pixel, is proposed to improve graph convolutional network classification of hyperspectral images. The spatial-spectral information is encoded in the adjacency matrix and processed by a single-layer graph convolutional network. The algorithm employs an adaptive neighborhood selection criterion conditioned on the class of each pixel. Compared to fixed window-based feature extraction, this method proves effective in capturing spectral and spatial features with variable pixel neighborhood sizes. Experimental results on the Indian Pines, Houston University, and Botswana Hyperion hyperspectral image datasets show that the proposed adaptive neighborhood GCN (AN-GCN) can significantly improve classification accuracy. For example, the overall accuracy for the Houston University data increases from 81.71% (MiniGCN) to 97.88% (AN-GCN). Furthermore, the AN-GCN can classify hyperspectral images of rice seeds exposed to high day and night temperatures, demonstrating its efficacy in discriminating seeds under increased ambient temperature treatments.
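To make the adaptive-neighborhood idea above concrete, here is a minimal NumPy sketch, not the authors' implementation: a square window around each pixel is grown until the local spectral variance exceeds a threshold, Gaussian spectral-similarity weights between a pixel and its adaptive neighbors fill the adjacency matrix, and a single symmetrically normalized GCN layer processes the node features. The variance threshold, the similarity kernel, and all function names are illustrative assumptions.

```python
import numpy as np

def adaptive_neighbors(cube, row, col, max_radius=5, var_threshold=0.05):
    """Grow a square window around (row, col) until the mean per-band spectral
    variance inside it exceeds a threshold (illustrative stopping criterion)."""
    H, W, B = cube.shape
    r0 = r1 = c0 = c1 = 0
    for radius in range(1, max_radius + 1):
        r0, r1 = max(0, row - radius), min(H, row + radius + 1)
        c0, c1 = max(0, col - radius), min(W, col + radius + 1)
        window = cube[r0:r1, c0:c1, :].reshape(-1, B)
        if window.var(axis=0).mean() > var_threshold:
            break
    return [(r, c) for r in range(r0, r1) for c in range(c0, c1)]

def build_adjacency(cube, pixels, sigma=1.0):
    """Gaussian spectral-similarity weights between each pixel and its adaptive
    neighbors; the result is the spatial-spectral adjacency matrix."""
    index = {p: i for i, p in enumerate(pixels)}
    A = np.zeros((len(pixels), len(pixels)))
    for (r, c), i in index.items():
        for nb in adaptive_neighbors(cube, r, c):
            j = index.get(nb)
            if j is not None and j != i:
                d = np.linalg.norm(cube[r, c] - cube[nb])
                A[i, j] = A[j, i] = np.exp(-d**2 / (2 * sigma**2))
    return A

def gcn_layer(A, X, W):
    """Single GCN layer: add self-loops, symmetrically normalize, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Example: embed the pixels of a tiny dummy cube with random layer weights.
rng = np.random.default_rng(0)
cube = rng.random((10, 10, 30))                 # 10x10 image, 30 bands
pixels = [(r, c) for r in range(10) for c in range(10)]
A = build_adjacency(cube, pixels)
X = cube.reshape(-1, 30)                        # node features = pixel spectra
embeddings = gcn_layer(A, X, rng.random((30, 16)))   # (100, 16)
```

A class-conditioned criterion, as described in the abstract, would replace the fixed threshold with per-class statistics; the sketch only shows the mechanics of growing the window and normalizing the graph.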
A Deep Learning Framework for Processing and Classification of Hyperspectral Rice Seed Images Grown under High Day and Night Temperatures
A framework combining two powerful tools, hyperspectral imaging and deep learning, for the processing and classification of hyperspectral images (HSI) of rice seeds is presented. A seed-based approach is developed that trains a three-dimensional convolutional neural network (3D-CNN) on the full spectral hypercube of each seed to classify seed images from high day and high night temperature treatments, each including a control group. A pixel-based seed classification approach is implemented using a deep neural network (DNN). The seed-based and pixel-based deep learning architectures are validated and tested using hyperspectral images from five different rice seed treatments with six different high temperature exposure durations during the day, the night, and both day and night. A stand-alone application with a graphical user interface (GUI) for calibrating, preprocessing, and classifying hyperspectral rice seed images is presented. The software application can be used to train the two deep learning architectures for the classification of any type of hyperspectral seed images. Average overall classification accuracies of 91.33% and 89.50% are obtained for seed-based classification using the 3D-CNN for the five treatments at each exposure duration and for the six high temperature exposure durations of each treatment, respectively. The DNN gives average accuracies of 94.83% and 91% for the five treatments at each exposure duration and the six high temperature exposure durations of each treatment, respectively. These accuracies are higher than those reported in the literature for hyperspectral rice seed image classification. The HSI analysis presented here uses the Kitaake cultivar and can be extended to study the temperature tolerance of other rice cultivars.
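As a rough illustration of the two architectures summarized above, the PyTorch sketch below pairs a small seed-based 3D-CNN that consumes a whole spectral hypercube with a pixel-based fully connected DNN. The band count, spatial size, layer widths, and class count are assumptions for the example, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class Seed3DCNN(nn.Module):
    """Seed-based classifier: 3-D convolutions over the full spectral hypercube.
    Expects input of shape (batch, 1, bands, height, width)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> fixed-size descriptor
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class PixelDNN(nn.Module):
    """Pixel-based classifier: fully connected layers over one pixel's spectrum."""
    def __init__(self, n_bands=256, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Dummy forward passes with assumed sizes: 4 seeds / 32 pixels, 256 bands.
seed_logits = Seed3DCNN()(torch.randn(4, 1, 256, 64, 64))    # (4, 5)
pixel_logits = PixelDNN()(torch.randn(32, 256))               # (32, 5)
```

The global average pooling layer lets the 3D-CNN accept hypercubes of slightly different spatial sizes, which is convenient when individual seeds are segmented from a larger scene.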
- Award ID(s): 1736192
- PAR ID: 10507917
- Publisher / Repository: Sensors
- Date Published:
- Journal Name: Sensors
- Volume: 23
- Issue: 9
- ISSN: 1424-8220
- Page Range / eLocation ID: 4370
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Agaian, Sos S.; Jassim, Sabah A.; DelMarco, Stephen P.; Asari, Vijayan K. (Ed.) Neural networks have emerged as the most appropriate method for tackling the classification problem for hyperspectral images (HSI). Convolutional neural networks (CNNs), the current state of the art for various classification tasks, have some limitations in the context of HSI. These CNN models are very susceptible to overfitting because of 1) the limited availability of training samples and 2) the large number of parameters to fine-tune. Furthermore, the learning rates used by CNNs must be small to avoid vanishing gradients, so gradient descent takes small steps to converge and model training slows down. To overcome these drawbacks, a novel quaternion-based hyperspectral image classification network (QHIC Net) is proposed in this paper. The QHIC Net can model both the local dependencies between the spectral channels of a single pixel and the global structural relationships describing the edges or shapes formed by a group of pixels, making it suitable for HSI datasets that are small and diverse. Experimental results on three HSI datasets demonstrate that the QHIC Net performs on par with traditional CNN-based methods for HSI classification with far fewer parameters (a generic quaternion-layer sketch follows this list). Keywords: classification, deep learning, hyperspectral imaging, spectral-spatial feature learning
-
Messinger, David W.; Velez-Reyes, Miguel (Ed.) Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, thus allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometric shapes, like houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training sets, or objects with similar reflectance values pose a greater challenge to obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and Class-dependent Sparse Representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the class probabilities of each pixel from every implemented classifier and, using a softmax activation function, estimates the final decision (see the fusion sketch after this list). We present results showing the performance of our method on different hyperspectral image datasets.
-
Hyperspectral cameras collect detailed spectral information at each image pixel, contributing to the identification of image features. The rich spectral content of hyperspectral imagery has led to its application in diverse fields of study. This study focused on cloud classification using a dataset of hyperspectral sky images captured by a Resonon PIKA XC2 camera. The camera records images using 462 spectral bands, ranging from 400 to 1000 nm, with a spectral resolution of 1.9 nm. Our preliminary, unlabeled dataset comprised 33 parent hyperspectral images (HSI), each a substantial image measuring 4402-by-1600 pixels. With the meteorological expertise within our team, we manually labeled pixels by extracting 10 to 20 sample patches from each parent image, each patch consisting of a 50-by-50 pixel field. This process yielded a collection of 444 patches, each categorically labeled into one of seven cloud and sky condition categories. To embed the inherent data structure while classifying individual pixels, we introduced an innovative technique to boost classification accuracy by incorporating patch-specific information into each pixel's feature vector. The posterior probabilities generated by the patch-level classifiers, which capture the unique attributes of each patch, were concatenated with the pixel's original spectral data to form an augmented feature vector (see the augmentation sketch after this list). We then applied a final classifier to map the augmented vectors to the seven cloud/sky categories. The results compared favorably to a baseline model devoid of patch-origin embedding, showing that incorporating the spatial context along with the spectral information inherent in hyperspectral images enhances the accuracy of hyperspectral cloud classification. The dataset is available on IEEE DataPort.
-
Colorectal cancer is one of the top contributors to cancer-related deaths in the United States, with over 100,000 estimated cases in 2020 and over 50,000 deaths. The most common screening technique is minimally invasive colonoscopy using either reflected white light endoscopy or narrow-band imaging. However, current imaging modalities have only moderate sensitivity and specificity for lesion detection. We have developed a novel fluorescence excitation-scanning hyperspectral imaging (HSI) approach to sample image and spectroscopic data simultaneously on microscope and endoscope platforms for enhanced diagnostic potential. Unfortunately, fluorescence excitation-scanning HSI datasets pose major challenges for data processing, interpretability, and classification due to their high dimensionality. Here, we present an end-to-end scalable Artificial Intelligence (AI) framework built for classification of excitation-scanning HSI microscopy data that provides accurate image classification and interpretability of the AI decision-making process. The developed AI framework is able to perform real-time HSI classification with different speed/classification performance trade-offs by tailoring the dimensionality of the dataset, supporting different dimensions of deep learning models, and varying the architecture of deep learning models. We have also incorporated tools to visualize the exact location of the lesion detected by the AI decision-making process and to provide heatmap-based pixel-by-pixel interpretability. In addition, our deep learning framework provides wavelength-dependent impact as a heatmap, which allows visualization of the contributions of HSI wavelength bands during the AI decision-making process (a band-saliency sketch follows this list). This framework is well-suited for HSI microscope and endoscope platforms, where real-time analysis and visualization of classification results are required by clinicians.
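For the quaternion-based QHIC Net entry above, the abstract does not give the architecture; the NumPy sketch below only illustrates the generic quaternion building block such networks rely on: a linear layer evaluated with the Hamilton product, so four real weight matrices are shared across the quaternion components and the layer needs roughly a quarter of the parameters of a real-valued layer of the same width. Shapes and names are assumptions.

```python
import numpy as np

def quaternion_linear(x, W):
    """Quaternion linear layer via the Hamilton product.
    x: (n, 4*f) inputs, components ordered [real, i, j, k] per feature block.
    W: four (f, g) real matrices reused across components, so the layer has
       roughly 1/4 of the parameters of a real layer of the same width."""
    r, i, j, k = np.split(x, 4, axis=1)
    Wr, Wi, Wj, Wk = W["r"], W["i"], W["j"], W["k"]
    out_r = r @ Wr - i @ Wi - j @ Wj - k @ Wk
    out_i = r @ Wi + i @ Wr + j @ Wk - k @ Wj
    out_j = r @ Wj - i @ Wk + j @ Wr + k @ Wi
    out_k = r @ Wk + i @ Wj - j @ Wi + k @ Wr
    return np.concatenate([out_r, out_i, out_j, out_k], axis=1)

rng = np.random.default_rng(0)
f_in, f_out = 8, 4
W = {c: rng.standard_normal((f_in, f_out)) * 0.1 for c in "rijk"}
x = rng.standard_normal((5, 4 * f_in))        # 5 samples, 8 quaternion features
print(quaternion_linear(x, W).shape)          # -> (5, 16)
```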
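For the decision-fusion entry above, a hedged PyTorch sketch of the second step: per-pixel class probabilities from each base classifier are stacked along the channel dimension and a small CNN with a softmax output learns the fusion rule. The number of base classifiers, the class count, and the layer sizes are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Decision-fusion CNN: input is a channel-wise stack of per-classifier
    class-probability maps, output is a fused per-pixel class distribution."""
    def __init__(self, n_classifiers=3, n_classes=10):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(n_classifiers * n_classes, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),
        )

    def forward(self, prob_maps):
        return torch.softmax(self.fuse(prob_maps), dim=1)  # softmax over classes

# Dummy probability maps from three base classifiers (e.g. SVM, DNN, sparse
# representation), 10 classes, 64x64 pixels.
maps = torch.rand(2, 3 * 10, 64, 64)
fused = FusionCNN()(maps)            # (2, 10, 64, 64) fused probabilities
label_map = fused.argmax(dim=1)      # final per-pixel classification
```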
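For the cloud-classification entry above, the patch-origin embedding step might look like the scikit-learn sketch below: a patch-level classifier yields posterior probabilities that are concatenated onto each pixel's spectrum before a final per-pixel classifier is trained. Logistic regression and all variable names are stand-ins, since the abstract does not specify the classifiers used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_with_patch_posteriors(pixel_spectra, patch_ids,
                                  patch_features, patch_labels):
    """Concatenate each pixel's spectrum with the posterior class probabilities
    of its parent patch (illustrative version of patch-origin embedding).

    pixel_spectra  : (n_pixels, n_bands) spectra of individual pixels
    patch_ids      : (n_pixels,) index of the parent patch for every pixel
    patch_features : (n_patches, d) patch-level descriptors, e.g. mean spectra
    patch_labels   : (n_patches,) cloud/sky category of each labeled patch
    """
    patch_clf = LogisticRegression(max_iter=1000).fit(patch_features, patch_labels)
    posteriors = patch_clf.predict_proba(patch_features)      # (n_patches, n_cls)
    return np.hstack([pixel_spectra, posteriors[patch_ids]])  # augmented vectors

# Tiny synthetic demo (hypothetical sizes: 6 patches, 2 classes, 4 bands).
rng = np.random.default_rng(1)
patch_features = rng.random((6, 4))
patch_labels = np.array([0, 0, 0, 1, 1, 1])
pixel_spectra = rng.random((20, 4))
patch_ids = rng.integers(0, 6, size=20)
augmented = augment_with_patch_posteriors(pixel_spectra, patch_ids,
                                          patch_features, patch_labels)
# A final per-pixel classifier is then trained on `augmented`.
```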
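For the excitation-scanning HSI entry above, one standard way to obtain a wavelength-dependent impact map is gradient saliency with respect to the input bands, sketched below for a generic PyTorch classifier; this is an assumed illustration of how such a heatmap could be computed, not the authors' method.

```python
import torch

def wavelength_saliency(model, hsi_cube, target_class):
    """Per-band impact estimate: average absolute gradient of the target-class
    score with respect to each spectral band.

    model    : classifier returning raw class scores of shape (1, n_classes)
    hsi_cube : tensor of shape (1, bands, H, W) for one field of view
    returns  : tensor of shape (bands,) usable as a wavelength heatmap
    """
    hsi_cube = hsi_cube.clone().requires_grad_(True)
    score = model(hsi_cube)[0, target_class]       # scalar score to explain
    score.backward()
    return hsi_cube.grad.abs().mean(dim=(2, 3)).squeeze(0)
```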