Title: Learning Convolutional Neural Networks from Ordered Features of Generic Data
Convolutional neural networks (CNNs) have become very popular for computer vision, text, and sequence tasks. CNNs have the advantage of being able to learn local patterns through convolution filters. However, generic datasets do not have meaningful local correlations, because their features are assumed to be independent of each other. In this paper, we propose an approach that reorders the features of a generic dataset to create feature correlations, allowing a CNN to learn feature representations, and uses the learned features as inputs to improve traditional machine learning classifiers. Our experiments on benchmark data show increased performance and illustrate the benefits of using CNNs for generic datasets.
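The abstract above only outlines the idea, so here is a minimal sketch of the kind of preprocessing it implies: features of a tabular dataset are reordered so that correlated features sit next to each other, and each instance is then reshaped into a small matrix that a CNN could consume. The greedy ordering and the helper names (`greedy_feature_order`, `to_synthetic_images`) are illustrative assumptions, not the paper's actual optimization.

```python
# A minimal sketch (not the authors' exact method): greedily order features so
# that highly correlated ones end up adjacent, then reshape each instance into
# a square matrix that a CNN can consume.
import numpy as np

def greedy_feature_order(X):
    """Return a feature permutation that keeps strongly correlated features adjacent."""
    corr = np.abs(np.corrcoef(X, rowvar=False))   # |Pearson| correlation between features
    np.fill_diagonal(corr, -1.0)                  # ignore self-correlation
    order = [0]                                   # start from feature 0 (arbitrary seed)
    remaining = set(range(1, corr.shape[0]))
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda j: corr[last, j])  # most correlated with last placed feature
        order.append(nxt)
        remaining.remove(nxt)
    return np.array(order)

def to_synthetic_images(X, order):
    """Reorder features and reshape each instance into a (side x side) matrix."""
    n_features = X.shape[1]
    side = int(np.ceil(np.sqrt(n_features)))
    padded = np.zeros((X.shape[0], side * side))
    padded[:, :n_features] = X[:, order]          # zero-pad when n_features is not a perfect square
    return padded.reshape(-1, side, side)

# Example: 200 instances with 25 generic (tabular) features
X = np.random.rand(200, 25)
images = to_synthetic_images(X, greedy_feature_order(X))
print(images.shape)   # (200, 5, 5) -- ready to be fed to a small CNN
```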
Award ID(s):
1828181
NSF-PAR ID:
10122919
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
The 17th IEEE International Conference on Machine Learning and Applications (ICMLA)
Page Range / eLocation ID:
897 to 900
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. A Convolutional Neural Network (CNN) uses convolutional layers to exploit spatial/temporal adjacency when constructing new feature representations. CNNs are therefore commonly used for data with strong temporal/spatial correlations, but they cannot be directly applied to generic learning tasks. In this paper, we propose to enable CNNs to learn from generic data to improve classification accuracy. To take full advantage of a CNN's feature-learning power, we propose to convert each instance of the original dataset into a synthetic matrix/image format. To maximize the correlation in the constructed matrix/image, we use 0/1 optimization to reorder features and ensure that the ones with strong correlations are adjacent to each other. By using a feature reordering matrix, we are able to create a synthetic image to represent each instance. Because the constructed synthetic image preserves the original feature values and correlations, a CNN can be applied to learn effective features for classification. Experiments and comparisons on 22 benchmark datasets demonstrate a clear performance gain from applying CNNs to generic datasets, compared to conventional machine learning methods. Furthermore, our method consistently outperforms approaches that directly apply CNNs to generic datasets in naive ways. This research allows deep learning to be broadly applied to generic datasets.
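As a follow-up to the abstract above, here is a minimal sketch of the classification stage, assuming the synthetic images from a reordering step are already available. The small PyTorch CNN below is an illustrative architecture of my own choosing, not the network used in the paper.

```python
# A minimal sketch: train a small CNN on the synthetic images built from reordered features.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # tolerates any synthetic-image side length
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (N, 1, side, side)
        return self.classifier(self.features(x).flatten(1))

# Toy training step on stand-ins for the reordered instances and their labels
model = SmallCNN(n_classes=2)
x = torch.randn(64, 1, 5, 5)               # e.g. 25 reordered features per instance
y = torch.randint(0, 2, (64,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```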
  2. This paper proposes to enable deep learning for generic machine learning tasks. Our goal is to allow deep learning to be applied to data that are already represented in instance-feature tabular format, for better classification accuracy. Because deep learning relies on spatial/temporal correlation to learn new feature representations, our theme is to convert each instance of the original dataset into a synthetic matrix format to take full advantage of the feature-learning power of deep learning methods. To maximize the correlation of the matrix, we use 0/1 optimization to reorder features such that the ones with strong correlations are adjacent to each other. By using a two-dimensional feature reordering, we are able to create a synthetic matrix, as an image, to represent each instance. Because the synthetic image preserves the original feature values and data correlations, existing deep learning algorithms, such as convolutional neural networks (CNNs), can be applied to learn effective features for classification. Our experiments on 20 generic datasets, using a CNN as the deep learning classifier, confirm that enabling deep learning on generic datasets yields a clear performance gain compared to generic machine learning methods. In addition, the proposed method consistently outperforms simple baselines that apply a CNN directly to generic datasets. As a result, our research allows deep learning to be broadly applied to generic datasets for learning and classification.
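To make the "maximize the correlation of the matrix" step above concrete, the sketch below scores a candidate two-dimensional placement by the total absolute correlation between features in adjacent cells. This scoring function is a simplified stand-in for the objective of the paper's 0/1 optimization, and `adjacency_correlation` is a hypothetical helper name.

```python
# A minimal sketch of the objective a 2-D feature reordering tries to maximize:
# total |correlation| between features placed in horizontally or vertically
# adjacent cells of the synthetic matrix.
import numpy as np

def adjacency_correlation(corr, placement):
    """placement: (side, side) array of feature indices; corr: |correlation| matrix."""
    side = placement.shape[0]
    total = 0.0
    for i in range(side):
        for j in range(side):
            if j + 1 < side:   # horizontal neighbor
                total += corr[placement[i, j], placement[i, j + 1]]
            if i + 1 < side:   # vertical neighbor
                total += corr[placement[i, j], placement[i + 1, j]]
    return total

# Example: compare an identity placement against a random placement for 16 features
X = np.random.rand(300, 16)
corr = np.abs(np.corrcoef(X, rowvar=False))
identity = np.arange(16).reshape(4, 4)
shuffled = np.random.permutation(16).reshape(4, 4)
print(adjacency_correlation(corr, identity), adjacency_correlation(corr, shuffled))
```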
  3. Flooding is among the natural disasters that pose the greatest threat to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetry data acquisition platform that can quickly deliver high-resolution imagery because of its cost-effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including SVM (Support Vector Machine), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate the performance of the model on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets that contained only one hundred training samples and still produce a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas precisely from UAV images compared to traditional classifiers such as SVMs. The classification accuracies achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class were 97.52%, 97.8%, 94.20%, and 89%, respectively.
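The per-class figures quoted above come from a confusion matrix over per-pixel labels. A minimal sketch of that accounting, with toy labels rather than the study's UAV data, might look like this:

```python
# A minimal sketch: per-class accuracy (recall for the water class) computed
# from a confusion matrix over flattened per-pixel segmentation labels.
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy per-pixel ground truth and predictions (0 = non-water, 1 = water)
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1])

cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
water_accuracy = cm[1, 1] / cm[1].sum()   # fraction of true water pixels classified as water
print(cm)
print(f"water-class accuracy: {water_accuracy:.2%}")
```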
  4. We apply deep convolutional neural networks (CNNs) to estimate wave breaking type (e.g., non-breaking, spilling, plunging) from close-range monochrome infrared imagery of the surf zone. Image features are extracted using six popular CNN architectures developed for generic image feature extraction. Logistic regression on these features is then used to classify breaker type. The six CNN-based models are compared without and with augmentation, a process that creates larger training datasets using random image transformations. The simplest model performs best, achieving average classification accuracies of 89% and 93% without and with image augmentation, respectively. Without augmentation, average classification accuracies vary substantially with the CNN model. With augmentation, sensitivity to model choice is minimized. A class activation analysis reveals the relative importance of image features to a given classification. During its passage, the front face and crest of a spilling breaker are more important than the back face. For a plunging breaker, the crest and back face of the wave are most important, which suggests that CNN-based models utilize the distinctive ‘streak’ temperature patterns observed on the back face of plunging breakers for classification.
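The two-stage pipeline described above (CNN feature extraction followed by logistic regression) can be sketched as follows. ResNet-18 is used here only as a stand-in for the six architectures compared in the study, and the toy tensors replace the labelled infrared frames.

```python
# A minimal sketch: a pretrained CNN backbone extracts image features,
# and scikit-learn's logistic regression classifies breaker type.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pretrained CNN with its final classification layer removed -> feature extractor
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

def extract_features(images):
    """images: (N, 3, 224, 224) tensor of (channel-replicated) infrared frames."""
    with torch.no_grad():
        return backbone(images).numpy()        # (N, 512) feature vectors

# Toy stand-ins for labelled frames (0 = non-breaking, 1 = spilling, 2 = plunging)
train_imgs, train_labels = torch.randn(12, 3, 224, 224), [0, 1, 2] * 4
test_imgs = torch.randn(3, 3, 224, 224)

clf = LogisticRegression(max_iter=1000).fit(extract_features(train_imgs), train_labels)
print(clf.predict(extract_features(test_imgs)))
```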
  5. In the past decade, deep neural networks, and specifically convolutional neural networks (CNNs), have become a primary tool in the field of biomedical image analysis, and are used intensively in other fields such as object or face recognition. CNNs have a clear advantage in their ability to provide superior performance without requiring a full understanding of the image elements that reflect the biomedical problem at hand, and without designing specific algorithms for that task. The availability of easy-to-use libraries and their non-parametric nature make CNNs the most common solution to problems that require automatic biomedical image analysis. But while CNNs have many advantages, they also have certain downsides. The features determined by CNNs are complex and unintuitive, and therefore CNNs often work as a “black box”. Additionally, CNNs learn from any piece of information in the pixel data that can provide a discriminative signal, making it more difficult to control what the CNN actually learns. Here we follow common practices to test whether CNNs can classify biomedical image datasets, but instead of using the entire image we use only parts of the images that contain no biomedical content. The experiments show that CNNs can provide high classification accuracy even when they are trained with datasets that do not contain any biomedical information, or can be systematically biased by irrelevant information in the image data. The presence of such consistent irrelevant data is difficult to identify and can therefore lead to biased experimental results. Possible remedies include control experiments, as well as other protective practices that validate the results and avoid biased conclusions based on CNN-generated annotations.
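A minimal sketch of the kind of control experiment suggested above: classify each image using only a corner patch that should carry no biomedical content. Logistic regression on raw pixels is used here for brevity in place of a CNN; accuracy well above chance on such patches would point to a dataset-level bias rather than a real biological signal.

```python
# A minimal sketch of a control experiment: can the classes be separated using
# only an image region that contains no relevant content? High accuracy here
# suggests the dataset (not the biology) is driving the classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def corner_patches(images, size=16):
    """Take the top-left size x size patch of each image and flatten it."""
    return images[:, :size, :size].reshape(len(images), -1)

# Toy stand-in: two "classes" of 64x64 grayscale images (replace with the real dataset)
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = np.repeat([0, 1], 100)

scores = cross_val_score(LogisticRegression(max_iter=2000), corner_patches(images), labels, cv=5)
print(f"control accuracy: {scores.mean():.2%}  (chance is 50% -- much higher suggests bias)")
```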