Title: MVGCN: Multi-View Graph Convolutional Neural Network for Surface Defect Identification Using Three-Dimensional Point Cloud
Abstract Surface defect identification is a crucial task in many manufacturing systems, including automotive, aircraft, steel rolling, and precast concrete. Although image-based surface defect identification methods have been proposed, these methods usually have two limitations: images may lose partial information, such as the depths of surface defects, and their precision is vulnerable to many factors, such as inspection angle, light, color, noise, etc. Given that a three-dimensional (3D) point cloud can precisely represent the multidimensional structure of surface defects, we aim to detect and classify surface defects using 3D point clouds. This poses two major challenges: (i) the defects are often sparsely distributed over the surface, which makes their features prone to being hidden by the normal surface, and (ii) different permutations and transformations of a 3D point cloud may represent the same surface, so the proposed model needs to be permutation and transformation invariant. In this paper, a two-step surface defect identification approach is developed to investigate the defects’ patterns in 3D point cloud data. The proposed approach consists of an unsupervised method for defect detection and a multi-view deep learning model for defect classification, which can keep track of the features from both defective and non-defective regions. We prove that the proposed approach is invariant to different permutations and transformations. Two case studies are conducted for defect identification on the surfaces of a synthetic aircraft fuselage and a real precast concrete specimen, respectively. The results show that our approach achieves the best defect detection and classification accuracy compared with other benchmark methods.
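For intuition on the invariance requirement, here is a minimal sketch (not the authors' MVGCN; all layer sizes are illustrative) of the standard way a point-set feature is made permutation invariant: a shared per-point network followed by a symmetric pooling such as max, so reordering the input points cannot change the output.

```python
# Minimal permutation-invariance sketch (illustrative, not the paper's model):
# a shared MLP is applied to each point independently, then max pooling
# aggregates over points. Max is symmetric, so any reordering of the rows
# of the input yields exactly the same global feature.
import torch
import torch.nn as nn

class PermInvariantEncoder(nn.Module):
    def __init__(self, in_dim=3, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(           # shared weights across points
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, points):              # points: (N, 3)
        per_point = self.mlp(points)        # (N, feat_dim)
        return per_point.max(dim=0).values  # symmetric pooling -> (feat_dim,)

pts = torch.randn(1024, 3)
enc = PermInvariantEncoder()
perm = torch.randperm(1024)
# The global feature is identical for any permutation of the points.
assert torch.allclose(enc(pts), enc(pts[perm]))
```

Transformation invariance (e.g., to rigid motions) requires more than pooling, typically a normalization or alignment step applied before feature extraction.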
Award ID(s):
2035038
NSF-PAR ID:
10384549
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Manufacturing Science and Engineering
Volume:
145
Issue:
3
ISSN:
1087-1357
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Surface acoustic wave (SAW) sensors with increasingly unique and refined patterns are often developed using lithographic fabrication processes. Emerging applications of SAW sensors often require novel materials, which may present uncharted fabrication outcomes. The fidelity of SAW sensor performance is often correlated with the ability to limit post-fabrication defects. Therefore, it is critical to have effective means of detecting defects within the SAW sensor. However, labor-intensive manual labeling is often required, because precise identification and classification of surface features is needed for confidence in model accuracy. One approach to automating defect detection is to leverage effective machine learning techniques to analyze and quantify defects within the SAW sensor. In this paper, we propose a machine learning approach using a deep convolutional autoencoder to segment surface features semantically. The proposed deep image autoencoder takes a grayscale input image and generates a color image segmenting the defect region in red, metallic interdigital transducer (IDT) fingers in green, and the substrate region in blue. Experimental results demonstrate promising segmentation scores in locating the defects and regions of interest for a novel SAW sensor variant. The proposed method can automate the localization and measurement of post-fabrication defects at the pixel level that may be missed by error-prone visual inspection.
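A minimal sketch, under assumed layer sizes and input resolution, of the kind of convolutional encoder-decoder this abstract describes: one grayscale channel in, three channels out, with per-pixel scores readable as red = defect, green = IDT finger, blue = substrate. This illustrates the architecture class, not the authors' network.

```python
# Illustrative convolutional autoencoder for 3-class semantic segmentation
# (assumed sizes, not the paper's model): grayscale image in, per-pixel
# class scores out, one output channel per region type.
import torch
import torch.nn as nn

class SegAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):                        # x: (B, 1, H, W)
        logits = self.decoder(self.encoder(x))   # (B, 3, H, W)
        return logits.softmax(dim=1)             # per-pixel scores (R, G, B)

out = SegAutoencoder()(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```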
  2. The success of supervised learning requires large-scale ground-truth labels, which are very expensive, time-consuming, or may need special skills to annotate. To address this issue, many self-supervised or unsupervised methods have been developed. Unlike most existing self-supervised methods that learn only 2D image features or only 3D point cloud features, this paper presents a novel and effective self-supervised learning approach to jointly learn both 2D image features and 3D point cloud features by exploiting cross-modality and cross-view correspondences without using any human-annotated labels. Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network. The two types of features are fed into a two-layer fully connected neural network to estimate the cross-modality correspondence. The three networks are jointly trained (i.e., cross-modality) by verifying whether two sampled data of different modalities belong to the same object; meanwhile, the 2D convolutional neural network is additionally optimized by minimizing intra-object distance while maximizing inter-object distance of rendered images in different views (i.e., cross-view). The effectiveness of the learned 2D and 3D features is evaluated by transferring them to five different tasks: multi-view 2D shape recognition, 3D shape recognition, multi-view 2D shape retrieval, 3D shape retrieval, and 3D part segmentation. Extensive evaluations on all five tasks across different datasets demonstrate the strong generalization and effectiveness of the 2D and 3D features learned by the proposed self-supervised method.
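A minimal sketch, with assumed feature sizes, of the cross-modality correspondence head described above: an image feature and a point cloud feature (here random stand-ins for the 2D CNN and graph-network outputs) are concatenated, and a two-layer fully connected network scores whether the pair comes from the same object, trained with a binary cross-entropy loss on positive/negative pairs.

```python
# Illustrative cross-modality correspondence head (assumed sizes):
# concatenated 2D and 3D features -> two-layer FC -> same-object logit.
import torch
import torch.nn as nn

feat_2d, feat_3d = 128, 128           # illustrative embedding sizes

corr_head = nn.Sequential(             # the two-layer FC correspondence network
    nn.Linear(feat_2d + feat_3d, 256), nn.ReLU(),
    nn.Linear(256, 1),                 # logit: same object or not
)

img_feat = torch.randn(8, feat_2d)    # stand-in for 2D CNN features of rendered views
pcd_feat = torch.randn(8, feat_3d)    # stand-in for graph-network point cloud features
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = same object, 0 = different

logits = corr_head(torch.cat([img_feat, pcd_feat], dim=1))
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```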
  3. Civil infrastructure inspection in hazardous areas such as underwater beams, bridge decks, etc., is a perilous task. In addition, other factors like labor intensity, time, etc. influence the inspection of infrastructure. Recent studies [11] suggest that autonomous inspection of civil infrastructure can eliminate most of the problems stemming from manual inspection. In this paper, we address the problem of detecting cracks in concrete surfaces. Most recent crack detection techniques use deep architectures; however, efficiently finding the exact location of a crack remains a difficult problem. Therefore, a deep architecture is proposed in this paper to identify the exact location of cracks. Our architecture labels each pixel as crack or non-crack, which eliminates the need for the post-processing techniques used in the current literature [5,11]. Moreover, acquiring enough data for learning is another challenge in concrete defect detection. According to previous studies, only 10% of an image contains edge pixels (in our case, defective areas) [31]. We propose a robust data augmentation technique to alleviate the need for collecting more crack image samples. The experimental results show that our method obtains high accuracy with far fewer data samples. Our proposed method also outperforms existing methods for concrete crack classification.
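A generic sketch (not the paper's specific augmentation technique) of the key constraint when augmenting data for per-pixel crack labeling: any geometric transform must be applied identically to the image and its crack mask so the pixel labels stay aligned.

```python
# Illustrative joint image/mask augmentation for pixel-wise crack labeling:
# flips and rotations are applied to both arrays with the same parameters.
import numpy as np

def augment(image, mask, rng):
    """image: (H, W) grayscale; mask: (H, W) with 1 = crack pixel."""
    if rng.random() < 0.5:                           # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = rng.integers(0, 4)                           # random 90-degree rotation
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = rng.random((128, 128))
msk = (rng.random((128, 128)) > 0.9).astype(np.uint8)  # sparse crack pixels
aug_img, aug_msk = augment(img, msk, rng)
assert aug_img.shape == aug_msk.shape == (128, 128)
```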
  4. To alleviate the cost of collecting and annotating large-scale "3D object" point cloud data, we propose an unsupervised learning approach to learn features from an unlabeled point cloud dataset by using part contrasting and object clustering with deep graph convolutional neural networks (GCNNs). In the contrast learning step, all the samples in the 3D object dataset are cut into two parts and put into a "part" dataset. Then a contrast learning GCNN (ContrastNet) is trained to verify whether two randomly sampled parts from the part dataset belong to the same object. In the cluster learning step, the trained ContrastNet is applied to all the samples in the original 3D object dataset to extract features, which are used to group the samples into clusters. Then another GCNN for clustering learning (ClusterNet) is trained from the original 3D data to predict the cluster IDs of all the training samples. The contrast learning forces the ContrastNet to learn semantic features of objects, while the ClusterNet improves the quality of the learned features by being trained to discover objects that belong to the same semantic categories using cluster IDs. We have conducted extensive experiments to evaluate the proposed framework on point cloud classification tasks. The proposed unsupervised learning approach obtains performance comparable to state-of-the-art methods that rely on heavier shape auto-encoding for unsupervised feature extraction. We have also tested the networks on object recognition using partial 3D data, by simulating occlusions and perspective views, and obtained practically useful results. The code of this work is publicly available at: https://github.com/lingzhang1/ContrastNet.
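A minimal sketch, with illustrative details, of the part-contrasting data preparation this abstract describes: each object's points are cut into two parts (here by a random plane through the centroid, one plausible choice), and training pairs are labeled by whether the two parts came from the same object. The GCNN itself is omitted.

```python
# Illustrative part-contrast pair construction (details assumed):
# split each point cloud into two parts, then form positive pairs
# (parts of the same object) and negative pairs (parts of different objects).
import numpy as np

def cut_into_parts(points, rng):
    """Split an (N, 3) point cloud into two parts by a random plane."""
    normal = rng.normal(size=3)
    side = (points - points.mean(0)) @ normal > 0
    return points[side], points[~side]

rng = np.random.default_rng(0)
objects = [rng.normal(size=(1024, 3)) for _ in range(4)]   # toy "objects"
parts = [p for obj in objects for p in cut_into_parts(obj, rng)]

# Positive pair: two parts of the same object; negative: parts of different objects.
pos_pair = (parts[0], parts[1], 1)
neg_pair = (parts[0], parts[2], 0)
```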
  5. To alleviate the cost of collecting and annotating large-scale point cloud datasets for 3D scene understanding tasks, we propose an unsupervised learning approach to learn features from an unlabeled point cloud "3D object" dataset by using part contrasting and object clustering with deep graph neural networks (GNNs). In the contrast learning step, all the samples in the 3D object dataset are cut into two parts and put into a "part" dataset. Then a contrast learning GNN (ContrastNet) is trained to verify whether two randomly sampled parts from the part dataset belong to the same object. In the cluster learning step, the trained ContrastNet is applied to all the samples in the original 3D object dataset to extract features, which are used to group the samples into clusters. Then another GNN for clustering learning (ClusterNet) is trained to predict the cluster IDs of all the training samples. The contrast learning forces the ContrastNet to learn high-level semantic features of objects but probably ignores low-level features, while the ClusterNet improves the quality of the learned features by being trained to discover objects that belong to the same semantic categories using cluster IDs. We have conducted extensive experiments to evaluate the proposed framework on point cloud classification tasks. The proposed unsupervised learning approach obtained performance comparable to state-of-the-art unsupervised learning methods that used much more complicated network structures. The code and an extended version of this work are publicly available at: https://github.com/lingzhang1/ContrastNet