

This content will become publicly available on February 1, 2025

Title: Automated Evaluation and Rating of Product Repairability Using Artificial Intelligence-Based Approaches
Abstract

Despite the importance of product repairability, current methods for assessing and grading repairability are limited, which hampers the efforts of designers, remanufacturers, original equipment manufacturers (OEMs), and repair shops. To improve the efficiency of assessing product repairability, this study introduces two artificial intelligence (AI)-based approaches. The first approach is a supervised learning framework that utilizes object detection on product teardown images to measure repairability. Transfer learning is employed with machine learning architectures such as ConvNeXt, GoogLeNet, ResNet50, and VGG16 to evaluate repairability scores. The second approach is an unsupervised learning framework that combines feature extraction and cluster learning to identify product design features and group devices with similar designs. It uses the Oriented FAST and Rotated BRIEF (ORB) feature extractor together with k-means clustering to extract features from teardown images and categorize products with similar designs. To demonstrate the application of these assessment approaches, smartphones are used as a case study. The results highlight the potential of artificial intelligence in developing an automated system for assessing and rating product repairability.
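The unsupervised pipeline described above (ORB features grouped by k-means) can be sketched as a bag-of-visual-words comparison. The sketch below is illustrative only: random arrays stand in for the 32-byte descriptors that OpenCV's `ORB_create`/`detectAndCompute` would extract from real teardown images, and a hand-rolled k-means stands in for a library implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; stands in for a library implementation such as
    scikit-learn's KMeans."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def design_signature(descriptors, centers):
    """Bag-of-visual-words histogram: how often each descriptor cluster
    appears in one teardown image."""
    labels = np.argmin(((descriptors[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Illustrative stand-in for ORB output: each teardown image yields a set of
# 32-byte descriptors (random here; a real pipeline would obtain them from
# OpenCV's orb.detectAndCompute on the teardown photos).
rng = np.random.default_rng(1)
per_image = [rng.integers(0, 256, size=(150, 32)).astype(float) for _ in range(4)]
centers = kmeans(np.vstack(per_image), k=8)
signatures = np.array([design_signature(d, centers) for d in per_image])
```

Products whose signatures are close (for instance under cosine or Euclidean distance) would then be grouped as having similar designs.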

 
Award ID(s):
2026276
PAR ID:
10543995
Author(s) / Creator(s):
; ;
Publisher / Repository:
ASME
Date Published:
Journal Name:
Journal of Manufacturing Science and Engineering
Volume:
146
Issue:
2
ISSN:
1087-1357
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Inferences of adaptive events are important for learning about traits, such as human digestion of lactose after infancy and the rapid spread of viral variants. Early efforts toward identifying footprints of natural selection from genomic data involved development of summary statistic and likelihood methods. However, such techniques are grounded in simple patterns or theoretical models that limit the complexity of settings they can explore. Due to the renaissance in artificial intelligence, machine learning methods have taken center stage in recent efforts to detect natural selection, with strategies such as convolutional neural networks applied to images of haplotypes. Yet, limitations of such techniques include estimation of large numbers of model parameters under nonconvex settings and feature identification without regard to location within an image. An alternative approach is to use tensor decomposition to extract features from multidimensional data while preserving the latent structure of the data, and to feed these features to machine learning models. Here, we adopt this framework and present a novel approach termed T-REx, which extracts features from images of haplotypes across sampled individuals using tensor decomposition, and then makes predictions from these features using classical machine learning methods. As a proof of concept, we explore the performance of T-REx on simulated neutral and selective sweep scenarios and find that it has high power and accuracy to discriminate sweeps from neutrality, robustness to common technical hurdles, and easy visualization of feature importance. Therefore, T-REx is a powerful addition to the toolkit for detecting adaptive processes from genomic data.
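The abstract above does not pin down a specific decomposition, so the sketch below uses a Tucker/HOSVD-style projection (SVD of the mode unfoldings) as one concrete instance of extracting low-rank features from a stack of haplotype images; the array shapes and rank are illustrative assumptions, not T-REx's actual configuration.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_features(T, rank):
    """Tucker/HOSVD-style feature extraction: project each sample's
    (sites x haplotypes) image onto the leading singular directions of the
    site and haplotype unfoldings, then flatten the small core tensor."""
    U_sites = np.linalg.svd(unfold(T, 1), full_matrices=False)[0][:, :rank]
    U_haps = np.linalg.svd(unfold(T, 2), full_matrices=False)[0][:, :rank]
    core = np.einsum('nsh,sr,hq->nrq', T, U_sites, U_haps)
    return core.reshape(T.shape[0], -1)

# Illustrative tensor: 20 simulated samples, 40 sites, 30 haplotypes.
rng = np.random.default_rng(0)
T = rng.normal(size=(20, 40, 30))
F = hosvd_features(T, rank=3)   # 20 x 9 feature matrix for a classifier
```

The resulting low-dimensional feature matrix `F` is what would then be fed to a classical classifier (e.g., logistic regression or random forests) to call sweep versus neutral.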

     
  2. Abstract

    Machine learning allows “the machine” to deduce the complex and sometimes unrecognized rules governing spatial systems, particularly topographic mapping, by exposing it to the end product. Often, the obstacle to this approach is the acquisition of many good and labeled training examples of the desired result. Such is the case with most types of natural features. To address such limitations, this research introduces GeoNat v1.0, a natural feature dataset, used to support artificial intelligence‐based mapping and automated detection of natural features under a supervised learning paradigm. The dataset was created by randomly selecting points from the U.S. Geological Survey’s Geographic Names Information System and includes approximately 200 examples each of 10 classes of natural features. Resulting data were tested in an object‐detection problem using a region‐based convolutional neural network. The object‐detection tests yielded a baseline mean average precision of 62%. Major challenges in developing training data in the geospatial domain, such as scale and geographical representativeness, are addressed in this article. We hope that the resulting dataset will be useful for a variety of applications and shed light on training data collection and labeling in the geospatial artificial intelligence domain.
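The mean average precision reported above is computed per class from a precision-recall curve over score-ranked detections. A minimal single-class sketch of that metric (IoU matching and stepwise area accumulation) follows; the greedy matching rule and the 0.5 IoU threshold are common simplifying conventions, not details taken from the article.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, thr=0.5):
    """Single-class AP: score-sorted predictions are greedily matched to
    unmatched ground-truth boxes at the IoU threshold, and the area under
    the resulting precision-recall curve is accumulated stepwise."""
    preds = sorted(preds, key=lambda p: -p[0])
    matched = [False] * len(gts)
    tp = fp = 0
    ap = prev_recall = 0.0
    for score, box in preds:
        hit = next((i for i, g in enumerate(gts)
                    if not matched[i] and iou(box, g) >= thr), None)
        if hit is not None:
            matched[hit] = True
            tp += 1
        else:
            fp += 1
        recall = tp / len(gts)
        ap += (tp / (tp + fp)) * (recall - prev_recall)
        prev_recall = recall
    return ap

# Two ground-truth boxes, both found exactly: AP is 1.0 for this class.
ap = average_precision(
    preds=[(0.9, [0, 0, 10, 10]), (0.8, [20, 20, 30, 30])],
    gts=[[0, 0, 10, 10], [20, 20, 30, 30]])
```

Mean average precision is then the mean of this per-class AP across the ten natural-feature classes.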

     
  3. Taking multiple incompatible drugs together may cause adverse interactions and side effects on the body. Accurate prediction of drug-drug interaction (DDI) events is essential for avoiding this issue. Recently, various artificial intelligence-based approaches have been proposed for predicting DDI events. However, DDI events are associated with complex relationships and mechanisms among drugs, targets, enzymes, transporters, molecular structures, etc. Existing approaches either partially or loosely consider these relationships and mechanisms by a non-end-to-end learning framework, resulting in sub-optimal feature extractions and fusions for prediction. Different from them, this paper proposes a Multimodal Knowledge Graph Fused End-to-end Neural Network (MKG-FENN) that consists of two main parts: multimodal knowledge graph (MKG) and fused end-to-end neural network (FENN). First, MKG is constructed by comprehensively exploiting DDI events-associated relationships and mechanisms from four knowledge graphs covering drug-chemical entities, drug substructures, drug-drug relations, and molecular structures. Correspondingly, a four-channel graph neural network is designed to extract high-order and semantic features from MKG. Second, FENN designs a multi-layer perceptron to fuse the extracted features by end-to-end learning. With such designs, the feature extractions and fusions of DDI events are guaranteed to be comprehensive and optimal for prediction. Through extensive experiments on real drug datasets, we demonstrate that MKG-FENN exhibits high accuracy and significantly outperforms state-of-the-art models in predicting DDI events. The source code and supplementary file of this article are available at: https://github.com/wudi1989/MKG-FENN.
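The two-stage idea in the abstract (per-relation graph channels whose embeddings are fused by a perceptron) can be illustrated with a toy NumPy sketch. This is not the authors' architecture: random adjacency matrices stand in for the four knowledge graphs, a single graph-convolution layer stands in for each channel, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_drugs, d = 6, 4

def gcn_layer(A, H, W):
    """One graph-convolution step: self-loops, degree-normalized
    neighbourhood averaging, a linear map, then ReLU."""
    A_hat = A + np.eye(len(A))
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

def mlp_score(x, W1, b1, W2, b2):
    """Two-layer perceptron mapping a drug-pair feature vector to an
    interaction probability via a sigmoid."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# Four random adjacency matrices stand in for the drug-chemical entity,
# drug-substructure, drug-drug, and molecular-structure graphs.
graphs = [(rng.random((n_drugs, n_drugs)) > 0.5).astype(float) for _ in range(4)]
H0 = rng.normal(size=(n_drugs, d))          # initial drug features
Ws = [rng.normal(size=(d, d)) for _ in graphs]
# One embedding per channel, concatenated into the fused representation.
H = np.concatenate([gcn_layer(A, H0, W) for A, W in zip(graphs, Ws)], axis=1)

pair = np.concatenate([H[0], H[1]])         # features for a candidate pair
W1, b1 = rng.normal(size=(pair.size, 8)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0
p = mlp_score(pair, W1, b1, W2, b2)         # predicted interaction probability
```

In the actual end-to-end framework the channel networks and the fusion perceptron would be trained jointly against observed DDI events rather than using fixed random weights.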

     
  4. Abstract

    Even highly motivated undergraduates drift off their STEM career pathways. In large introductory STEM classes, instructors struggle to identify and support these students. To address these issues, we developed co‐redesign methods in partnership with disciplinary experts to create high‐structure STEM courses that better support students and produce informative digital event data. To those data, we applied theory‐ and context‐relevant labels to reflect active and self‐regulated learning processes involving LMS‐hosted course materials, formative assessments, and help‐seeking tools. We illustrate the predictive benefits of this process across two cycles of model creation and reapplication. In cycle 1, we used theory‐relevant features from 3 weeks of data to inform a prediction model that accurately identified struggling students and sustained its accuracy when reapplied in future semesters. In cycle 2, we refit a model with temporally contextualized features that achieved superior accuracy using data from just two class meetings. This modelling approach can produce durable learning analytics solutions that afford scaled and sustained prediction and intervention opportunities that involve explainable artificial intelligence products. Those same products that inform prediction can also guide intervention approaches and inform future instructional design and delivery.

    Practitioner notes

    What is already known about this topic

    Learning analytics includes an evolving collection of methods for tracing and understanding student learning through their engagements with learning technologies.

    Prediction models based on demographic data can perpetuate systemic biases.

    Prediction models based on behavioural event data can produce accurate predictions of academic success, and validation efforts can enrich those data to reflect students' self‐regulated learning processes within learning tasks.

    What this paper adds

    Learning analytics can be successfully applied to predict performance in an authentic postsecondary STEM context, and the use of context and theory as guides for feature engineering can ensure sustained predictive accuracy upon reapplication.

    The consistent types of learning resources and cyclical nature of their provisioning from lesson to lesson are hallmarks of high‐structure active learning designs that are known to benefit learners. These designs also provide opportunities for observing and modelling contextually grounded, theory‐aligned and temporally positioned learning events that informed prediction models that accurately classified students upon initial and later reapplications in subsequent semesters.

    Co‐design relationships where researchers and instructors work together toward pedagogical implementation and course instrumentation are essential to developing unique insights for feature engineering and producing explainable artificial intelligence approaches to predictive modelling.

    Implications for practice and/or policy

    High‐structure course designs can scaffold student engagement with course materials to make learning more effective and products of feature engineering more explainable.

    Learning analytics initiatives can avoid perpetuation of systemic biases when methods prioritize theory‐informed behavioural data that reflect learning processes, sensitivity to instructional context and development of explainable predictors of success rather than relying on students' demographic characteristics as predictors.

    Prioritizing behaviours as predictors improves explainability in ways that can inform the redesign of courses and design of learning supports, which further informs the refinement of learning theories and their applications.
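    The modelling cycle described above — behavioural event features feeding an explainable success predictor — can be sketched minimally. The three feature names below are hypothetical stand-ins for the theory-aligned features the authors describe, the data are synthetic, and plain logistic regression stands in for whatever classifier the study actually used; its coefficients are what makes such a model inspectable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students = 200

# Hypothetical theory-aligned event counts over the first weeks of a course:
# [material views, formative-assessment attempts, help-seeking clicks].
X = rng.poisson(lam=[5.0, 3.0, 1.0], size=(n_students, 3)).astype(float)
# Toy ground truth: engagement raises the odds of course success.
true_logit = 0.4 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] - 4.0
y = (rng.random(n_students) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

def fit_logreg(X, y, lr=0.05, steps=2000):
    """Plain logistic regression by gradient descent; any standard
    classifier could fill this role in the modelling cycle."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

Xs = (X - X.mean(0)) / X.std(0)                 # standardize features
w = fit_logreg(Xs, y)
p_hat = 1.0 / (1.0 + np.exp(-np.hstack([Xs, np.ones((len(Xs), 1))]) @ w))
accuracy = float(((p_hat > 0.5) == (y > 0.5)).mean())
```

Because the predictors are behavioural counts rather than demographics, the fitted weights `w` directly indicate which engagement behaviours drive the prediction, which is the explainability property the abstract emphasizes.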

     
  5. Abstract

    An instantaneous and precise coating inspection method is imperative to mitigate the risk of flaws, defects, and discrepancies on coated surfaces. While many studies have demonstrated the effectiveness of automated visual inspection (AVI) approaches enhanced by computer vision and deep learning, critical challenges exist for practical applications in the manufacturing domain. Computer vision has proven to be inflexible, demanding sophisticated algorithms for diverse feature extraction. In deep learning, supervised approaches are constrained by the need for annotated datasets, whereas unsupervised methods often result in lower performance. Addressing these challenges, this paper proposes a novel deep learning-based AVI framework designed to minimize the necessity for extensive feature engineering, programming, and manual data annotation in classifying fuel injection nozzles and discerning their coating interfaces from scratch. This proposed framework comprises six integral components. It begins by distinguishing between coated and uncoated nozzles through gray level co-occurrence matrix (GLCM)-based texture analysis and autoencoder (AE)-based classification. This is followed by cropping surface images from uncoated nozzles, and then building an AE model to estimate the coating interface locations on coated nozzles. The next step involves generating autonomously annotated datasets derived from these estimated coating interface locations. Subsequently, a convolutional neural network (CNN)-based detection model is trained to accurately localize the coating interface locations. The final component focuses on enhancing model performance and trustworthiness. This framework demonstrated over 95% accuracy in pinpointing the coating interfaces within an error range of ±6 pixels and processed at a rate of 7.18 images per second. Additionally, explainable artificial intelligence (XAI) techniques such as t-distributed stochastic neighbor embedding (t-SNE) and integrated gradients substantiated the reliability of the models.
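The GLCM texture step named in the abstract can be sketched with pure NumPy as a stand-in for scikit-image's `graycomatrix`/`graycoprops`. The horizontal offset, the quantization to eight gray levels, and the contrast/energy statistics are common conventions assumed for illustration, not details taken from the paper.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Horizontal-offset gray-level co-occurrence matrix plus the contrast
    and energy statistics often used for texture classification."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    M = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        M[a, b] += 1.0                          # count horizontal neighbours
    M /= M.sum()                                # normalize to probabilities
    i, j = np.indices(M.shape)
    contrast = float(((i - j) ** 2 * M).sum())  # high for rough textures
    energy = float((M ** 2).sum())              # high for uniform textures
    return contrast, energy

# A smooth gradient (coating-like) versus pixel noise (rough surface):
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = rng.random((32, 32))
c_smooth, e_smooth = glcm_features(smooth)
c_noisy, e_noisy = glcm_features(noisy)
```

Statistics like these would form the input on which the framework's autoencoder-based classifier separates coated from uncoated nozzles.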

     