


Award ID: 1846862


  1. Abstract

    In this work, a novel multifidelity machine learning (ML) algorithm, the gradient-enhanced multifidelity neural networks (GEMFNN) algorithm, is proposed. It is a multifidelity extension of the gradient-enhanced neural networks (GENN) algorithm, using both function and gradient information available at multiple levels of fidelity to make function approximations; its construction is similar to that of the multifidelity neural networks (MFNN) algorithm. The proposed algorithm is tested on three analytical functions: a one-, a two-, and a 20-variable function. Its performance is compared against neural networks (NN), GENN, and MFNN in terms of the number of samples required to reach a global accuracy of 0.99 in the coefficient of determination (R²). GEMFNN required 18, 120, and 600 high-fidelity samples for the one-, two-, and 20-dimensional cases, respectively, to meet the target accuracy. NN performed best on the one-variable case, requiring only ten samples, while GENN worked best on the two-variable case, requiring 120 samples. GEMFNN worked best on the 20-variable case, requiring nearly eight times fewer samples than its nearest competitor, GENN; for this case, NN and MFNN did not reach the target global accuracy even after using 10,000 high-fidelity samples. This work demonstrates the benefit of using both gradient and multifidelity information in NNs for high-dimensional problems.

     
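The two ingredients of GEMFNN described above can be sketched in one dimension: a cheap low-fidelity model captures the trend, and the scarce high-fidelity data are fitted using both function values and gradients. In the sketch below, polynomial least squares stands in for the paper's neural networks, and the Forrester functions are a common multifidelity benchmark assumed for illustration, not necessarily the test functions used in the work.

```python
import numpy as np

def f_hf(x):   # expensive high-fidelity function (Forrester benchmark)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def df_hf(x):  # its analytic gradient (assumed available, as in GEMFNN)
    return (12 * (6 * x - 2) * np.sin(12 * x - 4)
            + 12 * (6 * x - 2) ** 2 * np.cos(12 * x - 4))

def f_lf(x):   # cheap low-fidelity approximation
    return 0.5 * f_hf(x) + 10 * (x - 0.5) - 5

# Step 1: fit a low-fidelity trend model on plentiful LF samples
# (a degree-12 polynomial replaces the LF sub-network here).
x_lf = np.linspace(0.0, 1.0, 40)
lf_model = np.polynomial.Polynomial.fit(x_lf, f_lf(x_lf), 12)
lf_grad = lf_model.deriv()

# Step 2: learn y_HF(x) ~= rho * y_LF(x) + delta(x) from only four HF
# samples, stacking value AND gradient residuals into one least-squares
# system -- the joint value+gradient fit is the "gradient-enhanced" part.
x_hf = np.array([0.0, 0.3, 0.6, 1.0])
powers = np.arange(3, -1, -1)                       # cubic correction delta(x)
V = x_hf[:, None] ** powers                         # value basis
dV = powers * x_hf[:, None] ** np.maximum(powers - 1, 0)  # gradient basis
A = np.vstack([np.column_stack([lf_model(x_hf), V]),
               np.column_stack([lf_grad(x_hf), dV])])
b = np.concatenate([f_hf(x_hf), df_hf(x_hf)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
rho, c = coef[0], coef[1:]

def mf_predict(x):
    return rho * lf_model(x) + (x[:, None] ** powers) @ c

# Global accuracy check with the same R^2 metric used in the paper.
xt = np.linspace(0.0, 1.0, 200)
yt = f_hf(xt)
r2 = 1 - np.sum((mf_predict(xt) - yt) ** 2) / np.sum((yt - yt.mean()) ** 2)
```

With only four high-fidelity samples the multifidelity, gradient-enhanced fit recovers the high-fidelity function to well above the paper's R² = 0.99 target on this toy problem, which is the behavior the abstract reports at much larger scale.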
  2. Abstract

    This paper demonstrates the use of polynomial chaos-based Cokriging (PC-Cokriging) on several simulation-based problems: an analytical borehole function, an ultrasonic testing (UT) case, and the robust design optimization of an airfoil. The metamodel is compared to Kriging, polynomial chaos expansion (PCE), polynomial chaos-based Kriging (PC-Kriging), and Cokriging. PC-Cokriging is a multivariate variant of PC-Kriging, and its construction is similar to that of Cokriging. For the borehole function, PC-Cokriging requires only three high-fidelity samples to accurately approximate the function globally; for the UT case, it requires 20 points. A sensitivity analysis performed for the UT case shows that the F-number has a negligible effect on the output response. For the robust design case, reductions of 75 and 31 drag counts are reported in the mean and standard deviation of the drag coefficient, respectively, compared to the baseline shape.

     
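The analytical borehole benchmark mentioned above is a standard eight-variable test function for multifidelity surrogates. A sketch of its usual form is given below, together with a commonly used low-fidelity variant; both formulas are assumptions based on the standard benchmark definitions, and the paper may use slightly different variants or input ranges.

```python
import numpy as np

def borehole_hf(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """High-fidelity water flow rate through a borehole (standard form)."""
    log_r = np.log(r / rw)
    return (2 * np.pi * Tu * (Hu - Hl)
            / (log_r * (1 + 2 * L * Tu / (log_r * rw**2 * Kw) + Tu / Tl)))

def borehole_lf(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """A widely used cheap low-fidelity variant (assumed form)."""
    log_r = np.log(r / rw)
    return (5 * Tu * (Hu - Hl)
            / (log_r * (1.5 + 2 * L * Tu / (log_r * rw**2 * Kw) + Tu / Tl)))

# Evaluate both fidelities at the centre of the usual input ranges
# (e.g. rw in [0.05, 0.15], r in [100, 50000], ...).
mid = dict(rw=0.10, r=25050.0, Tu=89335.0, Hu=1050.0,
           Tl=89.55, Hl=760.0, L=1400.0, Kw=10950.0)
q_hf = borehole_hf(**mid)
q_lf = borehole_lf(**mid)
```

In a Cokriging-style model such as PC-Cokriging, many cheap evaluations of the low-fidelity function are combined with a handful of high-fidelity ones, which is how the abstract's figure of only three high-fidelity samples becomes plausible.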
  3. Abstract

    The objective of this work is to reduce the cost of performing model-based sensitivity analysis for ultrasonic nondestructive testing systems by replacing the accurate physics-based model with machine learning (ML) algorithms, so that Sobol' indices can be computed quickly. The ML algorithms considered are neural networks (NNs), convolutional NNs (CNNs), and deep Gaussian processes (DGPs). Their performance is measured by the root-mean-squared error on a fixed set of testing points and by the number of high-fidelity samples required to reach a target accuracy. The algorithms are compared on three ultrasonic testing benchmark cases, each with three uncertain parameters: a spherical-void defect under a focused transducer, the same defect under a planar transducer, and a spherical-inclusion defect under a focused transducer. The results show that NNs required 35, 100, and 35 samples for the three cases, respectively; CNNs required 35, 100, and 56; and DGPs required 84, 84, and 56.
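The surrogate-based workflow described above can be sketched end to end: once a cheap ML surrogate replaces the physics model, first-order Sobol' indices are estimated by Monte Carlo with a Saltelli-style pick-freeze scheme. In the sketch below the analytic Ishigami function stands in for a trained surrogate, an assumption made so the example is self-contained; the paper's surrogates are NNs, CNNs, and DGPs trained on ultrasonic testing models.

```python
import numpy as np

def surrogate(x):
    """Ishigami test function on [-pi, pi]^3, standing in for a trained
    ML surrogate of the physics-based model (illustrative assumption)."""
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
n, d = 1 << 14, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample blocks
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = surrogate(A), surrogate(B)
var = np.concatenate([fA, fB]).var()     # total output variance

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # A with column i taken from B
    # Saltelli (2010) first-order Sobol' index estimator.
    S[i] = np.mean(fB * (surrogate(ABi) - fA)) / var
```

Because every index needs n * (d + 2) model evaluations, running this directly on an accurate physics simulation is prohibitive; evaluating a surrogate instead is exactly the cost reduction the abstract describes.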