

Title: Memetic Optimizer for Structural Damage Identification Using Electromechanical Admittance
Abstract

Electromechanical impedance (EMI)-based techniques using piezoelectric transducers are promising for structural damage identification. They can be implemented in the high-frequency range, where small characteristic wavelengths lead to high detection sensitivity. The measured impedance is the outcome of harmonic, stationary excitation, which makes it easier to conduct inverse analysis for damage localization and quantification. Nevertheless, EMI measurement points are usually limited, which oftentimes results in an under-determined problem. To address this issue, the damage identification process can be cast as a multi-objective optimization formulation that naturally yields multiple solutions. While this setup fits the nature of damage identification, in which a number of possibilities may exist under the given observations/measurements, existing algorithms may suffer from premature convergence and entrapment in local optima. Consequently, the solutions found may not cover the true damage scenario. To tackle these challenges, in this research a series of local search strategies is tailored to enhance the global search ability and incorporated into particle swarm-based optimization. A Q-table is utilized to help the algorithm select the proper local search strategy based on the maximum Q-table value. Case studies are carried out for verification, and the results show that the proposed memetic algorithm achieves good performance in damage identification.
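As a rough illustration of the Q-table-guided strategy selection described above, the sketch below pairs an epsilon-greedy, maximum-Q-value choice of local search strategy with a standard Q-learning update. All state, action, and parameter names are hypothetical, not taken from the paper:

```python
import random

# Hypothetical sketch: Q-learning-guided choice among local search
# strategies in a memetic algorithm. States are coarse search phases,
# actions are local search strategies; all names are illustrative.

def select_strategy(q_table, state, strategies, epsilon=0.1):
    """Pick the strategy with the maximum Q-value (epsilon-greedy)."""
    if random.random() < epsilon:
        return random.choice(strategies)
    return max(strategies, key=lambda s: q_table.get((state, s), 0.0))

def update_q(q_table, state, action, reward, next_state, strategies,
             alpha=0.5, gamma=0.9):
    """Standard Q-learning update toward reward + discounted best next Q."""
    best_next = max(q_table.get((next_state, s), 0.0) for s in strategies)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In this reading, a reward could be the fitness improvement a strategy delivered, so strategies that repeatedly help accumulate larger Q-values and are chosen more often.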

 
Award ID(s):
1825324
NSF-PAR ID:
10395149
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the ASME 2022 International Design Engineering Technical Conferences and Computers & Information in Engineering Conference
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

Impedance-based structural health monitoring (SHM) is recognized as a non-intrusive, highly sensitive, and model-independent SHM solution that is readily applicable to complex structures. This SHM method relies on analyzing the electromechanical impedance (EMI) signature of the structure under test over the time span of its operation. Changes in the EMI signature, compared to a baseline measured at the healthy state of the structure, often indicate damage. This method has successfully been applied to assess the integrity of numerous civil, aerospace, and mechanical components and structures. However, EMI sensitivity to environmental conditions, temperature in particular, has been an ongoing challenge to the wide adoption of this method. Temperature-induced variation in EMI signatures can be misinterpreted as damage, leading to false positives, or may overshadow the effects of incipient damage in the structure.

In this paper, a new method for temperature compensation of EMI signatures is presented. Data-driven dynamic models are first developed by fitting EMI signatures measured at various temperatures using the Vector Fitting algorithm. Once these models are developed, the dependence of the model parameters on temperature is established. A parametric data-driven model is then derived with temperature as a parameter, which allows EMI signatures to be calculated at any desired temperature. The capabilities of this new temperature compensation method are demonstrated on aluminum samples, where EMI signatures are measured at various temperatures. The developed method is found to be capable of temperature compensation of EMI signatures over a broad frequency range.
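The parametric-model idea can be sketched as follows, assuming (as an illustration, not the paper's implementation) that each parameter produced by Vector Fitting is fit with a low-order polynomial in temperature; the temperatures and parameter values below are made up:

```python
import numpy as np

# Illustrative sketch: after Vector Fitting yields a pole/residue
# parameter value at each measured temperature, fit a low-order
# polynomial to that parameter so it can be evaluated at any
# intermediate temperature. All numbers are assumed, not measured.

temps = np.array([20.0, 30.0, 40.0, 50.0])          # deg C, assumed
pole_real = np.array([-1.00, -1.05, -1.11, -1.18])  # assumed parameter values

coeffs = np.polyfit(temps, pole_real, deg=2)        # parameter vs. temperature

def parameter_at(temperature):
    """Evaluate the fitted parameter at an arbitrary temperature."""
    return np.polyval(coeffs, temperature)
```

Repeating this fit for every model parameter yields a model with temperature as an explicit input, so a compensated EMI signature can be synthesized at the temperature of any new measurement.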

     
  2. Piezoelectric impedance sensing is promising for highly accurate damage identification because of its high-frequency active interrogative nature and simplicity in data acquisition. To fully unleash this potential, effective inverse analysis is needed in order to pinpoint the damage location and identify the severity. The inverse analysis, however, may be underdetermined, since a very large number of unknowns (i.e., locations and severity levels) must be solved in a finite element model while only limited measurements are available in actual practice. To uncover the true damage scenario, an inverse analysis strategy built upon multi-objective optimization, which aims at matching multiple sets of measurements with model predictions in the damage parametric space, can be formulated to identify a small set of solutions. This solution set then allows the incorporation of empirical knowledge to facilitate final decision-making. The main disadvantage of the conventional inverse analysis strategy is that it overlooks uncertainties that exist in both baseline structural modeling and actual measurements. To address this, in this research we formulate a probabilistic multi-objective optimization-based inverse analysis framework, which is fundamentally built upon the differential evolution Markov chain Monte Carlo (DEMC) technique. The new approach can yield the Pareto optimal set (solutions) and the respective Pareto front, which are represented in a probabilistic sense to account for uncertainties. Comprehensive case studies with experimental investigations are conducted to demonstrate the effectiveness of this new approach.
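A minimal sketch of the differential-evolution proposal step that underlies DEMC-style samplers (the scale factor, noise level, and function name are assumptions for illustration, not the paper's code):

```python
import numpy as np

# Illustrative sketch of a differential-evolution MCMC proposal: each
# chain proposes a move along the difference of two other randomly
# chosen chains, plus a small random perturbation. Settings are assumed.

def de_proposal(chains, i, gamma=0.7, noise=1e-6, rng=None):
    """Propose a new state for chain i from the population of chains."""
    rng = rng or np.random.default_rng(0)
    n = len(chains)
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    jump = gamma * (chains[r1] - chains[r2])
    return chains[i] + jump + rng.normal(0.0, noise, size=chains[i].shape)
```

The proposal is then accepted or rejected with a Metropolis criterion; running many chains in parallel is what lets the sampler trace out a Pareto set probabilistically.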
  3. Deep learning for computer vision depends on lossy image compression: it reduces the storage required for training and test data and lowers transfer costs in deployment. Mainstream datasets and imaging pipelines all rely on standard JPEG compression. In JPEG, the degree of quantization of frequency coefficients controls the lossiness: an 8x8 quantization table (Q-table) decides both the quality of the encoded image and the compression ratio. While a long history of work has sought better Q-tables, existing work either seeks to minimize image distortion or to optimize for models of the human visual system. This work asks whether JPEG Q-tables exist that are “better” for specific vision networks and can offer better quality–size trade-offs than ones designed for human perception or minimal distortion. We reconstruct an ImageNet test set with higher resolution to explore the effect of JPEG compression under novel Q-tables. We attempt several approaches to tune a Q-table for a vision task. We find that a simple sorted random sampling method can exceed the performance of the standard JPEG Q-table. We also use hyper-parameter tuning techniques including bounded random search, Bayesian optimization, and composite heuristic optimization methods. The new Q-tables we obtained can improve the compression rate by 10% to 200% when the accuracy is fixed, or improve accuracy up to 2% at the same compression rate. 
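The sorted random sampling idea might be sketched as follows; the value range and the assumption that entries are laid out in zigzag scan order (so low frequencies receive small, high-quality quantizers) are ours, not the paper's specification:

```python
import random

# Hedged sketch of sorted random sampling for a JPEG quantization
# table: draw 64 random quantizer values and sort them ascending, so
# that when read in zigzag order the low-frequency coefficients are
# quantized most gently. Range and ordering are assumptions.

def sorted_random_qtable(lo=1, hi=255, seed=0):
    """Return 64 sorted random quantizer values (assumed zigzag order)."""
    return sorted(random.Random(seed).randint(lo, hi) for _ in range(64))
```

Sampling many such tables and keeping the ones with the best accuracy-vs-size trade-off on the target network is one plausible reading of the search described above.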
  4. Abstract

    Linear regression is a fundamental modeling tool in statistics and related fields. In this paper, we study an important variant of linear regression in which the predictor-response pairs are partially mismatched. We use an optimization formulation to simultaneously learn the underlying regression coefficients and the permutation corresponding to the mismatches. The combinatorial structure of the problem leads to computational challenges. We propose and study a simple greedy local search algorithm for this optimization problem that enjoys strong theoretical guarantees and appealing computational performance. We prove that under a suitable scaling of the number of mismatched pairs compared to the number of samples and features, and certain assumptions on problem data, our local search algorithm converges to a nearly optimal solution at a linear rate. In particular, in the noiseless case, our algorithm converges to the globally optimal solution at a linear convergence rate. Based on this result, we prove an upper bound on the estimation error of the parameters. We also propose an approximate local search step that allows us to scale our approach to much larger instances. We conduct numerical experiments to gather further insights into our theoretical results, and show promising performance gains compared to existing approaches.
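One simple way to picture search over both coefficients and permutation is the alternating scheme below: fix the permutation and solve least squares, then fix the coefficients and re-match responses to predictions by sorted order. This is a simplified stand-in for intuition, not the paper's analyzed algorithm:

```python
import numpy as np

# Illustrative alternating scheme (an assumption, simpler than the
# paper's method): alternate between (1) least squares with the current
# permutation fixed and (2) re-pairing responses with predictions so
# that their sorted orders match, which is optimal for 1-D matching.

def alternating_fit(X, y, iters=10):
    perm = np.arange(len(y))
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X, y[perm], rcond=None)
        pred = X @ beta
        # perm[i] = index of the y value whose rank equals pred[i]'s rank
        perm = np.argsort(y)[np.argsort(np.argsort(pred))]
    return beta, perm
```

On noiseless data with a modest number of mismatches, this style of alternation typically snaps to the correct pairing in a few iterations, which gives some intuition for the linear convergence result stated above.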

     