

Title: Extraction of Wearout Model Parameters Using On-Line Test of an SRAM
To accurately determine the reliability of SRAMs, we propose a method to estimate the wearout parameters of FEOL TDDB using on-line data collected during operation. Errors in estimating the lifetime model parameters are determined as a function of time, based on the available failure sample size. Systematic errors due to uncertainty in the estimation of temperature and supply voltage during operation, as well as uncertainty in process parameters and use conditions, are also computed.
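As a rough illustration of how the statistical error of a lifetime model shrinks as failures accumulate, the sketch below fits a Weibull distribution to simulated failure times at several sample sizes. This is not the estimator used in the paper; the shape and scale values, sample sizes, and generic SciPy fit are illustrative assumptions.

```python
# A minimal sketch (not the paper's method): maximum-likelihood fitting of a Weibull
# TDDB lifetime model to simulated failure times, showing how the scatter of the
# estimated shape parameter narrows as the available failure sample size grows.
# The true shape/scale values and sample sizes below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta_true, eta_true = 1.8, 1.0e4         # assumed Weibull shape and scale (hours)

def fit_weibull(times):
    # floc=0 pins the location parameter so only shape and scale are estimated
    shape, _, scale = stats.weibull_min.fit(times, floc=0)
    return shape, scale

for n in (10, 50, 200, 1000):            # failure counts accumulated over operating time
    betas = [fit_weibull(stats.weibull_min.rvs(beta_true, scale=eta_true,
                                               size=n, random_state=rng))[0]
             for _ in range(100)]         # repeat to estimate the estimator's spread
    print(f"n={n:5d}  mean shape = {np.mean(betas):.2f}  std = {np.std(betas):.3f}")
```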
Award ID(s):
1700914
NSF-PAR ID:
10205517
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
Microelectronics Reliability
Volume:
114
ISSN:
1872-941X
Page Range / eLocation ID:
p. 113756
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The lithium-ion (Li-ion) battery has become the dominant energy storage solution in many applications, such as hybrid electric and electric vehicles, due to its higher energy density and longer cycle life. For these applications, the battery should perform reliably and pose no safety threats. However, the performance of Li-ion batteries can be affected by abnormal thermal behaviors, defined as faults. It is essential to develop a reliable thermal management system that can accurately predict and monitor the thermal behavior of a Li-ion battery. Using first-principles models of batteries, this work presents a stochastic fault detection and diagnosis (FDD) algorithm to identify two particular faults in Li-ion battery cells, using easily measured quantities such as temperatures. Models used for FDD are typically derived from the underlying physical phenomena, and to make a model tractable and useful, it is common to make simplifications during its development, which may introduce a mismatch between the model and the battery cells. Further, FDD algorithms can be affected by uncertainty, which may originate from either intrinsic time-varying phenomena or model calibration with noisy data. A two-step FDD algorithm is therefore developed in this work to correct a model of Li-ion battery cells and to identify faulty operation under normal operating conditions. An iterative optimization problem is proposed to correct the model by incorporating the errors between the measured quantities and the model predictions; this is followed by an optimization-based FDD that provides a probabilistic description of the occurrence of possible faults while taking the uncertainty into account. The two-step stochastic FDD algorithm is shown to be efficient in terms of the fault detection rate for both individual and simultaneous faults in Li-ion batteries, as compared to Monte Carlo (MC) simulations.
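As a loose illustration of residual-based thermal fault detection (a deliberate simplification, not the paper's two-step stochastic algorithm), the sketch below compares a lumped thermal model against a simulated cell whose heat generation increases mid-run and flags residuals beyond a conservative noise threshold. All constants are illustrative assumptions.

```python
# Simplified residual-based fault detection on cell surface temperature (illustrative
# only; not the two-step stochastic FDD algorithm of the paper). A lumped thermal model
# predicts temperature, and residuals beyond a conservative noise threshold are flagged.
# Heat capacity, thermal resistance, heat generation, and noise level are assumptions.
import numpy as np

dt, steps = 1.0, 600                     # 1 s steps, 10 minutes
C_th, R_th, T_amb = 60.0, 2.0, 25.0      # assumed heat capacity (J/K), resistance (K/W), ambient (C)
Q_nom = 1.5                              # assumed nominal heat generation (W)
sigma = 0.05                             # assumed temperature sensor noise std (C)

rng = np.random.default_rng(1)
T_model = T_cell = T_amb
flagged = []
for k in range(steps):
    # lumped model: C dT/dt = Q - (T - T_amb) / R
    T_model += dt / C_th * (Q_nom - (T_model - T_amb) / R_th)
    # simulated cell: an internal fault adds extra heat after t = 300 s
    Q_true = Q_nom + (1.0 if k > 300 else 0.0)
    T_cell += dt / C_th * (Q_true - (T_cell - T_amb) / R_th)
    residual = (T_cell + rng.normal(0.0, sigma)) - T_model
    if abs(residual) > 5.0 * sigma:      # conservative threshold to limit false alarms
        flagged.append(k * dt)

print("first fault flag at t =", flagged[0] if flagged else None, "s")
```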
  2. Informed decision-making for sustainable manufacturing requires accurate manufacturing process environmental impact models with uncertainty quantification (UQ). For emerging manufacturing technologies, there is often insufficient process data available to derive accurate data-driven models. This paper explores an alternative mechanistic modeling approach using easy-to-access data from a given machine to perform Bayesian inference and reduce the uncertainty of model parameters. First, we derive mechanistic models of the cumulative energy demand (CED) for making aluminum (AlSi10) and nylon (PA12) parts using laser powder bed fusion (L-PBF). Initial parametric uncertainty is assigned to the model inputs, informed by literature reviews and interviews with industry experts. Second, we identify the most critical sources of uncertainty using variance-based global sensitivity analyses, thereby reducing the dimension of the problem. For metal and polymer L-PBF, critical uncertainty is related to the adiabatic efficiency of the process (a measure of the efficiency with which the laser energy is used to fuse the powder) and the recoating time per layer between laser scans. Data pertinent to both of these parameters include the part geometry (height and volume) and total build time. Between three and eight data points on part geometry and build time were collected on two different L-PBF machines, and Bayesian inference was performed to reduce the uncertainty of the adiabatic efficiency and recoating time per layer on each machine. This approach was validated by subsequently taking direct parameter measurements on these machines during operation. The delivered electricity uncertainty is reduced by 40–70% after performing inference, highlighting the potential to construct accurate energy and environmental impact models of manufacturing processes using small, easy-to-access datasets without interfering with the operations of the manufacturing facility.
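A toy illustration of the inference step is sketched below (not the paper's models or data): a simple L-PBF build-time model with two uncertain parameters, the adiabatic efficiency and the recoating time per layer, updated on a grid from a few hypothetical (geometry, build time) observations.

```python
# A minimal grid-based Bayesian inference sketch (not the paper's model or data): infer
# an L-PBF machine's adiabatic efficiency and recoating time per layer from a few
# hypothetical (part volume, part height, total build time) observations. The build-time
# model, priors, noise level, and all numerical values are illustrative assumptions.
import numpy as np

P_laser = 300.0                          # assumed laser power (W)
e_melt = 3.0e9                           # assumed energy to fuse the powder (J/m^3)
layer_h = 50e-6                          # assumed layer height (m)

def build_time(volume, height, eta, t_recoat):
    """Toy model: laser fusing time plus recoating time summed over all layers."""
    return volume * e_melt / (eta * P_laser) + (height / layer_h) * t_recoat

# hypothetical observations: (volume m^3, height m, measured build time s)
data = [(5e-5, 0.05, 13.4e3), (2e-5, 0.08, 19.5e3), (8e-5, 0.04, 11.8e3)]
sigma_t = 300.0                          # assumed build-time measurement noise (s)

# uniform priors evaluated on a grid
etas = np.linspace(0.2, 0.9, 200)
recoats = np.linspace(5.0, 20.0, 200)
E, R = np.meshgrid(etas, recoats, indexing="ij")

log_post = np.zeros_like(E)
for v, h, t_obs in data:
    log_post += -0.5 * ((t_obs - build_time(v, h, E, R)) / sigma_t) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean adiabatic efficiency:", float((post * E).sum()))
print("posterior mean recoat time per layer:", float((post * R).sum()), "s")
```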
  3. Abstract

    A novel modeling framework that simultaneously improves accuracy, predictability, and computational efficiency is presented. It embraces the benefits of three modeling techniques integrated for the first time: surrogate modeling, parameter inference, and data assimilation. The use of polynomial chaos expansion (PCE) surrogates significantly decreases computational time. Parameter inference allows for faster model convergence, reduced uncertainty, and superior accuracy of simulated results. Ensemble Kalman filters assimilate errors that occur during forecasting. To examine the applicability and effectiveness of the integrated framework, we developed 18 approaches according to how surrogate models are constructed, what type of parameter distributions are used as model inputs, and whether model parameters are updated during the data assimilation procedure. We conclude that (1) PCE must be built over various forcing and flow conditions, and in contrast to previous studies, it does not need to be rebuilt at each time step; (2) model parameter specification that relies on constrained, posterior information of parameters (the so-called Selected specification) can significantly improve forecasting performance and reduce uncertainty bounds compared to the Random specification using prior information of parameters; and (3) no substantial differences in results exist between single and dual ensemble Kalman filters, but the latter better simulates flood peaks. The use of PCE effectively compensates for the computational load added by the parameter inference and data assimilation (up to ~80 times faster). Therefore, the presented approach contributes to a shift in the modeling paradigm, arguing that complex, high-fidelity hydrologic and hydraulic models should be increasingly adopted for real-time and ensemble flood forecasting.
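As a narrow illustration of only the data-assimilation component (the PCE surrogate and parameter inference are omitted), a single perturbed-observation ensemble Kalman filter analysis step might look like the sketch below; the state layout, observation operator, and noise levels are illustrative assumptions.

```python
# A bare-bones sketch of one perturbed-observation ensemble Kalman filter (EnKF)
# analysis step, covering only the assimilation component described above (the PCE
# surrogate and parameter inference are omitted). The state layout, observation
# operator, and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_state = 50, 3

# forecast ensemble, e.g. discharge at three river sections, shape (n_state, n_ens)
X_f = rng.normal(loc=[100.0, 80.0, 60.0], scale=10.0, size=(n_ens, n_state)).T
H = np.array([[1.0, 0.0, 0.0]])          # observe only the first state variable
obs, r = 112.0, 4.0                      # observed value and its error std

A = X_f - X_f.mean(axis=1, keepdims=True)        # ensemble anomalies
HA = H @ A                                       # observed anomalies, shape (1, n_ens)
P_Ht = A @ HA.T / (n_ens - 1)                    # cross-covariance P H^T, shape (n_state, 1)
S = HA @ HA.T / (n_ens - 1) + r**2               # innovation covariance, shape (1, 1)
K = P_Ht / S                                     # Kalman gain for a scalar observation

obs_pert = obs + rng.normal(0.0, r, size=n_ens)  # perturbed observations, one per member
innov = obs_pert - (H @ X_f).ravel()             # innovations, shape (n_ens,)
X_a = X_f + K @ innov[None, :]                   # analysis ensemble

print("forecast mean:", X_f.mean(axis=1).round(2))
print("analysis mean:", X_a.mean(axis=1).round(2))
```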

     
  4. Abstract

    Cost overruns averaging 45%–50% can occur during the acquisition process of large-scale complex space programs. The factors that drive these overruns are frequently misunderstood and not identified correctly. This paper investigates the impact of system parameters on the overall cost of a geosynchronous communication satellite program using model-based global sensitivity analysis. A simulation model with the acquisition data was used to identify the key parameters within the system model that interact with the cost of the program. A system simulation model containing a physics-based satellite model and a parametric cost model is utilized to conduct variance-based sensitivity analysis. Data from selected acquisition reports are used to validate the system simulation model. Sobol' analysis is performed on the parameters associated with requirements of the satellite system, operations, and support to maintain the system, including the launch system and ground equipment. The results show that parameters related to the system-based requirements significantly impact the program cost. These critical parameters lay the foundation for quantifying the impact of system parameters and their uncertainty on system cost using a simulation-based model, which will aid in reducing cost overruns during the design and development of future large-scale complex engineered systems.
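The sketch below is a generic, numpy-only illustration of variance-based (Sobol') sensitivity analysis applied to a made-up satellite-program cost function; it is not the paper's cost model, and the parameter names, ranges, and coefficients are purely illustrative assumptions.

```python
# Compact numpy-only Sobol' sensitivity sketch on a hypothetical satellite-program cost
# model (Saltelli-style Monte Carlo estimators for first-order and total indices).
# The cost function, parameters, and ranges are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
names = ["payload_power_kW", "design_life_yr", "ops_staff", "launch_mass_t"]
lo = np.array([5.0, 10.0, 20.0, 3.0])
hi = np.array([20.0, 18.0, 80.0, 7.0])

def cost_model(x):
    """Toy program cost ($M): nonlinear in power and mass, interacting life/staff term."""
    power, life, staff, mass = x.T
    return 40.0 * power**1.3 + 1.5 * life * staff + 60.0 * mass**2

n = 20_000
A = lo + (hi - lo) * rng.random((n, 4))
B = lo + (hi - lo) * rng.random((n, 4))
fA, fB = cost_model(A), cost_model(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # A with column i taken from B
    fABi = cost_model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli estimator)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order index (Jansen estimator)
    print(f"{name:17s}  S1 = {S1:5.2f}  ST = {ST:5.2f}")
```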

     
  5. Abstract

    Measurement of time-resolved velocities with large accelerations is challenging because the optimal capture rate and pixel resolution change with velocity. It is known for velocity measurements that high temporal resolution and low pixel resolution increase the velocity uncertainty. This makes selecting acceptable camera settings unintuitive and can result in highly uncertain measurements. For experimental conditions with slow velocities (< 10 m/s) where high temporal resolution is required (because of rapid acceleration), minimizing experimental uncertainty demands exponentially increasing pixel resolution, which is often impossible to achieve experimentally. Desired measurements of early flame propagation span a wide range of velocities, which can exceed 10 m/s during ignition and drop to under 1 m/s depending on the pressure. This rapid velocity change usually occurs within a millisecond timeframe.

    Typical camera-based velocity measurements usually observe either fast- or slow-moving objects, reporting either an average velocity or a velocity at a single time. The goal of this work is to accurately measure such a rapidly changing experimental condition using camera-based measurement and to understand the effect various processing methods have on the result. A practical method is presented here to quantify the noise and observe any induced errors from improper processing, in which measurable physical analogs are used to represent future experimental conditions. These experimental analogs are in the form of rotating disks with known radial position and velocity profiles, which enable the assessment of experimental parameters and post-processing techniques. Parameters considered include pixel resolution, frame rate, and smoothing techniques such as moving average, Whittaker, and Savitzky-Golay filters.
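The sketch below gives a rough sense of this kind of analysis: a marker on a rotating disk is "imaged" at a fixed frame rate and pixel resolution, velocity is recovered by finite differences, and a Savitzky-Golay derivative filter is compared with the raw estimate. The disk speed, frame rate, and spatial calibration are illustrative assumptions, not the paper's experimental settings.

```python
# Illustrative analog of the described experiment: pixel-quantized positions of a
# marker on a rotating disk, velocity via finite differences, and a Savitzky-Golay
# derivative filter for comparison. All settings below are assumed values.
import numpy as np
from scipy.signal import savgol_filter

fps, px_per_mm = 10_000, 20.0            # assumed frame rate and spatial calibration
omega, radius = 200.0, 25.0              # assumed disk angular speed (rad/s), marker radius (mm)

t = np.arange(0, 0.02, 1.0 / fps)        # 20 ms of frames
x = radius * np.cos(omega * t)           # true marker x-position (mm)
x_px = np.round(x * px_per_mm) / px_per_mm        # pixel quantization of the measurement

v_true = -radius * omega * np.sin(omega * t)      # analytic velocity (mm/s)
v_raw = np.gradient(x_px, 1.0 / fps)              # raw finite-difference velocity
v_sg = savgol_filter(x_px, window_length=11, polyorder=2,
                     deriv=1, delta=1.0 / fps)    # smoothed derivative estimate

print("RMS error, raw    :", np.sqrt(np.mean((v_raw - v_true) ** 2)), "mm/s")
print("RMS error, Sav-Gol:", np.sqrt(np.mean((v_sg - v_true) ** 2)), "mm/s")
```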

     