
Title: Extraction of Wearout Model Parameters Using On-Line Test of an SRAM
To accurately determine the reliability of SRAMs, we propose a method to estimate the wearout parameters of FEOL TDDB using on-line data collected during operation. Errors in estimating the lifetime model parameters are determined as a function of time, based on the failure sample size available up to that point. Systematic errors arising from uncertainty in the estimated temperature and supply voltage during operation, as well as from uncertainty in process parameters and use conditions, are also computed. (An illustrative sketch of the sampling-error behavior follows the record details below.)
Authors:
Award ID(s):
1700914
Publication Date:
NSF-PAR ID:
10205517
Journal Name:
Microelectronics Reliability
Volume:
114
Page Range or eLocation-ID:
p. 113756
ISSN:
1872-941X
Sponsoring Org:
National Science Foundation
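
The record above gives no equations, so the following Python sketch only illustrates the sampling-error part of the abstract: synthetic Weibull failure times stand in for on-line TDDB failure data, and the spread of the maximum-likelihood shape estimate is tracked as the observed failure count grows. The Weibull parameters, sample sizes, and use of scipy's weibull_min are illustrative assumptions, not the paper's model or data.

```python
# Hedged sketch (not the paper's method): how the spread of Weibull lifetime-parameter
# estimates shrinks as more on-line failures accumulate. All values are illustrative.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
true_shape, true_scale = 1.5, 10.0   # assumed Weibull shape (beta) and scale (eta), arbitrary time units

for n_failures in (10, 50, 200, 1000):
    shape_estimates = []
    for _ in range(100):                          # repeated synthetic "experiments"
        t = weibull_min.rvs(true_shape, scale=true_scale,
                            size=n_failures, random_state=rng)
        shape_hat, _, _ = weibull_min.fit(t, floc=0)   # MLE with location fixed at 0
        shape_estimates.append(shape_hat)
    print(f"n={n_failures:5d}  mean(beta_hat)={np.mean(shape_estimates):.3f}"
          f"  std(beta_hat)={np.std(shape_estimates):.3f}")
```

The standard deviation of the estimate falls roughly as 1/sqrt(n); the systematic errors discussed in the abstract (temperature, supply voltage, process, and use-condition uncertainty) would add a bias term on top of this sampling error.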
More Like this
  1. The Lithium-ion battery (Li-ion) has become the dominant energy storage solution in many applications, such as hybrid electric and electric vehicles, due to its higher energy density and longer cycle life. For these applications, the battery should perform reliably and pose no safety threats. However, the performance of Li-ion batteries can be affected by abnormal thermal behaviors, defined as faults. It is essential to develop a reliable thermal management system to accurately predict and monitor the thermal behavior of a Li-ion battery. Using first-principle models of batteries, this work presents a stochastic fault detection and diagnosis (FDD) algorithm to identify two particular faults in Li-ion battery cells, using easily measured quantities such as temperatures. In addition, models used for FDD are typically derived from the underlying physical phenomena. To make a model tractable and useful, it is common to make simplifications during its development, which may consequently introduce a mismatch between models and battery cells. Further, FDD algorithms can be affected by uncertainty, which may originate from either intrinsic time-varying phenomena or model calibration with noisy data. A two-step FDD algorithm is developed in this work to correct a model of Li-ion battery cells and to identify faulty operations in a normal operating condition. An iterative optimization problem is proposed to correct the model by incorporating the errors between the measured quantities and model predictions, which is followed by an optimization-based FDD to provide a probabilistic description of the occurrence of possible faults, while taking the uncertainty into account. The two-step stochastic FDD algorithm is shown to be efficient in terms of the fault detection rate for both individual and simultaneous faults in Li-ion batteries, as compared to Monte Carlo (MC) simulations. (A hedged toy sketch of the residual-testing idea appears after this list.)
  2. Today’s systems rely on sending all the data to the cloud and then using complex algorithms, such as Deep Neural Networks, which require billions of parameters and many hours to train a model. In contrast, the human brain can do much of this learning effortlessly. Hyperdimensional (HD) Computing aims to mimic the behavior of the human brain by utilizing high-dimensional representations. This leads to various desirable properties that other Machine Learning (ML) algorithms lack, such as robustness to noise in the system and simple, highly parallel operations. In this paper, we propose HyDREA, a HyperDimensional Computing system that is Robust, Efficient, and Accurate. We propose a Processing-in-Memory (PIM) architecture that works in a federated learning environment with challenging communication scenarios that cause errors in the transmitted data. HyDREA adaptively changes the bitwidth of the model based on the signal-to-noise ratio (SNR) of the incoming sample to maintain the accuracy of the HD model while achieving significant speedup and energy efficiency. Our PIM architecture is able to achieve a 28× speedup and 255× better energy efficiency compared to the baseline PIM architecture for Classification, and achieves a 32× speedup and 289× higher energy efficiency than the baseline architecture for Clustering. HyDREA is able to achieve this by relaxing hardware parameters to gain energy efficiency and speedup while introducing computational errors. We show experimentally that HD Computing is able to handle the errors without a significant drop in accuracy due to its unique robustness property. For wireless noise, we found that HyDREA is 48× more robust to noise than other comparable ML algorithms. Our results indicate that our proposed system loses less than 1% Classification accuracy, even in scenarios with an SNR of 6.64. We additionally test the robustness of using HD Computing for Clustering applications and found that our proposed system also loses less than 1% in the mutual information score, even in scenarios with an SNR under 7 dB, which is 57× more robust to noise than K-means. (A hedged HD-encoding sketch appears after this list.)
  3. Concurrent programs are notoriously hard to write correctly, as scheduling nondeterminism introduces subtle errors that are both hard to detect and hard to reproduce. The most common concurrency errors are (data) races, which occur when memory-conflicting actions are executed concurrently. Consequently, considerable effort has been made towards developing efficient techniques for race detection. The most common approach is dynamic race prediction: given an observed, race-free trace σ of a concurrent program, the task is to decide whether the events of σ can be correctly reordered to a trace σ* that witnesses a race hidden in σ. In this work we introduce the notion of sync(hronization)-preserving races. A sync-preserving race occurs in σ when there is a witness σ* in which synchronization operations (e.g., acquisition and release of locks) appear in the same order as in σ. This is a broad definition that strictly subsumes the famous notion of happens-before races. Our main results are as follows. First, we develop a sound and complete algorithm for predicting sync-preserving races. For moderate values of parameters like the number of threads, the algorithm runs in Õ(N) time and space, where N is the length of the trace σ. Second, we show that the problem has an Ω(N / log² N) space lower bound, and thus our algorithm is essentially time- and space-optimal. Third, we show that predicting races with even just a single reversal of two sync operations is NP-complete and even W[1]-hard when parameterized by the number of threads. Thus, sync-preservation characterizes exactly the tractability boundary of race prediction, and our algorithm is nearly optimal for the tractable side. Our experiments show that our algorithm is fast in practice, while sync-preservation characterizes races often missed by state-of-the-art methods. (A hedged sketch of the happens-before baseline appears after this list.)
  4. Motion planning for high degree-of-freedom (DOF) robots is challenging, especially when acting in complex environments under sensing uncertainty. While there is significant work on how to plan under state uncertainty for low-DOF robots, existing methods cannot be easily translated to the high-DOF case, due to the complex geometry of the robot’s body and its environment. In this paper, we present a method that enhances optimization-based motion planners to produce robust trajectories for high-DOF robots in the presence of convex obstacles. Our approach introduces robustness into planners that are based on sequential convex programming: we reformulate each convex subproblem as a robust optimization problem that “protects” the solution against deviations due to sensing uncertainty. The parameters of the robust problem are estimated by sampling from the distribution of noisy obstacles and performing a first-order approximation of the signed distance function. The original merit function is updated to account for the new costs of the robust formulation at every step. The effectiveness of our approach is demonstrated in two simulated experiments that involve a full-body square robot moving in randomly generated scenes and a 7-DOF Fetch robot performing tabletop operations. The results show nearly zero probability of collision for a reasonable range of the noise parameters for Gaussian and uniform uncertainty. (A hedged sketch of the sampled signed-distance margin appears after this list.)
  5. Simultaneous real-time monitoring of measurement and parameter gross errors poses a great challenge to distribution system state estimation due to usually low measurement redundancy. This paper presents a gross error analysis framework, employing μPMUs to decouple the error analysis of measurements and parameters. When a recent measurement scan from SCADA RTUs and smart meters is available, gross error analysis of measurements is performed as a post-processing step of the non-linear DSSE (NLSE). In between scans of SCADA and AMI measurements, a linear state estimator (LSE) using μPMU measurements and linearized SCADA and AMI measurements is used to detect parameter data changes caused by the operation of Volt/Var controls. For every execution of the LSE, the variance of the unsynchronized measurements is updated according to the uncertainty introduced by load dynamics, which are modeled as an Ornstein–Uhlenbeck random process. The update of the variance of unsynchronized measurements can avoid the wrong detection of errors and can model the trustworthiness of outdated or obsolete data. When new SCADA and AMI measurements arrive, the LSE provides added redundancy to the NLSE through synthetic measurements. The presented framework was tested on a 13-bus test system. Test results highlight that the LSE and NLSE processes successfully work together to analyze bad data for both measurements and parameters. (A hedged sketch of the variance-update idea appears after this list.)
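
The sketches below are illustrative only; none reproduces the methods of the papers summarized above. For the battery FDD abstract (item 1), this minimal Python sketch shows the residual-testing idea behind model-based fault detection: a simple lumped thermal model (an assumption, not the paper's first-principles battery model) predicts cell temperature, and a 3-sigma test on the measurement residual flags an injected cooling fault. All parameter values are made up for illustration.

```python
# Hedged sketch of residual-based fault detection on a lumped cell-thermal model.
# The model, parameter values, and injected fault are illustrative assumptions.
import numpy as np

dt, steps = 1.0, 600                   # 1 s steps, 10 minutes of operation
R_nom, C_th, T_amb = 2.0, 50.0, 25.0   # assumed thermal resistance [K/W], capacitance [J/K], ambient [degC]
P_heat, noise_std = 3.0, 0.05          # assumed heat generation [W] and sensor noise [degC]

def simulate(fault_at=None, R_fault=None):
    """Lumped thermal model dT/dt = (P - (T - T_amb)/R) / C, with an optional fault in R."""
    T, trace = T_amb, []
    for k in range(steps):
        R = R_fault if (fault_at is not None and k >= fault_at) else R_nom
        T += dt * (P_heat - (T - T_amb) / R) / C_th
        trace.append(T)
    return np.array(trace)

rng = np.random.default_rng(1)
measured = simulate(fault_at=300, R_fault=3.0) + rng.normal(0, noise_std, steps)
predicted = simulate()                             # (already corrected) nominal model
residual = measured - predicted
alarm = np.abs(residual) > 3 * noise_std           # simple 3-sigma residual test
print("alarms before fault:", int(alarm[:300].sum()),
      " alarms after fault:", int(alarm[300:].sum()))
```

The paper's two-step algorithm additionally corrects the model on-line and treats detection probabilistically under uncertainty; this sketch shows only the simplest residual check.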
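
For the HyDREA abstract (item 2), the following sketch illustrates plain hyperdimensional encoding and nearest-class classification with bipolar hypervectors, and why it tolerates component flips. The dimensionality, random data, and 10% noise level are assumptions; the sketch does not model the PIM architecture or the adaptive bitwidth mechanism.

```python
# Minimal sketch of hyperdimensional (HD) encoding and classification with bipolar
# hypervectors, illustrating robustness to noise; all sizes and levels are assumed.
import numpy as np

D, n_features, n_levels, n_classes = 10_000, 32, 16, 4
rng = np.random.default_rng(0)

# Random bipolar base hypervectors: one per feature position and one per quantized level.
pos_hv = rng.choice([-1, 1], size=(n_features, D))
lvl_hv = rng.choice([-1, 1], size=(n_levels, D))

def encode(x):
    """Bind each feature's level hypervector with its position hypervector, bundle, take the sign."""
    levels = np.clip((x * n_levels).astype(int), 0, n_levels - 1)
    return np.sign((pos_hv * lvl_hv[levels]).sum(axis=0))

# "Train": bundle the encodings of each class's samples into a class hypervector.
X = rng.random((n_classes * 50, n_features))
y = np.repeat(np.arange(n_classes), 50)
encoded = np.array([encode(x) for x in X])
class_hv = np.array([encoded[y == c].sum(axis=0) for c in range(n_classes)])

# Classify a query whose hypervector has ~10% of its components flipped (simulated noise).
query = encode(X[0]) * rng.choice([1, -1], size=D, p=[0.9, 0.1])
sims = class_hv @ query / (np.linalg.norm(class_hv, axis=1) * np.linalg.norm(query))
print("predicted class:", int(np.argmax(sims)), " true class:", int(y[0]))
```

Because information is spread across thousands of near-orthogonal dimensions, flipping a modest fraction of components barely moves the cosine similarity, which is the robustness property the abstract relies on.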
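
For the race-prediction abstract (item 3), the source does not include the sync-preserving algorithm itself, so the sketch below shows only the simpler happens-before baseline it mentions, using per-thread vector clocks over an assumed trace format. It is an illustration, not the paper's Õ(N) sync-preserving detector.

```python
# Hedged sketch of happens-before (HB) race detection with vector clocks.
# Two conflicting accesses to the same variable race if neither HB-precedes the other.
from collections import defaultdict

def hb_races(trace, n_threads):
    """trace: list of (thread, op, target); op is 'rd'/'wr' on a variable or 'acq'/'rel' on a lock."""
    clock = [[0] * n_threads for _ in range(n_threads)]   # one vector clock per thread
    lock_clock = defaultdict(lambda: [0] * n_threads)     # clock captured at each lock's last release
    accesses = defaultdict(list)                          # variable -> [(vector clock, thread, op)]
    races = []
    for t, op, x in trace:
        clock[t][t] += 1
        if op == 'acq':
            clock[t] = [max(a, b) for a, b in zip(clock[t], lock_clock[x])]
        elif op == 'rel':
            lock_clock[x] = list(clock[t])
        else:                                             # 'rd' or 'wr'
            vc = list(clock[t])
            for prev_vc, pt, prev_op in accesses[x]:
                conflicting = 'wr' in (op, prev_op)
                ordered = prev_vc[pt] <= vc[pt]           # earlier access happens-before this one?
                if conflicting and not ordered:
                    races.append((x, pt, t))
            accesses[x].append((vc, t, op))               # keep all accesses (fine for a sketch)
    return races

# Example: accesses to x are ordered through lock m, but the writes to y race.
trace = [(0, 'acq', 'm'), (0, 'wr', 'x'), (0, 'rel', 'm'),
         (1, 'acq', 'm'), (1, 'wr', 'x'), (1, 'rel', 'm'),
         (1, 'wr', 'y'),  (0, 'wr', 'y')]
print(hb_races(trace, n_threads=2))                       # -> [('y', 1, 0)]
```

Sync-preserving prediction subsumes this check: it also considers reorderings of the observed trace in which the lock operations keep their order, which lets it report races this baseline misses.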
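
For the robust motion-planning abstract (item 4), this sketch shows the sampling idea for a single circular obstacle: sample noisy obstacle positions, linearize the signed distance with respect to the obstacle center, and take a conservative percentile as the robust clearance. The obstacle shape, Gaussian noise covariance, and 5th-percentile choice are assumptions, not the paper's formulation, which operates inside sequential convex programming with full robot geometry.

```python
# Hedged sketch: robust clearance margin for one waypoint and one circular obstacle
# under sensing noise, via sampling and a first-order signed-distance model.
import numpy as np

rng = np.random.default_rng(0)
robot_pt = np.array([0.0, 0.0])                         # one waypoint of the trajectory
obs_center, obs_radius = np.array([1.5, 0.5]), 0.4
noise_cov = 0.05 * np.eye(2)                            # assumed sensing covariance of the obstacle

def signed_distance(p, c, r):
    return np.linalg.norm(p - c) - r                    # > 0 outside, < 0 inside

# Sample noisy obstacle centers and evaluate a first-order model of the signed distance:
# sd(c + delta) ≈ sd(c) + grad_c sd · delta, with grad_c sd = -(p - c) / ||p - c||.
samples = rng.multivariate_normal(obs_center, noise_cov, size=500)
grad_c = -(robot_pt - obs_center) / np.linalg.norm(robot_pt - obs_center)
sd_nom = signed_distance(robot_pt, obs_center, obs_radius)
sd_linear = sd_nom + (samples - obs_center) @ grad_c

# Robust margin: require clearance against, e.g., the 5th percentile of the sampled
# signed distances instead of the nominal value.
print("nominal SD:", round(sd_nom, 3),
      " robust (5th percentile) SD:", round(float(np.percentile(sd_linear, 5)), 3))
```

In a planner, the robust (percentile or worst-case) distance would replace the nominal signed distance in the collision constraint of each convex subproblem, which is the "protection" the abstract describes.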
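
For the DSSE abstract (item 5), this sketch shows one way the variance of an aging, unsynchronized measurement could be inflated under an Ornstein–Uhlenbeck load-deviation model, so its weighted-least-squares weight decays between scans. The OU parameters and base noise variance are assumptions; the paper's exact update rule may differ.

```python
# Hedged sketch of variance inflation for stale SCADA/AMI measurements under an
# Ornstein-Uhlenbeck (OU) load-deviation model. Parameter values are assumptions.
import numpy as np

theta, sigma = 1 / 300.0, 0.02    # assumed OU mean-reversion rate [1/s] and volatility
meas_var0 = 0.01 ** 2             # assumed measurement-noise variance at scan time

def inflated_variance(age_s):
    """Base variance plus the OU deviation variance accumulated 'age_s' seconds after the scan."""
    ou_var = (sigma ** 2) / (2 * theta) * (1 - np.exp(-2 * theta * age_s))
    return meas_var0 + ou_var

for age in (0, 60, 300, 900):
    var = inflated_variance(age)
    print(f"age={age:4d} s  variance={var:.5f}  WLS weight={1/var:.1f}")
```

As the age grows, the variance saturates at meas_var0 + sigma^2/(2*theta), so obsolete data keeps a small but nonzero weight rather than triggering spurious bad-data detections, which matches the intent described in the abstract.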