Abstract A significant number of investigations have been performed to develop and optimize cold plates for direct-to-chip cooling of processor packages. Many investigations have reported computational simulations using commercially available computational fluid dynamics tools and compared them to experimental data. Generally, the simulations and experimental data are in qualitative agreement but often not in quantitative agreement. Frequently, the experimental characterizations have high experimental uncertainty. In this study, extensive experimental evaluations are used to demonstrate the errors in thermal measurements and the experimental artifacts during testing that lead to unacceptable inconsistency and uncertainty in the reported thermal resistance. By comparing experimental thermal data, such as the temperature at multiple positions on the processor lid, and using those data to extract a meaningful measure of thermal resistance, it is shown that the data uncertainty and inconsistency are primarily due to three factors: (1) inconsistency in the thermal boundary condition supplied by the thermal test vehicle (TTV) to the cold plate, (2) errors in the measurement and interpretation of the surface temperature of a solid surface, such as the heated lid surface, and (3) errors introduced by improper contact between the cold plate and the TTV. A standard thermal test vehicle (STTV) was engineered and used to provide reproducible thermal boundary conditions to the cold plate. An uncertainty analysis was performed to discriminate among the sources of inconsistency in the reporting of thermal resistance, including parameters such as the mechanical load distribution and the methods used to measure the cold plate base and TTV surface temperatures. A critical analysis of the classical thermal resistance definition was performed to emphasize its shortcomings for evaluating the performance of a cold plate. It is shown that a thermal resistance for cold plates based on heat exchanger theory better captures the physics of the heat transfer process when cold plates operate at high thermodynamic effectiveness.
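To make the distinction concrete, the sketch below contrasts the classical thermal resistance definition with a conductance derived from heat exchanger (effectiveness-NTU) theory for an isothermal lid. It is a minimal illustration under assumptions introduced here (the isothermal-wall model, the function name, and the numbers in the example), not the authors' analysis.

```python
import math

def coldplate_metrics(q_w, t_lid_c, t_in_c, mdot_kg_s, cp_j_kg_k=4180.0):
    """Contrast the classical cold-plate thermal resistance with an
    effectiveness-based conductance (isothermal-wall heat exchanger model).

    q_w        : heat load [W]
    t_lid_c    : lid (case) temperature [deg C]
    t_in_c     : coolant inlet temperature [deg C]
    mdot_kg_s  : coolant mass flow rate [kg/s]
    cp_j_kg_k  : coolant specific heat [J/(kg K)], water by default
    """
    # Classical definition: lid-to-inlet temperature rise per watt.
    r_classical = (t_lid_c - t_in_c) / q_w

    # Thermodynamic effectiveness: actual heat transfer over the maximum
    # possible if the coolant left at the lid temperature.
    c_min = mdot_kg_s * cp_j_kg_k
    eff = q_w / (c_min * (t_lid_c - t_in_c))

    # For an isothermal wall, eff = 1 - exp(-NTU) with NTU = UA / c_min,
    # so an overall conductance UA (and a resistance 1/UA that does not
    # lump in the coolant temperature rise) can be backed out.
    ntu = -math.log(1.0 - eff)
    ua = ntu * c_min
    return {"R_classical_K_per_W": r_classical,
            "effectiveness": eff,
            "R_UA_K_per_W": 1.0 / ua}

# Illustrative numbers only: 500 W, ~2 L/min of water, lid at 55 C, inlet at 25 C.
print(coldplate_metrics(q_w=500.0, t_lid_c=55.0, t_in_c=25.0, mdot_kg_s=0.033))
```

At high effectiveness the classical resistance is dominated by the coolant temperature rise, which is why the two metrics diverge; this is the shortcoming the abstract refers to.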
Extraction of Wearout Model Parameters Using On-Line Test of an SRAM
To accurately determine the reliability of SRAMs, we propose a method to estimate the wearout parameters of FEOL TDDB using on-line data collected during operation. Errors in estimating the lifetime model parameters are determined as a function of time, based on the failure sample size available at that time. Systematic errors due to uncertainty in the estimation of temperature and supply voltage during operation, as well as uncertainty in process parameters and use conditions, are also computed.
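The abstract does not state the lifetime distribution or estimator used, so the sketch below assumes a common choice for FEOL TDDB: a two-parameter Weibull time-to-failure model fit by maximum likelihood to on-line data consisting of observed failure times plus right-censored times for units still operating. Function names and example numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle(fail_times, censor_times):
    """Maximum-likelihood fit of a Weibull lifetime model (shape beta,
    scale eta) from on-line data: observed failure times plus right-censored
    times for units still alive at readout."""
    fail = np.asarray(fail_times, dtype=float)
    cens = np.asarray(censor_times, dtype=float)

    def neg_log_lik(params):
        log_beta, log_eta = params          # optimize in log space for positivity
        beta, eta = np.exp(log_beta), np.exp(log_eta)
        # Failed units contribute the density, surviving units the survivor function.
        ll_fail = np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(fail / eta)
                         - (fail / eta) ** beta)
        ll_cens = np.sum(-(cens / eta) ** beta)
        return -(ll_fail + ll_cens)

    res = minimize(neg_log_lik, x0=[0.0, np.log(np.median(fail))], method="Nelder-Mead")
    beta_hat, eta_hat = np.exp(res.x)
    return beta_hat, eta_hat

# Example: a few early failures plus many units still running at 1e4 hours.
beta, eta = weibull_mle(fail_times=[3200.0, 5400.0, 8100.0],
                        censor_times=[1e4] * 97)
print(f"beta = {beta:.2f}, eta = {eta:.3g} h")
```

As operating time accumulates and more failures are observed, the variance of such estimates shrinks; that time dependence of the estimation error is what the paper quantifies.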
- Award ID(s): 1700914
- PAR ID: 10205517
- Date Published:
- Journal Name: Microelectronics reliability
- Volume: 114
- ISSN: 1872-941X
- Page Range / eLocation ID: p. 113756
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Satellite precipitation products, as all quantitative estimates, come with some inherent degree of uncertainty. To associate a quantitative value of the uncertainty with each individual estimate, error modeling is necessary. Most of the error models proposed so far compute the uncertainty as a function of precipitation intensity only, and only at one specific spatiotemporal scale. We propose a spectral error model that incorporates the neighboring space–time dynamics of precipitation into the uncertainty quantification. Systematic distortions of the precipitation signal and random errors are characterized distinctly in every frequency–wavenumber band in the Fourier domain, to accurately characterize error across scales. The systematic distortions are represented as a deterministic space–time linear filtering term. The random errors are represented as a nonstationary additive noise. The spectral error model is applied to the IMERG multisatellite precipitation product, and its parameters are estimated empirically through a system identification approach using the GV-MRMS gauge–radar measurements as reference ("truth") over the eastern United States. The filtering term is found to be essentially low-pass (attenuating the fine-scale variability). While traditional error models attribute most of the error variance to random errors, it is found here that the systematic filtering term explains 48% of the error variance at the native resolution of IMERG. This fact confirms that, at high resolution, filtering effects in satellite precipitation products cannot be ignored, and that the error cannot be represented as a purely random additive or multiplicative term. An important consequence is that precipitation estimates derived from different sources should not be expected to automatically have statistically independent errors. Significance Statement: Satellite precipitation products are nowadays widely used for climate and environmental research, water management, risk analysis, and decision support at the local, regional, and global scales. For all these applications, knowledge about the accuracy of the products is critical for their usability. However, products are not systematically provided with a quantitative measure of the uncertainty associated with each individual estimate. Various parametric error models have been proposed for uncertainty quantification, mostly assuming that the uncertainty is only a function of the precipitation intensity at the pixel and time of interest. By projecting satellite precipitation fields and their retrieval errors into the Fourier frequency–wavenumber domain, we show that we can explicitly take into account the neighboring space–time multiscale dynamics of precipitation and compute a scale-dependent uncertainty.
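A minimal sketch of the system-identification idea described above, assuming matched reference and satellite fields on a common grid: the systematic term is estimated as a per-wavenumber transfer function from cross- and auto-spectra, and what the filter does not explain is treated as additive noise. This is an illustration of the general approach, not the paper's implementation; it uses a 2-D spatial FFT rather than the full space–time frequency–wavenumber treatment, and all names are placeholders.

```python
import numpy as np

def spectral_error_model(ref_fields, sat_fields):
    """Estimate a linear filtering (systematic) term H(k) and a residual noise
    spectrum N(k) from matched reference / satellite precipitation fields.

    ref_fields, sat_fields: arrays of shape (n_samples, ny, nx) on a common grid.
    Model per 2-D wavenumber: SAT(k) = H(k) * REF(k) + noise(k).
    """
    R = np.fft.fft2(ref_fields, axes=(-2, -1))
    S = np.fft.fft2(sat_fields, axes=(-2, -1))

    # Standard system-identification estimator: H = <S R*> / <|R|^2>.
    cross = np.mean(S * np.conj(R), axis=0)
    auto_ref = np.mean(np.abs(R) ** 2, axis=0)
    H = cross / np.maximum(auto_ref, 1e-12)

    # Whatever the filter does not explain is treated as additive random error.
    residual = S - H[None, :, :] * R
    noise_spectrum = np.mean(np.abs(residual) ** 2, axis=0)

    # Fraction of total error variance explained by the systematic term.
    err_spectrum = np.mean(np.abs(S - R) ** 2, axis=0)
    explained = 1.0 - noise_spectrum.sum() / err_spectrum.sum()
    return H, noise_spectrum, explained
```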
This paper proposes EFTSanitizer, a fast shadow execution framework for detecting and debugging numerical errors during the late stages of testing, especially for long-running applications. Any shadow execution framework needs an oracle to compare against the floating point (FP) execution. This paper makes a case for using error-free transformations, which are sequences of operations that compute the error of a primitive operation with existing hardware-supported FP operations, as an oracle for shadow execution. Although the error of a single correctly rounded FP operation is bounded, the accumulation of errors across operations can result in exceptions, slow convergence, and even crashes. To ease the job of debugging such errors, EFTSanitizer provides a directed acyclic graph (DAG) that highlights the propagation of errors that result in exceptions or crashes. Unlike prior work, DAGs produced by EFTSanitizer include operations that span multiple function calls while keeping memory usage bounded. To enable the use of such shadow execution tools with long-running applications, EFTSanitizer also supports starting the shadow execution at an arbitrary point in the dynamic execution, which we call selective shadow execution. EFTSanitizer is an order of magnitude faster than prior state-of-the-art shadow execution tools such as FPSanitizer and Herbgrind. We have discovered new numerical errors and debugged them using EFTSanitizer.
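For readers unfamiliar with error-free transformations, the classic example is Knuth's TwoSum, which recovers the exact rounding error of a floating-point addition using only ordinary FP operations. The sketch below illustrates that oracle idea; it is not EFTSanitizer's actual code.

```python
def two_sum(a: float, b: float):
    """Knuth's TwoSum error-free transformation: returns (s, e) such that
    s = fl(a + b) and a + b == s + e exactly, using only rounded FP ops."""
    s = a + b
    b_virtual = s - a                 # the part of b actually absorbed into s
    a_virtual = s - b_virtual         # the part of a actually absorbed into s
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

# The rounding error of one addition, recovered exactly: 1.0 is entirely lost
# when added to 1e16, and TwoSum hands it back as the error term.
s, e = two_sum(1e16, 1.0)
print(s, e)   # 1e+16 1.0
```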
Abstract Massively multiplexed spectrographs will soon gather large statistical samples of stellar spectra. The accurate estimation of uncertainties on derived parameters, such as the line-of-sight velocity v_los, especially for spectra with low signal-to-noise ratios (S/Ns), is paramount. We generated an ensemble of simulated optical spectra of stars as if they were observed with low- and medium-resolution fiber-fed instruments on an 8 m class telescope, similar to the Subaru Prime Focus Spectrograph, and determined v_los by fitting stellar templates to the simulated spectra. We compared the empirical errors of the derived parameters (calculated from an ensemble of simulations) to the asymptotic errors determined from the Fisher matrix, as well as from Monte Carlo sampling of the posterior probability. We confirm that the uncertainty of v_los scales with the inverse square root of the S/N, but also show how this scaling breaks down at low S/N and analyze the error and bias caused by template mismatch. We outline a computationally optimized algorithm to fit multiexposure data and provide a mathematical model of stellar spectrum fitting that maximizes the so-called significance, which allows for calculating the error from the Fisher matrix analytically. We also introduce the effective line count, and provide a scaling relation to estimate the errors of v_los measurements based on stellar type. Our analysis covers a range of stellar types with parameters that are typical of the Galactic outer disk and halo, together with analogs of stars in M31 and in satellite dwarf spheroidal galaxies around the Milky Way.
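As a concrete illustration of the Fisher-matrix error estimate, the sketch below computes the Cramér–Rao lower bound on v_los for a single-template fit with independent Gaussian pixel noise. The noise model, wavelength grid, and toy absorption line are assumptions introduced here, not the paper's setup.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light in km/s

def vlos_crlb(wave, template_flux, noise_sigma):
    """Cramer-Rao lower bound on the line-of-sight velocity from fitting a
    template to a spectrum with independent Gaussian pixel noise.

    wave           : wavelength grid [Angstrom]
    template_flux  : noiseless template sampled on `wave`
    noise_sigma    : per-pixel noise standard deviation (same shape as `wave`)
    Returns the 1-sigma velocity uncertainty in km/s.
    """
    # Doppler derivative of the model: dF/dv = -(lambda / c) * dF/dlambda.
    dflux_dlambda = np.gradient(template_flux, wave)
    dflux_dv = -(wave / C_KMS) * dflux_dlambda

    # Scalar Fisher information for the single parameter v_los.
    fisher = np.sum((dflux_dv / noise_sigma) ** 2)
    return 1.0 / np.sqrt(fisher)

# Toy example: a single Gaussian absorption line; with half the per-pixel
# noise the bound is half as large in this simple Gaussian-noise model.
wave = np.linspace(5000.0, 5010.0, 2000)
line = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5005.0) / 0.2) ** 2)
print(vlos_crlb(wave, line, noise_sigma=np.full_like(wave, 0.05)))
print(vlos_crlb(wave, line, noise_sigma=np.full_like(wave, 0.025)))
```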
Abstract Measurement of time-resolved velocities with large accelerations is challenging because the optimal capture rate and pixel resolution change with velocity. It is known for velocity measurements that high temporal resolution combined with low pixel resolution increases the velocity uncertainty. This makes selecting acceptable camera settings unintuitive and can result in highly uncertain measurements. For experimental conditions with slow velocities (< 10 m/s) where high temporal resolution is required (because of rapid acceleration), there arises a need for exponentially increasing pixel resolution to minimize experimental uncertainty, which is often impossible to achieve experimentally. Desired measurements of early flame propagation span a wide range of velocities, which can exceed 10 m/s during ignition and drop to under 1 m/s depending on the pressure. This rapid velocity change usually occurs within a millisecond timeframe. Typical camera-based velocity measurement usually observes either fast- or slow-moving objects with either an average velocity or a velocity at a single time. The goal of this work is to accurately measure such a rapidly changing experimental condition using camera-based measurement and to understand the effect various processing methods have on the result. A practical method is presented here to quantify the noise and observe any errors induced by improper processing, where measurable physical analogs are used to represent future experimental conditions. These experimental analogs are rotating disks with known radial and velocity profiles that enable the assessment of experimental parameters and post-processing techniques. Parameters considered include pixel resolution, framerate, and smoothing techniques such as moving average, Whittaker, and Savitzky-Golay filters.
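A minimal sketch, under assumed numbers, of how pixel-quantized position samples from such a rotating-disk analog can be differentiated to velocity and smoothed with a moving average or a Savitzky-Golay filter (two of the techniques named above); the Whittaker smoother is omitted here for lack of a standard SciPy implementation. Frame rate, pixel calibration, and disk speed are illustrative values only, not the paper's test conditions.

```python
import numpy as np
from scipy.signal import savgol_filter

def velocity_from_positions(pos_px, fps, mm_per_px, window=11, polyorder=3):
    """Estimate velocity from pixel-quantized position samples of a tracked
    feature (e.g., a marker on a rotating disk with a known velocity profile).

    pos_px    : 1-D array of positions in pixels
    fps       : camera frame rate [1/s]
    mm_per_px : spatial calibration
    Returns (v_raw, v_movavg, v_savgol) in mm/s.
    """
    dt = 1.0 / fps
    pos_mm = pos_px * mm_per_px

    # Raw finite-difference velocity: noise grows with framerate and coarse pixels.
    v_raw = np.gradient(pos_mm, dt)

    # Moving-average smoothing of the raw velocity.
    kernel = np.ones(window) / window
    v_movavg = np.convolve(v_raw, kernel, mode="same")

    # Savitzky-Golay: fit local polynomials to position and take the derivative.
    v_savgol = savgol_filter(pos_mm, window, polyorder, deriv=1, delta=dt)
    return v_raw, v_movavg, v_savgol

# Synthetic analog: a point at 30 mm radius on a disk spinning at 10 rev/s,
# observed at 10 kHz with 0.05 mm/pixel quantization.
fps, mm_per_px = 10_000.0, 0.05
t = np.arange(0, 0.01, 1.0 / fps)
true_pos = 30.0 * np.sin(2 * np.pi * 10.0 * t)   # mm, projected marker position
pos_px = np.round(true_pos / mm_per_px)          # pixel quantization
v_raw, v_ma, v_sg = velocity_from_positions(pos_px, fps, mm_per_px)
```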