

Title: Statistical characterization of random errors present in synchrophasor measurements
The statistical characterization of the measurement errors of a phasor measurement unit (PMU) is currently receiving considerable interest in the power systems community. This paper focuses on the characteristics of the magnitude and angle errors introduced only by the PMU device (called random errors in this paper) during ambient conditions, measured using a high-precision calibrator. The experimental results indicate that the random errors follow a non-Gaussian distribution, and that M-class and P-class PMUs have distinct error characteristics. These results will help researchers design algorithms that account for the non-Gaussian nature of the errors in synchrophasor measurements, improving the practical utility of those algorithms, and they establish a precedent for using high-precision calibrators to perform accurate error tests.
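The abstract's central claim is that PMU random errors are non-Gaussian. A minimal sketch of how such a departure can be detected (using synthetic data and assumed mixture weights/scales, not the paper's measurements) is a moment-based check: a Gaussian has excess kurtosis of zero, while heavy-tailed mixture noise does not.

```python
import numpy as np

# Illustrative sketch (not the paper's data): errors drawn from a
# two-component Gaussian mixture look non-Gaussian under a simple
# moment-based check such as excess kurtosis.
rng = np.random.default_rng(0)

# Hypothetical error model: most samples from a narrow component, the
# rest from a wider one (weights and scales are assumptions).
n = 50_000
narrow = rng.normal(0.0, 1e-4, size=int(0.9 * n))
wide = rng.normal(0.0, 5e-4, size=n - int(0.9 * n))
errors = np.concatenate([narrow, wide])

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

print(f"excess kurtosis = {excess_kurtosis(errors):.2f}")  # well above 0
```

A formal analysis would use goodness-of-fit tests and distribution fitting, but positive excess kurtosis is already enough to rule out a single Gaussian description.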
Award ID(s):
1934766
NSF-PAR ID:
10290370
Date Published:
Journal Name:
IEEE Power Energy Society General Meeting
ISSN:
1944-9933
Page Range / eLocation ID:
1-5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Accurate knowledge of transmission line parameters is essential for a variety of power system monitoring, protection, and control applications. The use of phasor measurement unit (PMU) data for transmission line parameter estimation (TLPE) is well-documented. However, existing literature on PMU-based TLPE implicitly assumes the measurement noise to be Gaussian. Recently, it has been shown that the noise in PMU measurements (especially in the current phasors) is better represented by Gaussian mixture models (GMMs), i.e., the noises are non-Gaussian. We present a novel approach for TLPE that can handle non-Gaussian noise in the PMU measurements. The measurement noise is expressed as a GMM, whose components are identified using the expectation-maximization (EM) algorithm. Subsequently, noise and parameter estimation is carried out by solving a maximum likelihood estimation problem iteratively until convergence. The superior performance of the proposed approach over traditional approaches such as least squares and total least squares as well as the more recently proposed minimum total error entropy approach is demonstrated by performing simulations using the IEEE 118-bus system as well as proprietary PMU data obtained from a U.S. power utility. 
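The noise-identification step described above, fitting a Gaussian mixture model to measurement noise with expectation-maximization, can be sketched in a few lines. This is a generic 1-D EM fit on synthetic data (the component count, initialization, and data are illustrative assumptions, not the paper's residuals or implementation):

```python
import numpy as np

# Synthetic "noise" drawn from a two-component mixture.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(4.0, 0.5, 200)])

def em_gmm_1d(x, iters=200):
    """Fit a 2-component 1-D Gaussian mixture by expectation-maximization."""
    # crude initialization (assumes two reasonably separated components)
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

w, mu, var = em_gmm_1d(x)
print("weights:", w, "means:", mu, "variances:", var)
```

In the paper's pipeline the identified GMM then feeds a maximum likelihood estimation of the line parameters; here only the EM identification step is shown.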
  3. Simultaneous real-time monitoring of measurement and parameter gross errors poses a great challenge to distribution system state estimation due to typically low measurement redundancy. This paper presents a gross error analysis framework that employs μPMUs to decouple the error analysis of measurements and parameters. When a recent measurement scan from SCADA RTUs and smart meters is available, gross error analysis of measurements is performed as a post-processing step of the non-linear DSSE (NLSE). Between scans of SCADA and AMI measurements, a linear state estimator (LSE) using μPMU measurements and linearized SCADA and AMI measurements detects parameter changes caused by the operation of Volt/Var controls. For every execution of the LSE, the variance of the unsynchronized measurements is updated according to the uncertainty introduced by load dynamics, which are modeled as an Ornstein–Uhlenbeck random process. Updating the variance of unsynchronized measurements avoids false error detections and models the decreasing trustworthiness of outdated data. When new SCADA and AMI measurements arrive, the LSE provides added redundancy to the NLSE through synthetic measurements. The presented framework was tested on a 13-bus test system. Test results highlight that the LSE and NLSE processes successfully work together to analyze bad data for both measurements and parameters.
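The variance-update idea above has a simple closed form: if the drift of an aging, unsynchronized measurement is modeled as an Ornstein–Uhlenbeck process with mean-reversion rate θ and diffusion σ, the variance added after elapsed time t is σ²/(2θ)·(1 − e^(−2θt)). The sketch below uses illustrative θ/σ values, not the paper's fitted parameters:

```python
import math

def updated_variance(meas_var, t, theta=0.1, sigma=0.05):
    """Inflate a measurement's variance by the accumulated variance of an
    Ornstein-Uhlenbeck load-drift process after elapsed time t (seconds).
    theta (mean-reversion rate) and sigma (diffusion) are assumed values."""
    ou_var = sigma ** 2 / (2.0 * theta) * (1.0 - math.exp(-2.0 * theta * t))
    return meas_var + ou_var

print(updated_variance(1e-4, 0.0))   # fresh measurement: device noise only
print(updated_variance(1e-4, 30.0))  # 30 s old: inflated toward steady state
```

The inflation grows monotonically and saturates at σ²/(2θ), so stale data are smoothly down-weighted rather than abruptly discarded.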
  4. This paper presents a data-processing technique that improves the accuracy and precision of absorption-spectroscopy measurements by isolating the molecular absorbance signal from errors in the baseline light intensity (Io) using cepstral analysis. Recently, cepstral analysis has been used with traditional absorption spectrometers to create a modified form of the time-domain molecular free-induction decay (m-FID) signal, which can be analyzed independently from Io. However, independent analysis of the molecular signature is not possible when the baseline intensity and molecular response do not separate well in the time domain, which is typical when using injection-current-tuned lasers [e.g., tunable diode and quantum cascade lasers (QCLs)] and other light sources with pronounced intensity tuning. In contrast, the method presented here is applicable to virtually all light sources, since it determines gas properties by least-squares fitting a simulated m-FID signal (comprising an estimated Io and simulated absorbance spectrum) to the measured m-FID signal in the time domain. This method is insensitive to errors in the estimated Io, which vary slowly with optical frequency and, therefore, decay rapidly in the time domain. The benefits provided by this method are demonstrated via scanned-wavelength direct-absorption-spectroscopy measurements acquired with a distributed-feedback (DFB) QCL. The wavelength of a DFB QCL was scanned across the CO P(0,20) and P(1,14) absorption transitions at 1 kHz to measure the gas temperature and concentration of CO. Measurements were acquired in a gas cell and in a laminar ethylene–air diffusion flame at 1 atm. The measured spectra were processed using the new m-FID-based method and two traditional methods, which rely on inferring (instead of rejecting) the baseline error within the spectral-fitting routine. The m-FID-based method demonstrated superior accuracy in all cases and a measurement precision that was ≈1.5 to 10 times smaller than that provided by the traditional methods.
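The key property claimed above, that slowly varying baseline errors decay rapidly in the time domain while the sharper absorbance feature persists, can be illustrated numerically. In this sketch the spectral axis, Lorentzian linewidth, and baseline-error shape are all assumptions for illustration; the m-FID signal is formed with an inverse FFT of the absorbance spectrum:

```python
import numpy as np

# A narrow Lorentzian absorbance feature plus a smooth, slowly varying
# baseline error (all shapes/widths are illustrative assumptions).
nu = np.linspace(-1.0, 1.0, 2048)            # relative optical frequency
alpha = 0.5 / (1.0 + (nu / 0.02) ** 2)       # narrow Lorentzian absorbance
baseline_err = 0.05 * np.cos(np.pi * nu)     # smooth baseline-error shape

mfid_true = np.fft.ifft(alpha)               # m-FID of pure absorbance
mfid_meas = np.fft.ifft(alpha + baseline_err)

# Beyond the first few time-domain samples the baseline-error energy has
# decayed, so the two m-FID signals agree closely there.
tail = slice(20, 1024)
err = np.max(np.abs(mfid_meas[tail] - mfid_true[tail]))
print(err)
```

This is why fitting the late-time portion of the m-FID signal, as the method above does, is insensitive to errors in the estimated Io.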

  5. This paper addresses the deconvolution problem of estimating a square-integrable probability density from observations contaminated with additive measurement errors having a known density. The estimator begins with a density estimate of the contaminated observations and minimizes a reconstruction error penalized by an integrated squared m-th derivative. Theory for deconvolution has mainly focused on kernel- or wavelet-based techniques, but other methods including spline-based techniques and this smoothness-penalized estimator have been found to outperform kernel methods in simulation studies. This paper fills in some of these gaps by establishing asymptotic guarantees for the smoothness-penalized approach. Consistency is established in mean integrated squared error, and rates of convergence are derived for Gaussian, Cauchy, and Laplace error densities, attaining some lower bounds already in the literature. The assumptions are weak for most results; the estimator can be used with a broader class of error densities than the deconvoluting kernel. Our application example estimates the density of the mean cytotoxicity of certain bacterial isolates under random sampling; this mean cytotoxicity can only be measured experimentally with additive error, leading to the deconvolution problem. We also describe a method for approximating the solution by a cubic spline, which reduces to a quadratic program.
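The smoothness-penalized estimator above reduces to a quadratic program, which is beyond a short sketch. As a simpler, classical illustration of the same deconvolution setting (not the paper's method), the density of X can be recovered from contaminated observations Y = X + ε with known Gaussian ε by dividing characteristic functions, using a hard frequency cutoff as crude regularization. All parameters here are illustrative:

```python
import numpy as np

# Y = X + eps, with X ~ N(0, 1) unobserved and eps ~ N(0, s^2), s known.
rng = np.random.default_rng(2)
n = 20_000
s = 0.3
x = rng.normal(0.0, 1.0, n)          # true (unobserved) variable
y = x + rng.normal(0.0, s, n)        # contaminated observations

t = np.linspace(-6, 6, 241)          # frequency grid
phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical c.f. of Y
phi_eps = np.exp(-0.5 * (s * t) ** 2)              # known error c.f.
# divide c.f.s; zero out high frequencies as crude regularization
phi_x = np.where(np.abs(t) < 4.0, phi_y / phi_eps, 0.0)

# invert the characteristic function on a point grid to estimate the density
grid = np.linspace(-4, 4, 81)
dt = t[1] - t[0]
f_hat = (np.exp(-1j * np.outer(grid, t)) * phi_x).sum(axis=1).real * dt / (2 * np.pi)
print(f_hat[40])   # estimate at 0; the true N(0, 1) density there is ~0.399
```

The paper's estimator replaces the hard cutoff with a derivative penalty, which is what yields the broader applicability and the convergence rates described above.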