
Title: DQ Admittance Model Extraction for IBRs via Gaussian Pulse Excitation
While dq admittance models have proven very useful for stability analysis, extracting admittance models of inverter-based resources (IBRs) from the electromagnetic transient (EMT) simulation environment using frequency scans is time-consuming. In this letter, a new perturbation method based on Gaussian pulses, combined with system identification algorithms, shows great promise for parametric dq admittance model extraction. We present the dq admittance model extraction method for a type-4 wind turbine and point out the challenges in implementing Gaussian pulse excitation. The dq admittance model extracted via the new method closely matches the measurements obtained from frequency scans.
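The letter does not include implementation details, but the core idea — excite the system with a wideband Gaussian pulse and recover its admittance from the response — can be sketched as follows. Everything here is an illustrative assumption: the sampling rate, pulse width, and the first-order RL stand-in for the IBR are not the paper's wind-turbine model, and a plain FFT ratio is used in place of the paper's parametric system identification step.

```python
import numpy as np

# Illustrative parameters (assumed, not from the letter).
fs = 10_000.0                     # sampling rate [Hz]
dt = 1.0 / fs
t = np.arange(0.0, 1.0, dt)

# Small-signal Gaussian voltage pulse: short in time, hence wideband in frequency.
t0, sigma, amp = 0.5, 0.002, 0.01
v = amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

# Stand-in device with admittance Y(s) = 1 / (R + sL), simulated by forward Euler.
R, L = 0.1, 1e-3
i = np.zeros_like(v)
for k in range(len(t) - 1):
    i[k + 1] = i[k] + dt * (v[k] - R * i[k]) / L

# Nonparametric admittance estimate: ratio of response to excitation spectra,
# kept to the band where the Gaussian pulse still has significant energy.
V = np.fft.rfft(v)
I = np.fft.rfft(i)
f = np.fft.rfftfreq(len(t), dt)
band = (f >= 1.0) & (f <= 100.0)
Y_est = I[band] / V[band]
Y_true = 1.0 / (R + 1j * 2.0 * np.pi * f[band] * L)
```

In the letter the response is instead fed to system identification algorithms to obtain a parametric dq model; for a full 2×2 dq admittance, the pulse would be injected on the d- and q-axis channels separately.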
Award ID(s):
1807974
PAR ID:
10475619
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Transactions on Power Systems
Volume:
38
Issue:
3
ISSN:
0885-8950
Page Range / eLocation ID:
2966 to 2969
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we demonstrate methods to extract dq admittance for a solar photovoltaic (PV) farm from its black-box model used for electromagnetic transient (EMT) simulation. Each dq admittance corresponds to a certain operating condition. Based on the dq admittance, analysis is carried out to evaluate how grid strength and solar irradiance may influence stability. Two types of stability analysis methods (open-loop system based and closed-loop system based) are examined; both can work directly with frequency-domain measurements of the dq admittance and produce graphics for stability analysis. The findings of the dq admittance-based analysis are shown to corroborate EMT simulation results.
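As a rough illustration of the admittance-based stability screening described above, the generalized Nyquist criterion can be applied to the minor-loop gain Zg(jω)·Y(jω). The 2×2 admittance and grid impedance below are synthetic stand-ins, not the PV farm's measured data; the 60 Hz fundamental and all element values are assumptions.

```python
import numpy as np

# Frequency scan grid: 1 Hz .. 1 kHz (assumed).
f = np.logspace(0, 3, 400)
w = 2.0 * np.pi * f
w1 = 2.0 * np.pi * 60.0               # fundamental frequency (assumed 60 Hz)

# Diagonal RL admittance as a placeholder for the measured Y_dq(jw).
R, L = 0.05, 1e-3
Y = np.zeros((len(w), 2, 2), dtype=complex)
Y[:, 0, 0] = Y[:, 1, 1] = 1.0 / (R + 1j * w * L)

# Grid impedance in dq: the inductance Lg sets grid strength; the
# off-diagonal terms are the dq coupling of an inductor at w1.
Lg = 2e-3
Zg = np.zeros_like(Y)
Zg[:, 0, 0] = Zg[:, 1, 1] = 1j * w * Lg
Zg[:, 0, 1] = -w1 * Lg
Zg[:, 1, 0] = w1 * Lg

# Generalized Nyquist criterion: eigen-loci of the minor-loop gain Zg*Y.
loop = Zg @ Y                          # batched 2x2 matrix product
eig_loci = np.linalg.eigvals(loop)     # shape (400, 2)

# Simple screening metric: closest approach of the loci to the critical
# point (-1 + 0j); a small distance flags a poor stability margin.
margin = np.min(np.abs(eig_loci + 1.0))
```

Weakening the grid (larger Lg, lower short-circuit ratio) pushes the loci toward the critical point, which is how grid strength enters this kind of analysis.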
  2. Abstract Objective In response to COVID-19, the informatics community united to aggregate as much clinical data as possible to characterize this new disease and reduce its impact through collaborative analytics. The National COVID Cohort Collaborative (N3C) is now the largest publicly available HIPAA limited dataset in US history, with over 6.4 million patients, and is a testament to a partnership of over 100 organizations. Materials and Methods We developed a pipeline for ingesting, harmonizing, and centralizing data from 56 contributing data partners using 4 federated Common Data Models. N3C data quality (DQ) review involves both automated and manual procedures. In the process, several DQ heuristics were discovered in our centralized context, both within the pipeline and during downstream project-based analysis. Feedback to the sites led to many local and centralized DQ improvements. Results Beyond well-recognized DQ findings, we discovered 15 heuristics relating to source Common Data Model conformance, demographics, COVID tests, conditions, encounters, measurements, observations, coding completeness, and fitness for use. Of 56 sites, 37 (66%) demonstrated issues through these heuristics, and these 37 sites demonstrated improvement after receiving feedback. Discussion We encountered site-to-site differences in DQ which would have been challenging to discover using federated checks alone. We have demonstrated that centralized DQ benchmarking reveals unique opportunities for DQ improvement that will support improved research analytics locally and in aggregate. Conclusion By combining rapid, continual assessment of DQ with a large volume of multisite data, it is possible to support more nuanced scientific questions with the scale and rigor that they require.
  3. Topological data analysis (TDA) has proven to be a potent approach for extracting intricate topological structures from complex and high-dimensional data. In this paper, we propose a TDA-based processing pipeline for analyzing multi-channel scalp EEG data. The pipeline starts by extracting both frequency and temporal information from the signals via the Hilbert–Huang Transform. The sequences of instantaneous frequency and instantaneous amplitude across all electrode channels are treated as approximations of curves in a high-dimensional space. TDA features, which represent the local topological structure of the curves, are further extracted and used in the classification models. Three sets of scalp EEG data, including one collected in a lab and two from Brain–Computer Interface (BCI) competitions, were used to validate the proposed methods and compare them with other state-of-the-art TDA methods. The proposed TDA-based approach shows superior performance and outperforms the winner of the BCI competition. Beyond BCI, the proposed method can also be applied to spatial and temporal data in other domains such as computer vision, remote sensing, and medical imaging.
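The first stage of the pipeline above — instantaneous frequency and amplitude via the Hilbert–Huang Transform — can be sketched with a frequency-domain Hilbert transform. This is an illustrative sketch only: the empirical mode decomposition step of the HHT is omitted, so the input is a single mono-component test tone standing in for one intrinsic mode function of an EEG channel.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0          # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)      # negative frequencies are zeroed

# Assumed mono-component signal: a 10 Hz tone sampled at 1 kHz.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * 10.0 * t)

z = analytic_signal(x)
inst_amp = np.abs(z)                                # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)     # instantaneous frequency [Hz]
```

In the pipeline, the per-channel sequences of instantaneous frequency and amplitude are then stacked into high-dimensional curves and summarized with TDA features.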
  4. This study compares the frequency spectra of seasonal precipitation during the last millennium from climate model simulations, tree-ring-based reconstructions, and gauge-based gridded observations of the twentieth century. Climate model simulations are from phase 6 of the Coupled Model Intercomparison Project (CMIP6) past1000 experiment, while tree-ring reconstructions are derived from the North American Seasonal Precipitation Atlas (NASPA). NASPA and CMIP6 model output are analyzed to understand their unique frequency biases in high-, mid-, and low-frequency ranges for both paleoclimatic millennial and recent centennial time series across North America. This was accomplished by first extracting signals from periodic ranges of 2–6, 4–15, 10–30, 20–50, and 30–110 years and then analyzing the result in Fourier space. This study reveals that the NASPA shows better alignment with observations in the frequency domain than global climate models (GCMs), even for low-frequency components. Moreover, the spatial distributions of the spectral biases indicate significant disagreements between NASPA and GCMs in eastern North America during cool seasons and in western North America during warm seasons, for both the historical centennial and preindustrial millennial periods. This is likely caused by NASPA tree-ring sensitivity, as its distribution roughly mirrors NASPA skill metrics. Notably, the spatial patterns of spectral biases differ between the modern and preindustrial eras, suggesting a changing bias through time. This study provides a new frequency-based metric to evaluate climate models and reconstructions and provides a first comparison of the two for North America.
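The band-extraction step described above — isolating signals in fixed period ranges before comparing spectra — can be sketched with a simple FFT-mask band-pass filter. The synthetic annual series below (a 4-year cycle, a 40-year cycle, and noise) is an assumed stand-in; only the five period bands come from the abstract.

```python
import numpy as np

def bandpass_fft(x, dt_years, p_low, p_high):
    """Keep only Fourier components whose period lies in [p_low, p_high] years."""
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(len(x), dt_years)            # cycles per year
    period = 1.0 / np.maximum(f, 1e-12)              # avoid divide-by-zero at f = 0
    keep = (f > 0) & (period >= p_low) & (period <= p_high)
    return np.fft.irfft(np.where(keep, X, 0.0), n=len(x))

# Hypothetical millennium-length annual precipitation anomaly series.
rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)
x = (np.sin(2 * np.pi * t / 4)            # high-frequency (4-year) component
     + 0.5 * np.sin(2 * np.pi * t / 40)   # low-frequency (40-year) component
     + 0.2 * rng.standard_normal(n))      # noise

# The five period bands used in the study, in years.
bands = {"2-6": (2, 6), "4-15": (4, 15), "10-30": (10, 30),
         "20-50": (20, 50), "30-110": (30, 110)}
power = {name: float(np.mean(bandpass_fft(x, 1.0, lo, hi) ** 2))
         for name, (lo, hi) in bands.items()}
```

Each band-limited signal can then be compared across models, reconstructions, and observations in Fourier space, which is the basis of the study's frequency-based bias metric.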
  5. Training machine learning (ML) models for scientific problems is often challenging due to limited observation data. To overcome this challenge, prior works commonly pre-train ML models using simulated data before fine-tuning them with small amounts of real data. Despite the promise shown in initial research across different domains, these methods cannot ensure improved performance after fine-tuning because (i) they are not designed for extracting generalizable physics-aware features during pre-training, and (ii) the features learned from pre-training can be distorted by the fine-tuning process. In this paper, we propose a new learning method for extracting, preserving, and adapting physics-aware features. We build a knowledge-guided neural network (KGNN) model based on known dependencies amongst physical variables, which facilitates extracting physics-aware feature representations from simulated data. We then fine-tune this model by alternately updating the encoder and decoder of the KGNN model to enhance the prediction while preserving the physics-aware features learned through pre-training. We further propose to adapt the model to new testing scenarios via a teacher-student learning framework based on the model uncertainty. The results demonstrate that the proposed method outperforms many baselines by a clear margin, even when using sparse training data or under out-of-sample testing scenarios.
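The alternating encoder/decoder fine-tuning idea above can be illustrated with a deliberately tiny linear stand-in. Everything here is an assumption for illustration: the paper's KGNN is a neural network built from known physical dependencies, whereas this sketch uses linear encoder/decoder matrices, synthetic data, and a quadratic penalty that pulls the encoder toward its "pre-trained" weights to mimic feature preservation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical sizes and targets).
n, d_in, d_hid = 200, 5, 3
X = rng.standard_normal((n, d_in))
w_true = rng.standard_normal((d_in, 1))
y = X @ w_true + 0.01 * rng.standard_normal((n, 1))

# "Pre-trained" encoder/decoder weights (in the paper, learned on simulated data).
We = rng.standard_normal((d_in, d_hid)) / np.sqrt(d_in)
Wd = rng.standard_normal((d_hid, 1)) * 0.1
We_pre = We.copy()

def mse(We, Wd):
    r = X @ We @ Wd - y
    return float(np.mean(r ** 2))

initial = mse(We, Wd)
lr, lam = 0.05, 0.1   # lam: strength of the feature-preservation penalty (assumed)

# Alternate updates: even epochs adjust the decoder with the encoder frozen;
# odd epochs adjust the encoder with the decoder frozen, plus a penalty
# keeping the encoder close to its pre-trained state.
for epoch in range(300):
    H = X @ We                         # encoder features
    r = H @ Wd - y                     # prediction residual
    if epoch % 2 == 0:
        Wd -= lr * (2.0 * H.T @ r / n)
    else:
        grad = 2.0 * X.T @ (r @ Wd.T) / n + 2.0 * lam * (We - We_pre)
        We -= lr * grad

final = mse(We, Wd)
```

The penalty term is the simplest possible proxy for "preserving physics-aware features"; the paper additionally adapts to new test scenarios with an uncertainty-based teacher-student framework, which this sketch does not attempt.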