Foundational and state-of-the-art anomaly-detection methods for power system state estimation are reviewed. Traditional bad-data detection techniques, such as chi-square testing, residual-based methods, and hypothesis testing, are discussed to motivate recent anomaly-detection methods in light of the increasing complexity of power grids, energy management systems, and cyber-threats. In particular, state-estimation anomaly detection based on data-driven quickest-change detection and artificial intelligence is discussed, and research directions are suggested, with particular emphasis on the future smart grid.
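As a concrete illustration of the classical residual-based approach mentioned above, the chi-square test compares the weighted sum of squared measurement residuals against a chi-square threshold. This is a minimal sketch under a linearized measurement model; the function name, arguments, and significance level are our assumptions, not details from the abstract:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_bad_data_test(z, H, x_hat, R, alpha=0.01):
    """Classical chi-square test for bad data in weighted least-squares
    state estimation (sketch; assumes a linearized measurement model).

    z     : measurement vector, shape (m,)
    H     : measurement Jacobian, shape (m, n)
    x_hat : estimated state, shape (n,)
    R     : measurement noise covariance, shape (m, m)
    alpha : significance level of the test
    Returns True if bad data is suspected.
    """
    r = z - H @ x_hat                      # measurement residual
    J = r @ np.linalg.inv(R) @ r           # weighted sum of squared residuals
    dof = H.shape[0] - H.shape[1]          # degrees of freedom, m - n
    threshold = chi2.ppf(1.0 - alpha, dof)
    return J > threshold
```

With clean measurements the statistic stays below the threshold, while a gross error on a single measurement drives it far above, which is the detection principle the review builds on.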
C0 interior penalty methods for an elliptic distributed optimal control problem with general tracking and pointwise state constraints
We consider C0 interior penalty methods for a linear-quadratic elliptic distributed optimal control problem with pointwise state constraints in two spatial dimensions, where the cost function tracks the state at points, curves and regions of the domain. Error estimates and numerical results that illustrate the performance of the methods are presented.
- Award ID(s): 2208404
- PAR ID: 10508359
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Computers & Mathematics with Applications
- Volume: 155
- Issue: C
- ISSN: 0898-1221
- Page Range / eLocation ID: 80 to 90
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The simulation of excited states at low computational cost remains an open challenge for electronic structure (ES) methods. While much attention has been given to orthogonal ES methods, relatively little work has been done to develop nonorthogonal ES methods for excited states, particularly those involving nonorthogonal orbital optimization. We present here a numerically stable formulation of the Resonating Hartree–Fock (ResHF) method that uses the matrix adjugate to remove numerical instabilities arising from nearly orthogonal orbitals, and as a result, we demonstrate improvements to ResHF wavefunction optimization. We then benchmark the performance of ResHF against complete active space self-consistent field in the avoided crossing of LiF, the torsional rotation of ethene, and the singlet–triplet energy gaps of a selection of small molecules. ResHF is a promising excited state method because it incorporates the orbital relaxation of state-specific methods, while retaining the correct state crossings of state-averaged approaches. Our open-source ResHF implementation, yucca, is available on GitLab.
In partially observable reinforcement learning, offline training gives access to latent information which is not available during online training and/or execution, such as the system state. Asymmetric actor-critic methods exploit such information by training a history-based policy via a state-based critic. However, many asymmetric methods lack theoretical foundation, and are only evaluated on limited domains. We examine the theory of asymmetric actor-critic methods which use state-based critics, and expose fundamental issues which undermine the validity of a common variant, and limit its ability to address partial observability. We propose an unbiased asymmetric actor-critic variant which is able to exploit state information while remaining theoretically sound, maintaining the validity of the policy gradient theorem, and introducing no bias and relatively low variance into the training process. An empirical evaluation performed on domains which exhibit significant partial observability confirms our analysis, demonstrating that unbiased asymmetric actor-critic converges to better policies and/or faster than symmetric and biased asymmetric baselines.
Recent advancements in two-photon calcium imaging have enabled scientists to record the activity of thousands of neurons with cellular resolution. This scope of data collection is crucial to understanding the next generation of neuroscience questions, but analyzing these large recordings requires automated methods for neuron segmentation. Supervised methods for neuron segmentation achieve state-of-the-art accuracy and speed but currently require large amounts of manually generated ground truth training labels. We reduced the required number of training labels by designing a semi-supervised pipeline. Our pipeline used neural network ensembling to generate pseudolabels to train a single shallow U-Net. We tested our method on three publicly available datasets and compared our performance to three widely used segmentation methods. Our method outperformed other methods when trained on a small number of ground truth labels and could achieve state-of-the-art accuracy after training on approximately a quarter of the number of ground truth labels as supervised methods. When trained on many ground truth labels, our pipeline attained higher accuracy than that of state-of-the-art methods. Overall, our work will help researchers accurately process large neural recordings while minimizing the time and effort needed to generate manual labels.
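The pseudolabeling step described above — combining an ensemble's predictions into training labels for a single network — can be sketched as a per-pixel majority vote. The abstract does not specify the ensembling rule, so the voting scheme and thresholds here are illustrative assumptions:

```python
import numpy as np

def ensemble_pseudolabels(prob_maps, agree_thresh=0.5):
    """Majority-vote pseudolabels from an ensemble of segmentation outputs
    (sketch; the actual pipeline's combination rule may differ).

    prob_maps    : array of shape (n_models, H, W) holding per-pixel
                   foreground probabilities from each ensemble member
    agree_thresh : fraction of models that must mark a pixel foreground
    Returns a binary (H, W) pseudolabel mask suitable for training.
    """
    votes = (prob_maps > 0.5).mean(axis=0)   # fraction of models voting foreground
    return (votes >= agree_thresh).astype(np.uint8)
```

A single shallow network trained on such masks only needs ground-truth labels to train the ensemble members, which is how the pipeline reduces manual labeling effort.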
Various methods for Multi-Agent Reinforcement Learning (MARL) have been developed with the assumption that agents' policies are based on accurate state information. However, policies learned through Deep Reinforcement Learning (DRL) are susceptible to adversarial state perturbation attacks. In this work, we propose a State-Adversarial Markov Game (SAMG) and make the first attempt to investigate different solution concepts of MARL under state uncertainties. Our analysis shows that the commonly used solution concepts of optimal agent policy and robust Nash equilibrium do not always exist in SAMGs. To circumvent this difficulty, we consider a new solution concept called robust agent policy, where agents aim to maximize the worst-case expected state value. We prove the existence of robust agent policy for finite state and finite action SAMGs. Additionally, we propose a Robust Multi-Agent Adversarial Actor-Critic (RMA3C) algorithm to learn robust policies for MARL agents under state uncertainties. Our experiments demonstrate that our algorithm outperforms existing methods when faced with state perturbations and greatly improves the robustness of MARL policies. Our code is publicly available at https://songyanghan.github.io/what_is_solution/.
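The robust-agent-policy solution concept — maximizing the worst-case expected value over state perturbations — reduces, for a finite set of candidate policies and a finite perturbation set, to a simple max-min selection. This is a toy sketch with a hypothetical payoff table, not the RMA3C algorithm itself:

```python
import numpy as np

def max_min_policy(values):
    """Select the policy that maximizes its worst-case value (sketch).

    values : array of shape (n_policies, n_perturbations), where
             values[i, j] is the expected return of policy i under
             adversarial state perturbation j.
    Returns (best_policy_index, worst_case_value).
    """
    worst_case = values.min(axis=1)     # adversary picks the worst perturbation
    best = int(worst_case.argmax())     # agent maximizes the worst case
    return best, float(worst_case[best])
```

A policy with a high average return can still lose to one with a better worst case, which is exactly the distinction between the standard and robust solution concepts discussed above.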