Recently, there has been a major emphasis on developing data-driven approaches involving machine learning (ML) for high-speed static state estimation (SE) in power systems. The emphasis stems from the ability of ML to overcome difficulties associated with model-based approaches, such as handling non-Gaussian measurement noise. However, topology changes pose a stiff challenge for ML-based SE because they make the training and test environments different. This paper circumvents this challenge by formulating a graph neural network (GNN)-based time-synchronized state estimator that accounts for the physical connections of the power system during training itself. Results obtained on the IEEE 118-bus system indicate that the GNN-based state estimator outperforms the model-based linear state estimator in the presence of non-Gaussian measurement noise, and a data-driven deep neural network-based state estimator under topology changes.
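To illustrate the core idea, the sketch below performs a single graph-convolution (message-passing) step over a toy 4-bus radial feeder: each bus aggregates features from its electrically connected neighbors before a shared linear map and nonlinearity are applied. The layer form, feature dimensions, and weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One message-passing step: each bus aggregates features from its
    electrically connected neighbors (plus itself via self-loops), then
    applies a shared linear map followed by a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy 4-bus radial feeder: buses 0-1, 1-2, and 2-3 are connected.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

rng = np.random.default_rng(0)
H = rng.random((4, 2))   # per-bus input features (e.g., measured phasors)
W = rng.random((2, 3))   # layer weights (random here, learned in practice)
Z = gcn_layer(H, A, W)   # per-bus embeddings after one message pass
```

Because the topology enters through `A`, a line switching event changes the aggregation pattern directly rather than invalidating the trained weights, which is the intuition behind the GNN's robustness to topology changes.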
Analytical verification of deep neural network performance for time-synchronized distribution system state estimation
Recently, we demonstrated the success of a time-synchronized state estimator using deep neural networks (DNNs) for real-time unobservable distribution systems. In this paper, we provide analytical bounds on the performance of the state estimator as a function of perturbations in the input measurements. It has already been shown that evaluating performance based only on the test dataset might not effectively indicate the ability of a trained DNN to handle input perturbations. As such, we analytically verify the robustness and trustworthiness of DNNs to input perturbations by treating them as mixed-integer linear programming (MILP) problems. The ability of batch normalization to address the scalability limitations of the MILP formulation is also highlighted. The framework is validated by performing time-synchronized distribution system state estimation for a modified IEEE 34-node system and a real-world large distribution system, both of which are incompletely observed by micro-phasor measurement units.
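To give a flavor of the MILP encoding, the sketch below propagates an input interval through affine-plus-ReLU layers to obtain pre-activation bounds. In the standard big-M encoding of a ReLU y = max(z, 0), bounds l ≤ z ≤ u supply the constants in the constraints y ≥ 0, y ≥ z, y ≤ z − l(1 − d), y ≤ u·d with a binary indicator d. The toy weights and dimensions are made up for illustration; this is not the paper's exact formulation.

```python
import numpy as np

def preactivation_bounds(weights, biases, x_lb, x_ub):
    """Propagate an input box [x_lb, x_ub] through affine + ReLU layers and
    return (lower, upper) pre-activation bounds per layer. These bounds act
    as the big-M constants when each ReLU is encoded as MILP constraints."""
    lb, ub = x_lb, x_ub
    bounds = []
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        z_lb = W_pos @ lb + W_neg @ ub + b   # worst-case low pre-activation
        z_ub = W_pos @ ub + W_neg @ lb + b   # worst-case high pre-activation
        bounds.append((z_lb, z_ub))
        lb, ub = np.maximum(z_lb, 0.0), np.maximum(z_ub, 0.0)  # ReLU image
    return bounds

# Toy one-hidden-layer network with inputs perturbed within [0, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
layer_bounds = preactivation_bounds([W1], [b1], np.zeros(2), np.ones(2))
```

Tighter bounds shrink the big-M constants and fix many binary indicators outright, which is one way normalization layers can ease the scalability of such MILP formulations.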
- Award ID(s): 2145063
- PAR ID: 10565109
- Publisher / Repository: SGEPRI
- Date Published:
- Journal Name: Journal of Modern Power Systems and Clean Energy
- Volume: 12
- Issue: 4
- ISSN: 2196-5625
- Page Range / eLocation ID: 1126 to 1134
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The distribution system is an integral part of the electric power system, but not much is known about how it behaves in real time. To address this knowledge gap, a low-cost, time-synchronized, micro point-on-wave recorder is designed, built, and characterized in this paper. The inductively powered recorder operates wirelessly by using the current flowing through a typical distribution conductor. The recorder is designed to be small and lightweight, and is intended to be installed directly on the power line. To validate the performance of this recorder, tests of measurement accuracy, electric current requirements, and susceptibility to electromagnetic interference from both steady-state and arc-induced sources are performed. The results indicate that the proposed recorder satisfies both the technical and economic constraints required for bulk deployment in an actual distribution network.
- Deep neural network (DNN) models for computer vision tasks (object detection and classification) are widely used in autonomous vehicles, such as driverless cars and unmanned aerial vehicles. However, DNN models have been shown to be vulnerable to adversarial image perturbations. The generation of adversarial examples against DNN inferences has been actively studied recently. The generation typically relies on optimizations that take an entire image frame as the decision variable; hence, given a new image, the computationally expensive optimization must start over, because no learning is carried between the independent optimizations. Very few approaches have been developed for attacking online image streams while taking into account the underlying physical dynamics of autonomous vehicles, their mission, and the environment. This article presents a multi-level reinforcement learning framework that can effectively generate adversarial perturbations to misguide autonomous vehicles' missions. In existing image attack methods against autonomous vehicles, optimization steps are repeated for every image frame; this framework removes the need for fully converged optimization at every frame. Using multi-level reinforcement learning, we integrate a state estimator and a generative adversarial network that generates the adversarial perturbations. Because the reinforcement learning agent, which consists of a state estimator, an actor, and a critic, uses only image streams, the proposed framework can misguide the vehicle to increase the adversary's reward without knowing the states of the vehicle or the environment. Simulation studies and a robot demonstration are provided to validate the proposed framework's performance.
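The amortization idea above, replacing per-frame optimization with a learned generator applied once per frame, can be sketched minimally as follows. The one-matrix "generator", the perturbation budget, and the random image stream are toy stand-ins, not the paper's actual GAN or vehicle pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
W_gen = rng.standard_normal((8, 8)) * 0.01   # toy generator weights (assumed)

def perturb_frame(frame):
    """One forward pass per frame: O(1) per image, with no per-frame
    iterative optimization restarted from scratch."""
    delta = np.tanh(frame @ W_gen)                    # bounded in (-1, 1)
    return np.clip(frame + 0.05 * delta, 0.0, 1.0)   # small, valid-range attack

stream = [rng.random(8) for _ in range(3)]    # a short toy "image stream"
attacked = [perturb_frame(f) for f in stream]
```

The contrast with per-frame attacks is the cost model: an iterative optimizer pays many gradient steps per incoming frame, while a trained generator pays one forward pass, which is what makes attacking a live stream feasible.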
- Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems use DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including different viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples, consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings and in 84.8% of the video frames captured from a moving vehicle (field test) for the target classifier.
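A minimal sketch of the masked-perturbation idea behind sticker-style attacks: the perturbation is confined to a fixed "sticker" mask and optimized to raise a target class's score. The linear scorer, the mask, and the step size below are toy assumptions standing in for a real road-sign classifier and the paper's full optimization.

```python
import numpy as np

def masked_attack(x, mask, w_true, w_target, steps=100, lr=0.1):
    """Gradient ascent on (target score - true score), with the perturbation
    confined to the masked region and the image kept in a valid range."""
    delta = np.zeros_like(x)
    grad = w_target - w_true                  # gradient of the linear margin
    for _ in range(steps):
        delta = np.clip(delta + lr * grad * mask, -1.0, 1.0)
    return np.clip(x + delta, 0.0, 1.0)

x = np.full(4, 0.5)                        # toy "image" (4 pixels)
mask = np.array([1.0, 1.0, 0.0, 0.0])      # "sticker" covers pixels 0 and 1
w_true = np.array([1.0, 0.0, 1.0, 0.0])    # toy score vector, true class
w_target = np.array([0.0, 1.0, 0.0, 1.0])  # toy score vector, target class
x_adv = masked_attack(x, mask, w_true, w_target)
```

Restricting the update to the mask is what makes the perturbation physically realizable as stickers: only a small, printable region of the sign is modified.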