Smart water metering (SWM) infrastructure collects real-time water usage data that is useful for automated billing, leak detection, and forecasting of peak demand periods. Cyber-physical attacks can falsify this water usage data. This paper proposes a learning approach that converts smart water meter data into a Pythagorean mean-based invariant that is highly stable under normal conditions but deviates under attacks. We show how adversaries can launch deductive or camouflage attacks on the SWM infrastructure for their own benefit and to the detriment of the water distribution utility. We then apply a two-tier approach of stateless and stateful detection, reducing false alarms without significantly sacrificing the attack detection rate. We validate our approach on real-world water usage data from 92 households in Alicante, Spain, across varying attack scales and strengths, and show that our method bounds both the impact of undetected attacks and the expected time between consecutive false alarms. Our results show that even for low-strength, low-scale deductive attacks, the model limits the impact of an undetected attack to only 0.2199375 pounds, and for high-strength, low-scale camouflage attacks, the impact of an undetected attack is limited to 1.434375 pounds.
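The abstract above does not spell out the detection mechanics, so the Python sketch below is only an illustration of the two-tier idea under stated assumptions: it consumes a precomputed per-interval invariant series, uses a single-sample threshold test for the stateless tier, and uses a CUSUM-style accumulator of exceedances for the stateful tier. The accumulator, the drift parameter, and all threshold values are placeholders of ours, not values from the paper.

```python
import numpy as np

def two_tier_detect(invariant_series, stateless_thr, stateful_thr, drift=0.0):
    """Two-tier check over a per-interval invariant series.

    Stateless tier: alarm as soon as one sample exceeds stateless_thr.
    Stateful tier: keep a CUSUM-style running sum of exceedances above
    `drift` and alarm only when it crosses stateful_thr, trading a small
    detection delay for fewer false alarms. Thresholds are placeholders.
    """
    cusum = 0.0
    alarms = []
    for t, value in enumerate(invariant_series):
        cusum = max(0.0, cusum + (value - drift))  # reset-at-zero accumulator
        if value > stateless_thr:
            alarms.append((t, "stateless"))
        elif cusum > stateful_thr:
            alarms.append((t, "stateful"))
    return alarms

# Hypothetical example: benign noise followed by a sustained low-level shift
# that only the stateful tier picks up.
rng = np.random.default_rng(0)
series = np.r_[rng.normal(0.0, 0.1, 50), np.full(20, 0.35)]
print(two_tier_detect(series, stateless_thr=0.8, stateful_thr=2.0, drift=0.1))
```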
                            Unsafe Events Detection in Smart Water Meter Infrastructure via Noise-Resilient Learning
                        
                    
    
Residential smart water meters (SWMs) collect real-time water consumption data, enabling automated billing and peak-period forecasting. The presence of unsafe events is typically detected via deviations from the benign profile of water usage. However, profiling benign behavior is non-trivial for large-scale SWM networks because, once deployed, the collected data already contain those events, biasing the benign profile. To address this challenge, we propose a real-time data-driven unsafe event detection framework for city-scale SWM networks that automatically learns the profile of benign water usage. Specifically, we first propose an optimal clustering of SWMs based on similarity in residential water usage to divide the SWM network infrastructure into clusters. Then we propose a mathematical invariant based on the absolute difference between two generalized means, one of positive and the other of negative order. Next, we propose a threshold learning approach, based on a modified Hampel loss function, that learns robust detection thresholds even in the presence of unsafe events. Finally, we validate our proposed framework using a dataset of 1,099 SWMs over 2.5 years. Results show that our model detects unsafe events in the test set, even while learning from training data containing unlabeled unsafe events.
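The abstract pins down the invariant as the absolute difference between two generalized (power) means of opposite order. As a minimal sketch, the Python snippet below computes that quantity for one cluster's readings in a time slot; the order p = 2 is an arbitrary illustrative choice, and the paper's clustering step and modified-Hampel-loss threshold learning are not reproduced here.

```python
import numpy as np

def generalized_mean(x, p):
    """Power (generalized) mean of order p for positive samples x."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

def cluster_invariant(readings, p=2.0):
    """Invariant from the abstract: |M_p - M_{-p}| over one cluster's
    readings in a time slot, with one positive- and one negative-order
    mean. The order p = 2 is illustrative only."""
    x = np.asarray(readings, dtype=float)
    x = x[x > 0]  # the negative-order mean requires strictly positive data
    return abs(generalized_mean(x, p) - generalized_mean(x, -p))

# By the power-mean inequality the gap is zero when all readings agree and
# grows with their spread, so a few inconsistent (e.g., falsified) meters in
# an otherwise consistent cluster push the invariant away from its usual range.
print(cluster_invariant([1.0, 1.1, 0.9, 1.05]))  # small gap
print(cluster_invariant([1.0, 1.1, 0.9, 6.0]))   # noticeably larger gap
```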
- Award ID(s): 2030611
- PAR ID: 10541789
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-6927-4
- Page Range / eLocation ID: 259 to 270
- Format(s): Medium: X
- Location: Hong Kong, Hong Kong
- Sponsoring Org: National Science Foundation
More Like this
- Modern smart cities are focusing on smart transportation solutions to detect and mitigate the effects of various traffic incidents in the city. To materialize this, roadside units and ambient transportation sensors are being deployed to collect vehicular data that provides real-time traffic monitoring. In this paper, we first propose a real-time data-driven anomaly-based traffic incident detection framework for a city-scale smart transportation system. Specifically, we propose an incremental region growing approximation algorithm for optimal spatio-temporal clustering of road segments and their data, such that road segments are strategically divided into highly correlated clusters. These highly correlated clusters enable identifying a Pythagorean mean-based invariant as an anomaly detection metric that is highly stable when there are no incidents but deviates in the presence of incidents. We learn the bounds of the invariants in a robust manner so that anomaly detection generalizes to unseen events, even when learning from real, noisy data. We perform extensive experimental validation using mobility data collected from the City of Nashville, Tennessee, and show that the method can detect incidents within each cluster in real time.
- Modern smart cities need smart transportation solutions to quickly detect various traffic emergencies and incidents in the city and avoid cascading traffic disruptions. To materialize this, roadside units and ambient transportation sensors are being deployed to collect speed data that enables the monitoring of traffic conditions on each road segment. In this paper, we first propose a scalable data-driven anomaly-based traffic incident detection framework for a city-scale smart transportation system. Specifically, we propose an incremental region growing approximation algorithm for optimal spatio-temporal clustering of road segments and their data, such that road segments are strategically divided into highly correlated clusters. These highly correlated clusters enable identifying a Pythagorean mean-based invariant as an anomaly detection metric that is highly stable when there are no incidents but deviates in the presence of incidents. We learn the bounds of the invariants in a robust manner so that anomaly detection generalizes to unseen events, even when learning from real, noisy data. Second, using cluster-level detection, we propose a folded Gaussian classifier to pinpoint, in an automated manner, the particular segment in a cluster where the incident occurred (a rough sketch of folded-Gaussian scoring appears after this list). We perform extensive experimental validation using mobility data collected from four cities in Tennessee and compare with state-of-the-art ML methods, showing that our method detects incidents within each cluster in real time and outperforms known ML methods.
- Recent years have seen increasing attention to and popularity of federated learning (FL), a distributed learning framework for privacy and data security. However, by its fundamental design, federated learning is inherently vulnerable to model poisoning attacks: a malicious client may submit local updates to influence the weights of the global model. Therefore, detecting malicious clients that mount model poisoning attacks on federated learning is valuable in safety-critical tasks. However, existing methods either fail to analyze potential malicious data or are computationally restrictive. To overcome these weaknesses, we propose a robust federated learning method in which the central server learns a supervised anomaly detector using adversarial data generated from a variety of state-of-the-art poisoning attacks. The key idea of this anomaly detector lies in a comprehensive understanding of the benign update, distinguishing it from the diverse malicious ones. The anomaly detector is then leveraged in the federated learning process to automate the removal of malicious updates (even from unforeseen attacks). Through extensive experiments, we demonstrate its effectiveness against backdoor attacks, where the attackers inject adversarial triggers such that the global model makes incorrect predictions on the poisoned samples. We have verified that our method achieves 99.0% detection AUC scores while enjoying longevity as the model converges. Our method also shows significant advantages over existing robust federated learning methods in all settings. Furthermore, our method can easily be generalized to incorporate newly developed poisoning attacks, accommodating ever-changing adversarial learning environments.
- Learning from Demonstration (LfD) is a powerful method for non-roboticist end-users to teach robots new tasks, enabling them to customize robot behavior. However, modern LfD techniques do not explicitly synthesize safe robot behavior, which limits the deployability of these approaches in the real world. To enforce safety in LfD without relying on experts, we propose a new framework, ShiElding with Control barrier fUnctions in inverse REinforcement learning (SECURE), which learns a customized Control Barrier Function (CBF) from end-users that prevents robots from taking unsafe actions while imposing little interference with task completion. We evaluate SECURE in three sets of experiments. First, we empirically validate that SECURE learns a high-quality CBF from demonstrations and outperforms conventional LfD methods on simulated robotic and autonomous driving tasks, improving safety by up to 100%. Second, we demonstrate that roboticists can leverage SECURE to outperform conventional LfD approaches on a real-world knife-cutting, meal-preparation task by 12.5% in task completion while driving the number of safety violations to zero. Finally, we demonstrate in a user study that non-roboticists can use SECURE to effectively teach the robot safe policies that avoid collisions with the person and prevent coffee from spilling.
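The second traffic-incident abstract above names a folded Gaussian classifier for pinpointing the affected road segment but does not specify it. The sketch below is one plausible reading under our own assumptions: fit a folded Gaussian (the distribution of |X| for X ~ N(mu, sigma^2)) to each segment's benign absolute speed deviations, then report the segment whose current deviation is least likely under its own profile. The fitting step, the dictionary-based interface, and the single arg-min rule are all hypothetical.

```python
import numpy as np

def folded_gaussian_pdf(y, mu, sigma):
    """Density of |X| with X ~ N(mu, sigma^2), valid for y >= 0."""
    coef = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    return coef * (np.exp(-(y - mu) ** 2 / (2.0 * sigma ** 2))
                   + np.exp(-(y + mu) ** 2 / (2.0 * sigma ** 2)))

def localize_segment(current_abs_dev, benign_params):
    """Return the road segment whose current absolute speed deviation is
    least likely under its benign folded-Gaussian profile.

    current_abs_dev: {segment_id: |deviation| observed now}
    benign_params:   {segment_id: (mu, sigma) fitted on benign data}
    """
    likelihood = {seg: folded_gaussian_pdf(dev, *benign_params[seg])
                  for seg, dev in current_abs_dev.items()}
    return min(likelihood, key=likelihood.get)

# Hypothetical example with three segments in one flagged cluster:
params = {"seg_a": (0.5, 0.2), "seg_b": (0.4, 0.15), "seg_c": (0.6, 0.25)}
now = {"seg_a": 0.55, "seg_b": 1.8, "seg_c": 0.7}
print(localize_segment(now, params))  # -> "seg_b"
```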