A vehicular communication network connects vehicles on the road through wireless links, supporting road-safety services in vehicular environments. Such networks are vulnerable to various types of attacks. Cryptographic techniques can prevent attacks such as message modification or vehicle impersonation, but they are not sufficient against insider attacks, where the attacking vehicle has already been authenticated in the network. Vehicular network safety services rely on periodic broadcasts of basic safety messages (BSMs) that contain important information about each vehicle, such as its position, speed, and received signal strength indicator (RSSI). A malicious vehicle can inject false position information into a BSM to commit a position falsification attack, one of the most dangerous insider attacks in vehicular networks: false position information propagated through the network can lead to traffic jams or accidents. A misbehavior detection system (MDS) is an efficient way to detect such attacks and mitigate their impact, but existing MDSs require a large number of features, which increases the computational complexity of detection. In this paper, we propose a novel grid-based misbehavior detection system that utilizes the position information from the BSMs. Our model is evaluated on a publicly available dataset using five supervised classification algorithms. It performs multi-class classification and is found to be superior to existing methods that address position falsification attacks.
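The core of a grid-based approach is to discretize BSM position claims into grid-cell indices, so a classifier sees compact cell-level features instead of many raw kinematic features. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's exact feature set; the cell size, grid width, and the specific features (current cell, previous cell, cell displacement) are assumptions for demonstration.

```python
# Hypothetical sketch of grid-based feature extraction from BSM positions.
# cell_size (metres) and grid_width (cells per row) are illustrative choices.

def grid_cell(x, y, cell_size=100.0, grid_width=50):
    """Map an (x, y) position in metres to a single flattened grid-cell index."""
    col = int(x // cell_size)
    row = int(y // cell_size)
    return row * grid_width + col

def grid_features(claimed_pos, prev_pos, cell_size=100.0, grid_width=50):
    """Build a small feature vector from two consecutive BSM position claims.

    A position falsification attack often produces an implausible jump
    between the cells of consecutive BSMs, which a supervised classifier
    can learn to flag.
    """
    cur = grid_cell(*claimed_pos, cell_size, grid_width)
    prev = grid_cell(*prev_pos, cell_size, grid_width)
    return [cur, prev, abs(cur - prev)]

# Example: two consecutive BSMs from the same vehicle, 10 m apart,
# fall in the same 100 m cell, so the cell displacement is 0.
features = grid_features((150.0, 250.0), (140.0, 240.0))
```

Vectors like `features` would then be fed to any of the supervised classifiers mentioned in the abstract.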
-
Motivated by ever-increasing concerns about personal data privacy and the rapidly growing data volume at local clients, federated learning (FL) has emerged as a new machine learning setting. An FL system comprises a central parameter server and multiple local clients. It keeps data at the local clients and learns a centralized model by sharing the model parameters learned locally. No local data needs to be shared, so privacy can be well protected. Nevertheless, since it is the model rather than the raw data that is shared, the system can be exposed to model poisoning attacks launched by malicious clients, and identifying malicious clients is challenging because no local client data is available on the server. In addition, membership inference attacks can still be performed by using the uploaded model to estimate a client's local data, leading to privacy disclosure. In this work, we first propose a model-update-based federated averaging algorithm to defend against Byzantine attacks such as additive noise attacks and sign-flipping attacks. We also present an individual client model initialization method that provides further protection against membership inference attacks by hiding the individual local machine learning model. Combining the two schemes effectively enhances both privacy and security. The proposed schemes are shown experimentally to converge under non-IID data distributions when there are no attacks, and under Byzantine attacks they perform much better than the classical model-based FedAvg algorithm.
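The key distinction from classical FedAvg is that the server aggregates client *updates* (deltas from the current global model) rather than the raw local models. The sketch below illustrates that idea under one common hardening step, norm-clipping each update before averaging, which bounds the influence of additive-noise or sign-flipped uploads; the clipping step is an assumption for illustration, not necessarily the paper's exact defense.

```python
import numpy as np

def update_based_fedavg(global_w, client_ws, clip=1.0):
    """Aggregate client models as norm-clipped updates relative to global_w.

    global_w:   current global parameter vector (np.ndarray)
    client_ws:  list of client parameter vectors after local training
    clip:       illustrative bound on each update's L2 norm
    """
    clipped = []
    for w in client_ws:
        u = w - global_w          # model update, not the raw model
        n = np.linalg.norm(u)
        if n > clip:              # bound a Byzantine client's influence
            u = u * (clip / n)
        clipped.append(u)
    return global_w + np.mean(clipped, axis=0)

# Two honest clients take small steps; one Byzantine client uploads a
# huge (e.g., noise-corrupted) model. Clipping keeps the new global
# model close to the honest consensus.
g = np.zeros(2)
clients = [np.array([0.1, 0.0]), np.array([0.0, 0.1]), np.array([100.0, 100.0])]
new_g = update_based_fedavg(g, clients, clip=1.0)
```

Averaging the raw models instead would let the single malicious upload dominate the round, which is the failure mode the update-based scheme is designed to avoid.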