Search for: All records

Creators/Authors contains: "Alghamdi, Abdulmajeed"


  1. Despite encryption, packet sizes remain visible, enabling observers to infer private information in the Internet of Things (IoT) environment (e.g., IoT device identification). Packet padding obfuscates packet-length characteristics, but at a high data overhead because it relies on adding noise to the data. This paper proposes a more data-efficient approach that randomizes packet sizes without adding noise: we split large TCP segments into random-sized chunks, so the packet-length distribution is obfuscated without any added noise data. Our client–server implementation using TCP sockets demonstrates the feasibility of our approach at the application level. We realize our packet-size control by adjusting two local socket-programming parameters (see the sketch after this item). First, we enable the TCP_NODELAY option so that each packet is sent out with our specified length. Second, we downsize the sending buffer to prevent the sender from pushing out more data than can be received, which would otherwise defeat our control of the packet sizes. We simulate our defense on a network trace of four IoT devices and show a reduction in device classification accuracy from 98% to 63%, close to random guessing. Meanwhile, real-world data transmission experiments show that the added latency is reasonable, less than 21%, while the added packet-header overhead is only about 5%.

    Free, publicly-accessible full text available September 1, 2024
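
    The two socket parameters described in item 1 above can be illustrated in a few lines. The sketch below is a minimal client-side illustration, not the paper's implementation; the chunk-size bounds and the server address are assumptions.

    ```python
    # Minimal sketch (not the paper's code): send a payload as random-sized
    # TCP segments. TCP_NODELAY disables Nagle's algorithm so each send goes
    # out as its own packet; a small SO_SNDBUF keeps the kernel from queueing
    # (and coalescing) more data than the receiver can drain.
    import random
    import socket

    MIN_CHUNK, MAX_CHUNK = 64, 1200  # assumed bounds; tune to the path MTU

    def send_obfuscated(sock: socket.socket, payload: bytes) -> None:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, MAX_CHUNK)
        offset = 0
        while offset < len(payload):
            size = random.randint(MIN_CHUNK, MAX_CHUNK)
            sock.sendall(payload[offset:offset + size])
            offset += size

    if __name__ == "__main__":
        # Hypothetical server address for demonstration.
        with socket.create_connection(("127.0.0.1", 9000)) as s:
            send_obfuscated(s, b"x" * 100_000)
    ```

    The kernel may round the send-buffer size up, and segmentation-offload features can still merge segments on the wire, so a sketch like this only approximates the packet-size control the paper evaluates on real transmissions.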
  2. Deep learning models have been used to build many effective image classification applications. However, they are vulnerable to adversarial attacks that seek to mislead the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit the neural network structures in their designs. This understanding led us to the hypothesis that most classical machine learning models, such as random forest (RF), are immune to adversarial attack models because they do not rely on neural network designs at all. Our experimental study of classical machine learning models against popular adversarial attacks supports this hypothesis. Based on it, we propose a new adversarial-aware deep learning system that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. Although the secondary classical model is less accurate, it is used only for verification, so it does not affect the output accuracy of the primary deep learning model; at the same time, a clear mismatch between the two models' predictions effectively signals an adversarial attack (a minimal sketch of this scheme follows this item). Our experiments on the CIFAR-100 dataset show that our proposed approach outperforms current state-of-the-art adversarial defense systems.
    Free, publicly-accessible full text available July 1, 2024
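
    As a rough illustration of the verification idea in item 2, the sketch below pairs a small neural network (primary) with a random forest (secondary) and flags any input on which the two disagree. The dataset and model choices are assumptions for demonstration; the paper's system uses a deep learning model on CIFAR-100.

    ```python
    # Minimal sketch of adversarial-aware verification (assumed setup, not
    # the paper's pipeline): the neural net is the primary classifier, the
    # random forest is the secondary verifier, and a prediction mismatch is
    # treated as a possible adversarial input.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    primary = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                            random_state=0).fit(X_train, y_train)
    secondary = RandomForestClassifier(n_estimators=100,
                                       random_state=0).fit(X_train, y_train)

    def classify_with_verification(x):
        """Return (label, suspicious): the primary model's label, plus a
        flag set when the secondary verifier disagrees with it."""
        p = primary.predict(x.reshape(1, -1))[0]
        s = secondary.predict(x.reshape(1, -1))[0]
        return p, p != s

    label, suspicious = classify_with_verification(X_test[0])
    print(f"primary label: {label}, flagged as adversarial: {suspicious}")
    ```

    The mismatch rule is deliberately simple here; a fuller system would also weigh the two models' confidences before rejecting an input.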
  3. Social media now has a direct impact on people's daily lives, as many edge devices are at our disposal and controlled by our fingertips. With this advancement in communication technology comes a rapid increase of disinformation in many kinds and shapes; fake images are a primary example of misinformation media that can affect many users. Such activity can severely impact public behavior, attitude, and belief, or sway viewers' perception in any malicious or benign direction. Mitigating such disinformation on the Internet is drawing increasing interest across society, and effective authentication for detecting manipulated images has become extremely important. Perceptual hashing (pHash) is one of the effective techniques for detecting image manipulations (a baseline pHash sketch follows this item). This paper develops a new, robust pHash authentication approach to detect fake imagery on social media networks, choosing Facebook and Twitter as case studies. Our proposed pHash utilizes a self-supervised learning framework and contrastive loss. In addition, we develop a fake-image sample generator in the pre-processing stage to cover the three best-known image attacks (copy-move, splicing, and removal). The proposed authentication technique outperforms state-of-the-art pHash methods on the SMPI dataset and other similar datasets that target one or more of these attack types.
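
    For context on item 3, the sketch below shows a classic, non-learned perceptual hash (the average hash) and how a Hamming-distance comparison flags a manipulated copy. This is a baseline illustration only; the paper's pHash is instead learned with a self-supervised contrastive objective. The file names and the distance threshold are assumptions.

    ```python
    # Classic average-hash baseline (illustration only; the paper learns its
    # pHash via self-supervised contrastive training). The image is shrunk,
    # grayscaled, and thresholded against its mean to form a 64-bit hash; a
    # small Hamming distance means the two images are perceptually similar.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Hypothetical files: an original and a suspected forgery (e.g., a
    # copy-move, splicing, or removal edit of the original).
    h_original = average_hash("original.jpg")
    h_suspect = average_hash("suspect.jpg")
    THRESHOLD = 10  # assumed cutoff; would be tuned on a labeled dataset
    close = hamming(h_original, h_suspect) <= THRESHOLD
    print("match" if close else "possible manipulation")
    ```

    A learned pHash like the paper's replaces this fixed resize-and-threshold transform with an embedding trained so that benign re-encodings hash close together while the three attack types hash far apart.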