Title: Image Authentication Using Self-Supervised Learning to Detect Manipulation Over Social Network Platforms
Social media now has a direct impact on people's daily lives, as many edge devices are at our disposal and controlled by our fingertips. With such advances in communication technology comes a rapid increase in disinformation of many kinds and shapes; fake images are one of the primary examples of misinformation media that can affect many users. Such activity can severely impact public behavior, attitudes, and beliefs, or sway viewers' perception in any malicious or benign direction. Mitigating such disinformation over the Internet is an issue of increasing interest across many sectors of society, and effective authentication for detecting manipulated images has become extremely important. Perceptual hashing (pHash) is one of the effective techniques for detecting image manipulation. This paper develops a new and robust pHash authentication approach to detect fake imagery on social media networks, choosing Facebook and Twitter as case studies. Our proposed pHash utilizes a self-supervised learning framework and a contrastive loss. In addition, we develop a fake-image sample generator in the pre-processing stage to cover the three best-known image attacks (copy-move, splicing, and removal). The proposed authentication technique outperforms state-of-the-art pHash methods on the SMPI dataset and other similar datasets that target one or more image attack types.
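The abstract specifies a self-supervised framework with a contrastive loss but not its exact form. A minimal PyTorch sketch of a generic pairwise contrastive loss over hash embeddings follows (the function name, margin, and labeling convention are illustrative assumptions, not the paper's formulation):

import torch
import torch.nn.functional as F

def contrastive_loss(h_a: torch.Tensor, h_b: torch.Tensor,
                     same: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # Generic pairwise contrastive loss over hash embeddings (illustrative):
    # pull an image and its benign copy together; push an image and a
    # manipulated version (copy-move, splicing, removal) at least `margin`
    # apart. `same` is a float tensor of 1s (authentic pair) and 0s (tampered).
    d = F.pairwise_distance(h_a, h_b)               # Euclidean distance per pair
    pos = same * d.pow(2)                           # authentic pairs: shrink d
    neg = (1.0 - same) * F.relu(margin - d).pow(2)  # tampered: enforce margin
    return (pos + neg).mean()

At authentication time, the hash of a received image would be compared against the stored reference hash, with a small distance indicating an authentic copy and a large one indicating manipulation.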
Award ID(s):
1915780 2325452
PAR ID:
10487297
Publisher / Repository:
IEEE
Date Published:
Journal Name:
MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM)
ISBN:
978-1-6654-8534-0
Page Range / eLocation ID:
672 to 678
Format(s):
Medium: X
Location:
Rockville, MD, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Wysocki, Bryant T.; Holt, James; Blowers, Misty (Ed.)
    Ever since human society entered the age of social media, every user has had a considerable amount of visual content stored online and shared across various virtual communities. Because images circulate information so efficiently, disastrous consequences are possible if their contents are tampered with by malicious actors. Specifically, we are witnessing the rapid development of machine learning (ML) based tools like DeepFake apps, which can exploit images on social media platforms to mimic a potential victim without their knowledge or consent. These content manipulation attacks can lead to the rapid spread of misinformation that may not only mislead friends or family members but also cause chaos in public domains. Therefore, robust image authentication is critical to detect and filter out manipulated images. In this paper, we introduce a system that accurately AUthenticates SOcial MEdia images (AUSOME) uploaded to online platforms, leveraging spectral analysis and ML. Images from DALL-E 2 are compared with genuine images from the Stanford image dataset. The Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) are used to perform a spectral comparison. Additionally, based on the differences in their frequency responses, an ML model is proposed to classify social media images as genuine or AI-generated. Using real-world scenarios, the AUSOME system is evaluated on its detection accuracy. The experimental results are encouraging, and they verify the potential of the AUSOME scheme for social media image authentication.
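    The spectral step can be illustrated with a minimal sketch: an azimuthally averaged log-magnitude DFT spectrum as a feature vector for a downstream classifier. The function name and bin count are illustrative assumptions, not AUSOME's exact pipeline (which also uses the DCT):

    import numpy as np

    def radial_spectrum(img_gray: np.ndarray, n_bins: int = 64) -> np.ndarray:
        # Azimuthally averaged log-magnitude DFT spectrum; AI-generated images
        # often deviate from camera images in the high-frequency bins.
        f = np.fft.fftshift(np.fft.fft2(img_gray))
        mag = np.log1p(np.abs(f))
        h, w = mag.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2.0, x - w / 2.0)
        idx = np.clip((r / r.max() * n_bins).astype(int), 0, n_bins - 1)
        sums = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=n_bins)
        counts = np.bincount(idx.ravel(), minlength=n_bins)
        return sums / np.maximum(counts, 1)          # per-bin mean energy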
  2. Budak, Ceren; Cha, Meeyoung; Quercia, Daniele; Xie, Lexing (Ed.)
    Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussion on online communities. First, we develop a computational pipeline geared to detect fauxtography and identify over 61k instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects the engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation against disinformation needs to take images into account, and highlight a number of challenges in dealing with image-based disinformation.
  3. Deepfakes are synthetic/fake images or videos generated using deep neural networks. As deepfake generation techniques improve, threats including social media disinformation, defamation, impersonation, and fraud are becoming more prevalent. Existing deepfake detection models, including those that use convolutional neural networks, do not generalize well when subjected to multiple deepfake generation techniques and cross-corpora settings. Therefore, there is a need for effective and efficient deepfake detection methods. To explicitly model part-whole hierarchical relationships by using groups of neurons to encode visual entities and learn the relationships between real and fake artifacts, we propose a novel deep learning model, the efficient-capsule network (E-Cap Net), for classifying facial images generated through different deepfake generation techniques. More specifically, we introduce a low-cost max-feature-map (MFM) activation function in each primary capsule of our proposed E-Cap Net. The MFM activation makes our E-Cap Net light and robust, as it suppresses the low-activation neurons in each primary capsule. The performance of our approach is evaluated on two standard, large-scale, and diverse datasets, i.e., the Diverse Fake Face Dataset (DFFD) and FaceForensics++ (FF++), as well as on the World Leaders Dataset (WLRD). Moreover, we performed a cross-corpora evaluation to show the generalizability of our method for reliable deepfake detection. AUCs of 99.99% on DFFD, 99.52% on FF++, and 98.31% on WLRD indicate the effectiveness of our method for detecting manipulated facial images generated via different deepfake techniques.
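    The MFM activation is simple to state. A minimal PyTorch sketch, assuming the standard split-channels-and-take-max formulation (the class name is an illustrative assumption):

    import torch
    import torch.nn as nn

    class MaxFeatureMap(nn.Module):
        # Max-Feature-Map (MFM): split the channel dimension in half and keep
        # the element-wise maximum. This halves the channel count and
        # suppresses weakly activated neurons, the property E-Cap Net uses
        # inside each primary capsule.
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            a, b = torch.chunk(x, 2, dim=1)   # requires an even channel count
            return torch.max(a, b)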
  4. With rich visual data, such as images, becoming readily associated with items, visually-aware recommendation systems (VARS) have been widely used in different applications. Recent studies have shown that VARS are vulnerable to item-image adversarial attacks, which add human-imperceptible perturbations to the clean images associated with those items. Attacks on VARS pose new security challenges to a wide range of applications, such as e-commerce and social media, where VARS are widely used. How to secure VARS from such adversarial attacks becomes a critical problem. Currently, there is still a lack of systematic studies on how to design defense strategies against visual attacks on VARS. In this article, we attempt to fill this gap by proposing an adversarial image denoising and detection framework to secure VARS. Our proposed method can simultaneously (1) secure VARS from adversarial attacks characterized by local perturbations through image denoising based on global vision transformers; and (2) accurately detect adversarial examples using a novel contrastive learning approach. Meanwhile, our framework is designed to be used as both a filter and a detector so that they can be jointly trained to improve the flexibility of our defense strategy against a variety of attacks and VARS models. Our approach is uniquely tailored for VARS, addressing the distinct challenges in scenarios where adversarial attacks can differ across industries, for instance, causing misclassification in e-commerce or misrepresentation in real estate. We have conducted extensive experimental studies with two popular attack methods (FGSM and PGD). Our experimental results on two real-world datasets show that our defense strategy against visual attacks is effective and outperforms existing methods on different attacks. Moreover, our method demonstrates high accuracy in detecting adversarial examples, complementing its robustness across various types of adversarial attacks.
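    Of the two evaluated attacks, FGSM is the simpler. A minimal PyTorch sketch, assuming a cross-entropy loss and pixel values in [0, 1] (the function name and epsilon are illustrative assumptions):

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    eps: float = 8.0 / 255.0) -> torch.Tensor:
        # One-step FGSM: move each pixel by eps in the direction of the sign
        # of the loss gradient. PGD iterates this step with projection.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()        # imperceptible for small eps
        return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range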
  5.
    Blockchain technology has recently gained great popularity in data security, primarily to mitigate data breaches and manipulation. Since its inception in 2008, it has been applied in different areas, mainly to maintain data integrity and consistency. Blockchain is well suited to securing data because of its immutability and distributed architecture. Despite its high success rate in data security, the inability to identify compromised insider nodes is one of the significant problems encountered in blockchain architectures. A blockchain network is made up of nodes that initiate, verify, and validate transactions. If compromised, these nodes can manipulate submitted transactions, inject fake transactions, or retrieve unauthorized information that might eventually compromise the stored data's integrity and consistency. This paper proposes a novel method of detecting these compromised blockchain nodes using a server-side authentication process that thwarts their activities before they are recorded in the blockchain ledger. In evaluating the proposed system, we perform four common insider attacks, which fall under the following three categories: (1) attacks targeting the blockchain to bring it down; (2) attacks that attempt to inject fake data into the database; and (3) attacks that attempt to hijack or retrieve unauthorized data. We describe how we implemented the attacks and how our architecture detects them before they impact the network. Finally, we report the attack detection time for each attack and compare our approach with other existing methods.
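    The hash-linkage property such a check can exploit is easy to sketch. A minimal example, assuming a dictionary-based block schema with a prev_hash field (the schema and function names are illustrative assumptions, not the paper's implementation):

    import hashlib
    import json

    def block_hash(block: dict) -> str:
        # Deterministic digest of a JSON-serializable block.
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def chain_is_consistent(chain: list) -> bool:
        # Server-side check: if a node altered or injected a transaction, the
        # stored prev_hash links no longer match the recomputed digests, so
        # the update can be rejected before it reaches the ledger.
        return all(curr["prev_hash"] == block_hash(prev)
                   for prev, curr in zip(chain, chain[1:]))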