A new image steganography method is proposed for data hiding. This method uses least significant bit (LSB) insertion to hide a message within one of the facial features of a given image. The proposed technique chooses an image of a face from a dataset of 8-bit color images of head poses and performs facial recognition on the image to extract the Cartesian coordinates of the eyes, mouth, and nose. A facial feature is chosen at random, and each bit of the binary representation of the message is hidden in the least significant bit of the pixels of the chosen facial feature.
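As a minimal sketch of the embedding step, assuming the feature's bounding box has already been obtained from the facial-recognition stage, the LSB insertion could look as follows (the `region` slices, function names, and usage are illustrative, not taken from the paper):

```python
import numpy as np

def embed_lsb(image, message, region):
    """Hide message bits in the least significant bits of the pixels
    inside `region`, a (row_slice, col_slice) bounding one facial
    feature (eyes, mouth, or nose in the paper's scheme)."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    stego = image.copy()
    patch = stego[region].copy()            # pixels of the chosen feature
    flat = patch.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("message too long for the chosen feature")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    stego[region] = flat.reshape(patch.shape)
    return stego

def extract_lsb(stego, region, n_bytes):
    """Read the message back from the same feature region."""
    bits = stego[region].reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Hypothetical usage; in the actual method the eye bounding box would
# come from the facial-recognition step.
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
eye = (slice(40, 60), slice(30, 80))
assert extract_lsb(embed_lsb(img, "hello", eye), eye, 5) == "hello"
```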
Creating Simple Adversarial Examples for Speech Recognition Deep Neural Networks
The use of deep neural networks for speech recognition and recognizing speech commands continues to grow. This necessitates an understanding of the security risks that go along with this technology. This paper analyzes the ability to interfere with the performance of neural networks for speech pattern recognition. With the methods proposed herein, it is a simple matter to create adversarial data by overlaying the audio of a command at a fairly unnoticeable amplitude. This causes the neural network to lose around 20% accuracy and to misidentify commands as other commands with average to high confidence. Such an attack is virtually undetectable to the human ear.
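A minimal sketch of the overlay idea, assuming two mono waveforms at the same sample rate; `alpha` is an assumed scaling factor, since the paper only characterizes the overlay as "fairly unnoticeable":

```python
import numpy as np

def overlay_command(benign, command, alpha=0.1):
    """Mix a spoken-command waveform into benign audio at a low
    amplitude so the addition is hard for a listener to notice."""
    n = min(len(benign), len(command))
    adv = benign.astype(np.float32).copy()
    adv[:n] += alpha * command[:n].astype(np.float32)
    return np.clip(adv, -1.0, 1.0)   # stay within float-PCM range

# Hypothetical usage with synthetic stand-in waveforms:
benign = np.random.uniform(-0.5, 0.5, 16000).astype(np.float32)
command = np.random.uniform(-0.5, 0.5, 16000).astype(np.float32)
adversarial = overlay_command(benign, command)
```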
- Award ID(s):
- 1757659
- PAR ID:
- 10156514
- Date Published:
- Journal Name:
- Proceedings of the 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW)
- Page Range / eLocation ID:
- 58 to 62
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
When a cyberattack occurs, tracking the attack back to an individual person can be problematic. Even identifying the workstation or device that it originated from does not necessarily identify the attacker, as the attacking device could, itself, be compromised. A system to determine whether activities originate from the same user or not facilitates forensic analysis as well as the detection of concurrent attacks from different devices by the same user. This paper proposes a system for identifying attackers based on behaviors expressed via their use of the Bash command line interface, the most common shell on Linux distributions. Prior systems were limited by issues such as requiring labeled user data, which is difficult to acquire, or not being specific enough to monitor individual persons. The approach proposed herein does not require labeled data and is specific enough to target individual users. The proposed system analyzes the level of variance between commands used and calculates an anomaly score for each given command. It uses these anomaly scores to compare Bash history sets and determine whether they were created by the same user.
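The abstract does not give the exact scoring formula, so the sketch below substitutes a simple rarity-based anomaly score and a profile distance as stand-ins; the scheme and names are illustrative only:

```python
from collections import Counter
import math

def anomaly_scores(history):
    """Map each command in a Bash history to an anomaly score based on
    its rarity (a stand-in for the paper's variance-based scoring)."""
    counts = Counter(history)
    total = len(history)
    return {cmd: -math.log(count / total) for cmd, count in counts.items()}

def profile_distance(hist_a, hist_b):
    """Compare two history sets: a small distance between their
    per-command anomaly profiles suggests the same author."""
    a, b = anomaly_scores(hist_a), anomaly_scores(hist_b)
    unseen = max(list(a.values()) + list(b.values()))  # penalty for absent commands
    cmds = set(a) | set(b)
    return sum(abs(a.get(c, unseen) - b.get(c, unseen)) for c in cmds) / len(cmds)

# Hypothetical usage on two command sequences:
h1 = ["ls", "cd", "git status", "ls", "vim", "git commit"]
h2 = ["ls", "cd", "git status", "ls", "git push"]
print(profile_distance(h1, h2))
```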
-
An adversarial attack is an exploitative process in which minute alterations are made to natural inputs, causing the inputs to be misclassified by neural models. In the field of speech recognition, this has become an issue of increasing significance. Although adversarial attacks were originally introduced in computer vision, they have since infiltrated the realm of speech recognition. In 2017, a genetic attack was shown to be quite potent against the Speech Commands Model. Limited-vocabulary speech classifiers, such as the Speech Commands Model, are used in a variety of applications, particularly in telephony; as such, adversarial examples produced by this attack pose a major security threat. This paper explores various methods of detecting these adversarial examples with combinations of audio preprocessing. One particular combined defense incorporating compressions, speech coding, filtering, and audio panning was shown to be quite effective against the attack on the Speech Commands Model, detecting audio adversarial examples with 93.5% precision and 91.2% recall.
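A sketch of the detection idea, assuming a `classify` function that wraps the model and a list of preprocessing transforms; the two transforms below are simple stand-ins for the paper's compression, speech-coding, filtering, and panning steps:

```python
import numpy as np

def lowpass(x, k=15):
    # crude moving-average filter: one stand-in preprocessing transform
    return np.convolve(x, np.ones(k) / k, mode="same")

def pan_mix(x):
    # naive "panning" stand-in: attenuate and re-mix the waveform
    return 0.5 * x + 0.5 * np.roll(x, 1)

def looks_adversarial(audio, classify, transforms, min_agreement=0.5):
    """Flag audio whose predicted label is unstable under benign
    preprocessing; adversarial perturbations tend not to survive such
    transforms, while natural speech usually does."""
    base = classify(audio)
    agree = sum(classify(t(audio)) == base for t in transforms)
    return agree / len(transforms) < min_agreement

# `classify` would wrap the Speech Commands Model; it is assumed here.
```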
-
News articles that are written with an intent to deliberately deceive or manipulate readers are inherently problematic. These so-called 'fake news' articles are believed to have contributed to election manipulation and even resulted in severe injury and death through actions that they have triggered. Identifying intentionally deceptive and manipulative news articles and alerting human readers is key to mitigating the damage that they can produce. The dataset presented in this paper includes manually identified and classified news stories that can be used for the training and testing of classification systems that identify legitimate versus fake and manipulative news stories.
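As one hedged example of the intended use, a baseline classifier could be trained and tested on the labeled stories; the file name and column layout below are assumptions, since the abstract does not specify the dataset's format:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical layout: one row per story, columns "text" and "label"
# (1 = fake/manipulative, 0 = legitimate).
df = pd.read_csv("fake_news_dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0)

vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```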
-
Anti-drone technologies that attack the autonomous command technologies of drone clusters or swarms may need to identify the type of command system being utilized and the roles of particular UAVs within that system. This paper presents a set of algorithms to identify which swarm command method is in use and the role of particular drones within a swarm or cluster of UAVs, utilizing only passive sensing techniques (which cannot be detected by the target). A testing configuration for validating the algorithms is also discussed.
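The abstract does not describe the algorithms themselves; purely as an illustration of passive-sensing role inference, the toy heuristic below scores each UAV on how strongly its maneuvers precede the others' (a single dominant scorer would hint at a centralized, leader-follower command method):

```python
import numpy as np

def lead_scores(tracks, lag=5):
    """tracks: (n_drones, T, 2) array of passively observed positions.
    Correlate each drone's velocity with every other drone's velocity
    `lag` steps later; the drone whose motion best predicts the rest is
    a leader candidate. Illustrative only, not the paper's algorithm."""
    vel = np.diff(tracks, axis=1)                 # per-step velocities
    n = vel.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                a = vel[i, :-lag].reshape(-1)     # candidate leader, earlier
                b = vel[j, lag:].reshape(-1)      # candidate follower, later
                scores[i] += np.corrcoef(a, b)[0, 1]
    return scores / (n - 1)
```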