
Search results: All records where Creators/Authors contains "Traynor, Patrick"


  1. Stalkerware is a form of malware that allows for the abusive monitoring of intimate partners. Primarily deployed on information-rich mobile platforms, these malicious applications allow for collecting information about a victim’s actions and behaviors, including location data, call audio, text messages, photos, and other personal details. While stalkerware has received increased attention from the security community, the ways in which stalkerware authors monetize their efforts have not been explored in depth. This paper represents the first large-scale technical analysis of monetization within the stalkerware ecosystem. We analyze the code base of 6,432 applications collected by the Coalition Against Stalkerware to determine their monetization strategies. We find that while far fewer stalkerware apps use ad libraries than normal apps, 99% of those that do use Google AdMob. We also find that payment services range from traditional in-app billing to cryptocurrency. Finally, we demonstrate that Google’s recent change to their Terms of Service (ToS) did not eliminate these applications, but instead caused a shift to other payment processors, while the apps can still be found on the Play Store; we verify through emulation that these apps often operate in blatant contravention of the ToS. Through this analysis, we find that the heterogeneity of markets and payment processors means that while point solutions can have an impact on monetization, a multi-pronged solution involving multiple stakeholders is necessary to mitigate the financial incentive for developing stalkerware. (An illustrative sketch of the kind of SDK-prefix scan involved in such an analysis appears after this list.)
    Free, publicly-accessible full text available October 1, 2023
  2. The targeted transferability of adversarial samples enables attackers to exploit black-box models in the real world. The most popular method to produce these adversarial samples is optimization attacks, which have been shown to achieve a high level of transferability in some domains. However, recent research has demonstrated that these attack samples fail to transfer when applied to Automatic Speech Recognition Systems (ASRs). In this paper, we investigate the factors preventing this transferability via exhaustive experimentation. To do so, we perform an ablation study on each stage of the ASR pipeline. We discover and quantify six factors (i.e., input type, MFCC, RNN, output type, and vocabulary and sequence sizes) that impact the targeted transferability of optimization attacks against ASRs. Future research can leverage our findings to build ASRs that are more robust to other transferable attack types (e.g., signal processing attacks), or to modify architectures in other domains to reduce their exposure to the targeted transferability of optimization attacks.
  3. Localization is one form of cooperative spectrum sensing that lets multiple sensors work together to estimate the location of a target transmitter. However, the requisite exchange of spectrum measurements leads to exposure of the physical location of participating sensors. Furthermore, in some cases, a compromised participant can reveal the sensitive characteristics of all participants. Accordingly, a lack of sufficient guarantees about data handling discourages such devices from working together. In this paper, we provide the missing data protections by processing spectrum measurements within attestable containers or enclaves. Enclaves provide runtime memory integrity and confidentiality using hardware extensions and have been used to secure various applications [1]–[8]. We use these enclave features as building blocks for new privacy-preserving particle filter protocols that minimize disruption of the spectrum sensing ecosystem. We then instantiate this enclave using ARM TrustZone and Intel SGX, and we show that enclave-based particle filter protocols incur minimal overhead (adding 16 milliseconds of processing to the measurement processing function when using SGX versus unprotected computation) and can be deployed on resource-constrained platforms that support TrustZone (incurring only a 1.01x increase in processing time when doubling the particle count from 10,000 to 20,000), whereas cryptographically-based approaches suffer from multiple orders of magnitude higher costs. We effectively deploy enclaves in a distributed environment, dramatically improving current data handling techniques. To the best of our knowledge, this is the first work to demonstrate privacy-preserving localization in a multi-party environment with reasonable overhead. (A minimal particle-filter localization sketch appears after this list.)
  4. Speech and speaker recognition systems are employed in a variety of applications, from personal assistants to telephony surveillance and biometric authentication. The wide deployment of these systems has been made possible by the improved accuracy in neural networks. Like other systems based on neural networks, recent research has demonstrated that speech and speaker recognition systems are vulnerable to attacks using manipulated inputs. However, as we demonstrate in this paper, the end-to-end architecture of speech and speaker systems and the nature of their inputs make attacks and defenses against them substantially different than those in the image space. We demonstrate this first by systematizing existing research in this space and providing a taxonomy through which the community can evaluate future work. We then demonstrate experimentally that attacks against these models almost universally fail to transfer. In so doing, we argue that substantial additional work is required to provide adequate mitigations in this space.
  5. Automatic speech recognition and voice identification systems are being deployed in a wide array of applications, from providing control mechanisms to devices lacking traditional interfaces, to the automatic transcription of conversations and authentication of users. Many of these applications have significant security and privacy considerations. We develop attacks that force mistranscription and misidentification in state-of-the-art systems, with minimal impact on human comprehension. Processing pipelines for modern systems are composed of signal preprocessing and feature extraction steps, whose output is fed to a machine-learned model. Prior work has focused on the models, using white-box knowledge to tailor model-specific attacks. We focus on the pipeline stages before the models, which (unlike the models) are quite similar across systems. As such, our attacks are black-box, transferable, can be tuned to require zero queries to the target, and demonstrably achieve mistranscription and misidentification rates as high as 100% by modifying only a few frames of audio. We perform a study via Amazon Mechanical Turk demonstrating that there is no statistically significant difference between human perception of regular and perturbed audio. Our findings suggest that models may learn aspects of speech that are generally not perceived by human subjects, but that are crucial for model accuracy. (A sketch of the kind of shared feature-extraction front end these attacks target appears after this list.)
  6. Automatic Speech Recognition (ASR) systems convert speech into text and can be placed into two broad categories: traditional and fully end-to-end. Both types have been shown to be vulnerable to adversarial audio examples that sound benign to the human ear but force the ASR to produce malicious transcriptions. Of these attacks, only the "psychoacoustic" attacks can create examples with relatively imperceptible perturbations, as they leverage knowledge of the human auditory system. Unfortunately, existing psychoacoustic attacks can only be applied against traditional models and are obsolete against the newer, fully end-to-end ASRs. In this paper, we propose an equalization-based psychoacoustic attack that can exploit both traditional and fully end-to-end ASRs. We successfully demonstrate our attack against real-world ASRs that include DeepSpeech and Wav2Letter. Moreover, we employ a user study to verify that our method introduces little audible distortion. Specifically, 80 of the 100 participants rated all of our attack audio samples as less noisy than the existing state-of-the-art attack. Through this, we demonstrate that both types of existing ASR pipelines can be exploited with minimal degradation to attack audio quality. (A simplified illustration of frequency-domain perturbation shaping appears after this list.)
  7. With close to native performance, Linux containers are becoming the de facto platform for cloud computing. While various solutions have been proposed to secure applications and containers in the cloud environment by leveraging Intel SGX, most cloud operators do not yet offer SGX as a service. This is likely due to a number of security, scalability, and usability concerns coming from both cloud providers and users. Cloud operators worry about the security guarantees of unofficial SDKs, limited support for remote attestation within containers, limited physical memory for the Enclave Page Cache (EPC) making it difficult to support hundreds of enclaves, and potential DoS attacks against EPC by malicious users. Meanwhile, end users need to worry about careful program partitioning to reduce the TCB and adapting legacy applications to use SGX. We note that most of these concerns are the result of an incomplete infrastructure, from the OS to the application layer. We address these concerns with lxcsgx, which allows SGX applications to run inside containers while also: enabling SGX remote attestation for containerized applications, enforcing EPC memory usage control on a per-container basis, providing a general software TPM using SGX to augment legacy applications, and supporting partitioning with a GCC plugin. We then retrofit Nginx/OpenSSL and Memcached using the software TPM and SGX partitioning to defend against known and potential attacks. Thanks to the small EPC footprint of each enclave, we are able to run up to 100 containerized Memcached instances without EPC swapping. Our evaluation shows the overhead introduced by lxcsgx is less than 6.9% for simple SGX applications, 9.5% for Nginx/OpenSSL, and 20.9% for containerized Memcached.
  8. A protocol for two-party secure function evaluation (2P-SFE) aims to allow the parties to learn the output of a function f of their private inputs, while leaking nothing more. In a sense, such a protocol realizes a trusted oracle that computes f and returns the result to both parties. There have been tremendous strides in efficiency over the past ten years, yet 2P-SFE protocols remain impractical for most real-time, online computations, particularly on modestly provisioned devices. Intel's Software Guard Extensions (SGX) provides hardware-protected execution environments, called enclaves, that may be viewed as trusted computation oracles. While SGX provides native CPU speed for secure computation, previous side-channel and micro-architecture attacks have demonstrated how the security guarantees of enclaves can be compromised. In this paper, we explore a balanced approach to 2P-SFE on SGX-enabled processors by constructing a protocol for evaluating f relative to a partitioning of f. This approach alleviates the burden of trust on the enclave by allowing the protocol designer to choose which components should be evaluated within the enclave, and which via standard cryptographic techniques. We describe SGX-enabled SFE protocols (modeling the enclave as an oracle), and formalize the strongest-possible notion of 2P-SFE for our setting. We prove our protocol meets this notion when properly realized. We implement the protocol and apply it to two practical problems: privacy-preserving queries to a database, and a version of Dijkstra's algorithm for privacy-preserving navigation. Our evaluation shows that our SGX-enabled SFE scheme enjoys a 38x increase in performance over garbled-circuit-based SFE. Finally, we justify modeling of the enclave as an oracle by implementing protections against known side-channels. (A conceptual sketch of the enclave-as-oracle partitioning idea appears after this list.)
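
Illustrative sketch for item 1: one way monetization libraries can be flagged statically is by matching class names extracted from a decompiled APK against well-known ad and payment SDK package prefixes. The prefix list and the sample input below are illustrative assumptions for a toy Python example, not the paper's tooling or dataset.

```python
# Toy static check: flag monetization categories by looking for well-known
# ad/payment SDK package prefixes among an app's class names.  The prefix
# list and the sample class names are illustrative, not the paper's data.
MONETIZATION_PREFIXES = {
    "ad_library": ["com.google.android.gms.ads", "com.facebook.ads",
                   "com.unity3d.ads"],
    "in_app_billing": ["com.android.billingclient",
                       "com.android.vending.billing"],
    "payment_processor": ["com.paypal", "com.stripe",
                          "com.braintreepayments"],
}

def classify_monetization(class_names):
    """Return the monetization categories whose SDK prefixes appear in the app."""
    found = set()
    for category, prefixes in MONETIZATION_PREFIXES.items():
        if any(name.startswith(p) for name in class_names for p in prefixes):
            found.add(category)
    return found

# Class names of the kind that would be pulled from a decompiled APK.
sample = ["com.example.app.MainActivity",
          "com.google.android.gms.ads.AdView",
          "com.android.billingclient.api.BillingClient"]
print(classify_monetization(sample))   # {'ad_library', 'in_app_billing'}
```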
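Illustrative sketch for item 3: a minimal particle filter that estimates a transmitter's position from RSSI readings at known sensor locations, the kind of measurement processing the paper moves inside an enclave. The path-loss model, noise parameters, and particle count below are assumed values chosen for the sketch, not figures from the paper.

```python
# Minimal particle-filter localization over RSSI measurements.
import numpy as np

rng = np.random.default_rng(0)

def rssi_model(tx_pos, sensor_pos, tx_power=0.0, path_loss_exp=3.0):
    """Predicted RSSI (dBm) at a sensor via a log-distance path-loss model."""
    d = np.linalg.norm(tx_pos - sensor_pos, axis=-1) + 1e-6
    return tx_power - 10.0 * path_loss_exp * np.log10(d)

def particle_filter_step(particles, weights, sensor_pos, measured_rssi,
                         motion_std=5.0, meas_std=4.0):
    """One predict/update/resample cycle over a cloud of candidate locations."""
    # Predict: jitter particles to model transmitter movement/uncertainty.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # Update: weight each particle by how well it explains every sensor reading.
    for pos, rssi in zip(sensor_pos, measured_rssi):
        predicted = rssi_model(particles, pos)
        weights = weights * np.exp(-0.5 * ((rssi - predicted) / meas_std) ** 2)
    weights = weights + 1e-300            # avoid an all-zero weight vector
    weights = weights / weights.sum()

    # Resample: draw a fresh particle set proportional to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy scenario: three fixed sensors, one transmitter at (40, 60) metres.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_tx = np.array([40.0, 60.0])
measurements = rssi_model(true_tx, sensors) + rng.normal(0, 2.0, len(sensors))

particles = rng.uniform(0, 100, size=(10_000, 2))
weights = np.full(len(particles), 1.0 / len(particles))
for _ in range(20):
    particles, weights = particle_filter_step(particles, weights,
                                              sensors, measurements)
print("estimated transmitter position:", particles.mean(axis=0))
```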
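Illustrative sketch for item 5: a simplified version of the signal preprocessing and feature extraction front end (framing, windowing, FFT, mel filterbank, DCT to MFCCs) that sits before the learned model and is broadly shared across systems. Frame sizes and filterbank parameters are common defaults chosen for the sketch, not values from the paper.

```python
# Simplified MFCC front end: the pipeline stages before the learned model.
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16_000, frame_len=400, hop=160, n_fft=512,
         n_mels=26, n_ceps=13):
    # 1. Frame the waveform into short overlapping windows.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len) + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # 2. Magnitude spectrum of each frame.
    spectrum = np.abs(np.fft.rfft(frames, n_fft))

    # 3. Triangular mel filterbank -> log filterbank energies.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(spectrum @ fbank.T + 1e-10)

    # 4. DCT decorrelates the energies into cepstral coefficients.
    return dct(log_energy, type=2, axis=1, norm="ortho")[:, :n_ceps]

# One second of random "audio" run through the shared front end.
features = mfcc(np.random.randn(16_000))
print(features.shape)   # (n_frames, n_ceps)
```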
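Illustrative sketch for item 6: a deliberately simplified take on frequency-domain shaping, attenuating an adversarial perturbation in bins where the carrier audio has little energy so that the added signal is harder to hear. This conveys only the general intuition behind equalization-style psychoacoustic shaping; it is not the paper's attack, and the gain rule and floor are arbitrary choices for the sketch.

```python
# Simplified frequency-domain shaping of an adversarial perturbation.
import numpy as np

def shape_perturbation(audio, perturbation, floor_db=-40.0):
    """Scale each frequency bin of the perturbation by the carrier's spectrum."""
    A = np.fft.rfft(audio)
    P = np.fft.rfft(perturbation)

    # Per-bin gain: loud carrier bins keep the perturbation,
    # quiet bins attenuate it down to a fixed floor.
    mag = np.abs(A)
    gain = mag / (mag.max() + 1e-12)
    gain = np.maximum(gain, 10 ** (floor_db / 20.0))

    return np.fft.irfft(P * gain, n=len(audio))

audio = np.random.randn(16_000)          # stand-in for benign speech
delta = 0.01 * np.random.randn(16_000)   # stand-in for an attack perturbation
adversarial = audio + shape_perturbation(audio, delta)
```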
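Illustrative sketch for item 8: the partitioning idea in miniature, with the enclave modeled as a trusted oracle that may see both parties' raw inputs and the cryptographic (e.g., garbled-circuit) side reduced to a stub. The example function and its split into components are hypothetical; a real protocol would never expose raw inputs to the cryptographic stage as the stub does.

```python
# Conceptual partitioned evaluation: some components run in an enclave
# (modeled as a trusted oracle), the rest would use standard 2P-SFE machinery
# (only stubbed out here).
def run_in_enclave(fn, alice_input, bob_input):
    # Stand-in for an SGX enclave: trusted code that sees both inputs.
    return fn(alice_input, bob_input)

def run_with_2pc(fn, alice_value, bob_value):
    # Placeholder for garbled-circuit evaluation; a real protocol would
    # never see the raw values like this.
    return fn(alice_value, bob_value)

def private_intersection_size(alice_set, bob_set):
    """Toy f: how many items the parties have in common, and nothing else."""
    # Component 1 (evaluated inside the enclave): intersect raw inputs.
    common = run_in_enclave(lambda a, b: a & b, alice_set, bob_set)
    # Component 2 (evaluated cryptographically): reduce to the final output.
    return run_with_2pc(lambda c, _: len(c), common, None)

print(private_intersection_size({"x", "y", "z"}, {"y", "z", "w"}))   # 2
```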