Title: In the Line of Fire: Risks of DPI-triggered Data Collection
Cybersecurity companies routinely rely on telemetry from inside customer networks to collect intelligence about new online threats. However, the mechanism by which such intelligence is gathered can itself create new security risks. In this paper, we explore one such subtle situation that arises from an intelligence-gathering feature present in FireEye's widely deployed passive deep-packet inspection appliances. In particular, FireEye's systems report back to the company Web requests containing particular content strings of interest. Based on these reports, the company then schedules independent requests for the same content using distributed Internet proxies. By broadly scanning the Internet using a known trigger string, we are able to reverse engineer how these measurements work. We show that these side effects provide a means to empirically establish which networks and network links are protected by such appliances. Further, we show how to influence the associated proxies to issue requests to any URL.
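A minimal sketch of the style of measurement the abstract describes, assuming we control both a scanner and a web server; the trigger string, hostnames, and probe routing below are illustrative placeholders, not the paper's actual setup:

```python
# Hedged sketch (not the paper's exact method): send probe requests that embed a
# known trigger string in a URL unique to each target, then watch our own web
# server's logs for third-party fetches of that URL. All names are hypothetical.
import uuid
import requests  # third-party HTTP client, assumed available

MEASUREMENT_HOST = "measurement.example.net"  # web server we control (placeholder)
TRIGGER = "EXAMPLE-TRIGGER-STRING"            # stand-in for a real content string of interest

def probe(target_ip: str) -> str:
    """Send one probe across the network in front of target_ip; return its token."""
    token = uuid.uuid4().hex  # unique token ties any later fetch back to this target
    path = f"/{TRIGGER}/{token}"
    try:
        # The Host header points at our server, so an appliance (or its proxy)
        # that independently re-requests the same content ends up contacting us.
        requests.get(f"http://{target_ip}{path}",
                     headers={"Host": MEASUREMENT_HOST}, timeout=5)
    except requests.RequestException:
        pass  # unreachable or filtered targets are expected during a broad scan
    return token

# Detection step (offline): a GET for /{TRIGGER}/{token} in the measurement
# server's access log from an address other than our scanner suggests a passive
# DPI appliance observed the probe on the path toward target_ip.
```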
Award ID(s):
2152644
PAR ID:
10505032
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the 16th Cyber Security Experimentation and Test Workshop
ISBN:
9798400707889
Page Range / eLocation ID:
57 to 63
Format(s):
Medium: X
Location:
Marina del Rey CA USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Named-Data Transport (NDT) is introduced to provide efficient content delivery by name over the existing IP Internet. NDT integrates three end-to-end architectural components: the Named-Data Transport Protocol (NDTP), the first connection-free reliable transport protocol; minor extensions to the Domain Name System (DNS) to include records containing manifests describing content; and transparent caches that track pending requests for content. NDT uses receiver-driven requests (Interests) to request content, and NDT proxies provide transparent caching of content while enforcing privacy. The performance of NDT, the Transmission Control Protocol (TCP), and Named-Data Networking (NDN) is compared using off-the-shelf implementations in the ns-3 simulator. The results demonstrate that NDT outperforms TCP and is as efficient as NDN, without making any changes to the existing Internet routing infrastructure.
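A hedged sketch of two of the ideas summarized above: a DNS-style manifest describing content as named chunks, and a transparent cache that coalesces pending Interests for the same name. Field names are illustrative, not taken from the NDT specification.

```python
from dataclasses import dataclass, field

@dataclass
class Manifest:
    """Content description a resolver could return alongside a name lookup."""
    content_name: str          # e.g. "example.com/videos/intro"
    chunk_names: list[str]     # ordered names of the chunks making up the content
    chunk_size: int            # bytes per chunk

@dataclass
class TransparentCache:
    """Tracks pending Interests so duplicate requests are served by one upstream fetch."""
    store: dict[str, bytes] = field(default_factory=dict)        # satisfied content
    pending: dict[str, list[str]] = field(default_factory=dict)  # name -> waiting receivers

    def interest(self, name: str, receiver: str) -> bytes | None:
        if name in self.store:                               # cache hit: answer immediately
            return self.store[name]
        self.pending.setdefault(name, []).append(receiver)   # coalesce with other requests
        return None                                          # caller forwards one Interest upstream

    def data_arrived(self, name: str, data: bytes) -> list[str]:
        self.store[name] = data
        return self.pending.pop(name, [])                    # every waiter gets the same data
```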
  2. With the increasing diversity of application needs (datacenters, IoT, content retrieval, industrial automation, etc.), new network architectures are continually being proposed to address specific requirements. From a network management perspective, it is both important and challenging to enable evolution towards such new architectures. Given the ubiquity of the Internet, a clean-slate change of the entire infrastructure to a new architecture is impractical. It is expected that new network architectures will come into existence with support for interoperability between separate architectural islands. We may have servers, and more importantly content, residing in domains with different architectures. This paper presents COIN, a content-oriented interoperability framework for current and future Internet architectures. We seek to provide seamless connectivity and content accessibility across multiple such network architectures, including the current Internet. COIN preserves each domain's key architectural features and mechanisms while allowing flexibility for evolvability and extensibility. We focus on Information-Centric Networks (ICN), the prominent class of Future Internet architectures. COIN avoids expanding domain-specific protocols or namespaces. Instead, it uses an application-layer Object Resolution Service to deliver the right “foreign” names to consumers. COIN uses translation gateways that retain essential interoperability state, leverages encryption for confidentiality, and relies on domain-specific signatures to guarantee provenance and data integrity. We evaluate COIN using NDN and MobilityFirst, two important candidate ICN solutions, together with IP. Measurements from an implementation of the gateways show that the overhead is manageable and scales well.
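A minimal sketch of the Object Resolution Service idea described above: an application-layer lookup that maps one piece of content to the "foreign" name used in each architecture, so a gateway can fetch it natively on either side. The name formats are illustrative guesses, not COIN's actual encoding.

```python
ORS_TABLE = {
    "article/coin-paper.pdf": {
        "NDN": "/example/pubs/coin-paper.pdf",            # hierarchical NDN-style name
        "MobilityFirst": "GUID:example-guid",              # flat self-certifying identifier (placeholder)
        "IP": "https://pubs.example.org/coin-paper.pdf",   # ordinary URL
    },
}

def resolve(content_name: str, consumer_arch: str) -> str:
    """Return the name a consumer in `consumer_arch` should request."""
    try:
        return ORS_TABLE[content_name][consumer_arch]
    except KeyError as exc:
        raise LookupError(f"no {consumer_arch} name registered for {content_name}") from exc

# A translation gateway between an NDN island and the IP Internet could call
# resolve(name, "IP") for an incoming NDN Interest, fetch the object over HTTPS,
# and return the data subject to the originating domain's signing conventions.
```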
  3. The goal of this short document is to explain why recent developments in the Internet's infrastructure are problematic. As context, we note that the Internet was originally designed to provide a simple universal service - global end-to-end packet delivery - on which a wide variety of end-user applications could be built. The early Internet supported this packet-delivery service via an interconnected collection of commercial Internet Service Providers (ISPs) that we will refer to collectively as the public Internet. The Internet has fulfilled its packet-delivery mission far beyond all expectations and is now the dominant global communications infrastructure. By providing a level playing field on which new applications could be deployed, the Internet has enabled a degree of innovation that no one could have foreseen. To improve performance for some common applications, enhancements such as caching (as in content-delivery networks) have been gradually added to the Internet. The resulting performance improvements are so significant that such enhancements are now effectively necessary to meet current content delivery demands. Despite these tangible benefits, this document argues that the way these enhancements are currently deployed seriously undermines the sustainability of the public Internet and could lead to an Internet infrastructure that reaches fewer people and is largely concentrated among only a few large-scale providers. We wrote this document because we fear that these developments are now decidedly tipping the Internet's playing field towards those who can deploy these enhancements at massive scale, which in turn will limit the degree to which the future Internet can support unfettered innovation. This document begins by explaining our concerns but goes on to articulate how this unfortunate fate can be avoided. To provide more depth for those who seek it, we provide a separate addendum with further detail.
  4. It has long been observed that communication between a client and a content server using overlay detours may result in substantially better performance than the native path offered by IP routing. Yet the use of detours has been limited to distributed platforms such as Akamai. This paper poses a question - how can clients practically take advantage of overlay detours without modification to content servers (which are obviously outside clients' control)? We have posited elsewhere that the emergence of gigabit-to-the-home access networks would precipitate a new home network appliance, which would maintain a permanent presence on the Internet for its users and have general computing and storage capabilities. Given such an appliance, our vision is that Internet users may form cooperatives in which members agree to serve as waypoints for each other to improve each other's Internet experience. To make detours transparent to the server, we leverage MPTCP, which normally allows a device to communicate with the server over several network interfaces in parallel; here we use it to communicate through external waypoint hosts. The waypoints then mimic MPTCP's subflows to the server, making the server oblivious to the overlay detours as long as it supports MPTCP.
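A hedged sketch of the client side of this idea: on a Linux kernel with MPTCP enabled (5.6+), a client can open an MPTCP socket so that additional subflows, such as ones steered through a cooperative waypoint, can later join the same connection. The waypoint steering itself is routing/policy configuration outside this snippet, and the host below is a placeholder.

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; fall back to the numeric value if the running
# Python's socket module does not expose the constant.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def fetch_front_page(host: str = "example.org") -> bytes:
    """Issue a plain HTTP/1.1 GET over an MPTCP connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP) as s:
        s.connect((host, 80))
        s.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
        return b"".join(chunks)

# If the server also supports MPTCP, the kernel negotiates it transparently and
# extra subflows can be added later; otherwise the connection simply falls back
# to ordinary TCP, consistent with the server-oblivious goal described above.
```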
  5. A variety of proxies have been developed to reconstruct paleo-CO2 from fossil leaves. These proxies rely on some combination of stomatal morphology, leaf δ13C, and leaf gas exchange. A common conceptual framework for evaluating these proxies is lacking, which has hampered efforts at inter-comparison. Here we develop such a framework, based on the underlying physics and biochemistry. From this conceptual framework, we find that the more extensively parameterised proxies, such as the optimisation model, are likely to be the most robust. The simpler proxies, such as the stomatal ratio model, tend to under-predict CO2, especially in warm (>15°C) and moist (>50% humidity) environments. This identification of a structural under-prediction may help to explain the common observation that the simpler proxies often produce estimates of paleo-CO2 that are lower than those from the more complex proxies and other, non-leaf-based CO2 proxies. The use of extensively parameterised models is not always possible, depending on the preservation state of the fossils and the state of knowledge about the fossil's nearest living relative. With this caveat in mind, our analysis highlights the value of using the most complex leaf-based model possible.
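For context, a hedged sketch of one common form of the gas-exchange relation on which such leaf-based proxies rest; the optimisation and stomatal-ratio models parameterise these terms differently, so this is a simplification rather than any one proxy's equation.

```latex
% Net assimilation as diffusion across the total leaf conductance to CO2:
A_n = g_{c(\mathrm{tot})}\,(c_a - c_i)
\quad\Longrightarrow\quad
c_a = \frac{A_n}{g_{c(\mathrm{tot})}\,\bigl(1 - c_i/c_a\bigr)}
% A_n: net CO2 assimilation rate; g_{c(tot)}: total leaf conductance to CO2,
% reconstructed largely from stomatal density and size in fossils;
% c_a, c_i: atmospheric and intercellular CO2; the ratio c_i/c_a is typically
% constrained from leaf \delta^{13}C.
```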