Named-Data Transport (NDT) is introduced to provide efficient content delivery by name over the existing IP Internet. NDT integrates three end-to-end architectural components: the first connection-free reliable transport protocol, the Named-Data Transport Protocol (NDTP); minor extensions to the Domain Name System (DNS) to include records containing manifests that describe content; and transparent caches that track pending requests for content. NDT uses receiver-driven Interests to request content and NDT proxies that cache content transparently while enforcing privacy. The performance of NDT, the Transmission Control Protocol (TCP), and Named-Data Networking (NDN) is compared using off-the-shelf implementations in the ns-3 simulator. The results show that NDT outperforms TCP and is as efficient as NDN, without requiring any changes to the existing Internet routing infrastructure.
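To make the receiver-driven model concrete, the sketch below (a minimal Python illustration, not the authors' NDTP implementation) shows how a transparent NDT proxy might track pending Interests and cache named content; the Origin class, the in-memory content store, and all names are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Origin:
    """Stand-in for an origin content server reachable over the ordinary IP path."""
    store: Dict[str, bytes]                     # content name -> data

    def get(self, name: str) -> bytes:
        return self.store[name]

@dataclass
class NDTProxy:
    """Transparent cache that tracks pending Interests for named content."""
    origin: Origin
    cache: Dict[str, bytes] = field(default_factory=dict)        # already-fetched content
    pending: Dict[str, List[str]] = field(default_factory=dict)  # name -> waiting consumers

    def interest(self, consumer: str, name: str) -> Optional[bytes]:
        """Handle a receiver-driven Interest for the content identified by `name`."""
        if name in self.cache:                  # cache hit: answer locally
            return self.cache[name]
        waiters = self.pending.setdefault(name, [])
        waiters.append(consumer)
        if len(waiters) > 1:                    # a fetch is already in flight; aggregate
            return None
        data = self.origin.get(name)            # forward the first Interest upstream
        self.cache[name] = data                 # cache for later consumers
        del self.pending[name]
        return data

# Example: two consumers ask for the same named chunk; only one upstream fetch occurs.
proxy = NDTProxy(Origin(store={"/example.com/video/seg1": b"chunk-1"}))
print(proxy.interest("consumerA", "/example.com/video/seg1"))
print(proxy.interest("consumerB", "/example.com/video/seg1"))
```

In this synchronous sketch the second consumer is served from the cache; in a real proxy, where upstream fetches are asynchronous, the pending table is what aggregates duplicate Interests into a single upstream request.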
In the Line of Fire: Risks of DPI-triggered Data Collection
Cybersecurity companies routinely rely on telemetry from inside customer networks to collect intelligence about new online threats. However, the mechanism by which such intelligence is gathered can itself create new security risks. In this paper, we explore one such subtle situation that arises from an intelligence-gathering feature present in FireEye's widely deployed passive deep-packet inspection appliances. In particular, FireEye's systems report back to the company Web requests containing particular content strings of interest. Based on these reports, the company then schedules independent requests for the same content using distributed Internet proxies. By broadly scanning the Internet using a known trigger string, we are able to reverse engineer how these measurements work. We show that these side effects provide a means to empirically establish which networks and network links are protected by such appliances. Further, we also show how to influence the associated proxies to issue requests to any URL.
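The measurement side effect described above can be illustrated with a minimal sketch; the trigger string, domain name, and URL layout below are hypothetical placeholders, not the strings or tooling used in the paper.

```python
# Minimal sketch of the probing idea: embed a (hypothetical) trigger string and a
# unique token in a URL, send the request through the path under test, and watch a
# server you control for later fetches of the same URL from other source addresses.
import uuid
import urllib.request

TRIGGER = "HYPOTHETICAL-TRIGGER-STRING"   # placeholder for a string a DPI appliance matches on
token = uuid.uuid4().hex                  # unique per probe, so follow-up fetches are attributable

probe_url = f"http://measurement.example.net/{token}/{TRIGGER}"  # hypothetical domain
try:
    urllib.request.urlopen(probe_url, timeout=5)  # probe traverses the network path under test
except OSError:
    pass  # the probe itself may fail; what matters is whether a proxy later fetches the URL

# On measurement.example.net, a log watcher would then check whether any request for
# /<token>/... arrives from an address other than the prober's, indicating that the
# path is covered by a reporting appliance and its associated proxies.
```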
- Award ID(s): 2152644
- PAR ID: 10505032
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the 16th Cyber Security Experimentation and Test Workshop
- ISBN: 9798400707889
- Page Range / eLocation ID: 57 to 63
- Format(s): Medium: X
- Location: Marina del Rey CA USA
- Sponsoring Org: National Science Foundation
More Like this
-
With the increasing diversity of application needs (datacenters, IoT, content retrieval, industrial automation, etc.), new network architectures are continually being proposed to address specific requirements. From a network management perspective, it is both important and challenging to enable evolution towards such new architectures. Given the ubiquity of the Internet, a clean-slate change of the entire infrastructure to a new architecture is impractical; instead, new network architectures are likely to come into existence alongside support for interoperability between separate architectural islands. Servers, and more importantly content, may reside in domains with different architectures. This paper presents COIN, a content-oriented interoperability framework for current and future Internet architectures. We seek to provide seamless connectivity and content accessibility across multiple such network architectures, including the current Internet. COIN preserves each domain's key architectural features and mechanisms while allowing flexibility for evolvability and extensibility. We focus on Information-Centric Networks (ICN), the prominent class of Future Internet architectures. COIN avoids expanding domain-specific protocols or namespaces; instead, it uses an application-layer Object Resolution Service to deliver the right "foreign" names to consumers. COIN uses translation gateways that retain essential interoperability state, leverages encryption for confidentiality, and relies on domain-specific signatures to guarantee provenance and data integrity. We evaluate COIN using NDN and MobilityFirst as representative ICN architectures, together with IP. Measurements from an implementation of the gateways show that the overhead is manageable and scales well.
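As a rough illustration of the Object Resolution Service idea described above (a minimal Python sketch under assumed record layouts, not COIN's implementation; all names and the elided GUID are placeholders):

```python
from typing import Dict

# One logical object, known under different names in different architectural islands.
ORS_TABLE: Dict[str, Dict[str, str]] = {
    "video/launch.mp4": {
        "NDN":           "/example/video/launch.mp4",                 # NDN name
        "MobilityFirst": "GUID:...",                                   # placeholder GUID, elided
        "IP":            "https://cdn.example.com/video/launch.mp4",  # URL for the current Internet
    },
}

def resolve(object_id: str, consumer_arch: str) -> str:
    """Return the name a consumer in `consumer_arch` should use for the object."""
    return ORS_TABLE[object_id][consumer_arch]

# A consumer in an NDN island receives an NDN name; a translation gateway then
# carries the request into the producer's domain while keeping per-request state.
print(resolve("video/launch.mp4", "NDN"))
```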
-
Reverse proxy servers play a critical role in optimizing Internet services, offering benefits ranging from load balancing to Denial of Service (DoS) protection. A known shortcoming of such proxies is that the backend server becomes oblivious to the IP address of the client who initiated the connection, since all requests are forwarded by the proxy server. For HTTP, this issue is trivially solved by the X-Forwarded-For header, which allows the proxy server to pass to the backend server the IP address of the client that originated the request. Unfortunately, no such equivalent exists for many other protocols. To solve this issue, HAProxy created the PROXY protocol, which communicates client information from a proxy server to a backend server at a lower level in the network stack (Layer 4), making it protocol agnostic. In this work, we are the first to study the use of the PROXY protocol at Internet scale and investigate the security impact of its misconfigurations. We launched a measurement study on the full IPv4 address range and found that, over HTTP, more than 170,000 hosts accept PROXY protocol data from arbitrary sources. We demonstrate how to abuse this protocol to bypass on-path proxies (and their protections) and leak sensitive information from backend infrastructures. We discovered over 10,000 servers that are vulnerable to an access bypass, triggered by injecting a (spoofed) PROXY protocol header. Using this technique, we obtained access to over 500 internal servers providing control over IoT monitoring platforms and smart home automation devices, allowing us to, for example, regulate remote-controlled window blinds or control security cameras and alarm systems. Beyond HTTP, we demonstrate how the PROXY protocol can be used to turn over 350 SMTP servers into open relays, enabling an attacker to send arbitrary emails from any email address. In sum, our study exposes how PROXY protocol misconfigurations lead to severe security issues that affect multiple protocols prominently used in the wild.
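The access-bypass risk described above can be sketched with the human-readable PROXY protocol v1 header; the target host, paths, and the spoofed addresses below are placeholders, and the code is an illustration of the misconfiguration rather than the paper's measurement tooling.

```python
import socket

def spoofed_request(target: str, port: int = 80) -> bytes:
    """Send an HTTP request preceded by a forged PROXY protocol v1 header."""
    # PROXY protocol v1: "PROXY" <family> <src addr> <dst addr> <src port> <dst port> CRLF.
    # A backend that accepts this header from arbitrary sources will treat 10.0.0.1
    # (a placeholder "internal" address) as the connecting client.
    proxy_hdr = "PROXY TCP4 10.0.0.1 203.0.113.7 12345 80\r\n"
    http_req = (
        "GET /admin HTTP/1.1\r\n"      # placeholder path gated on client address
        f"Host: {target}\r\n"
        "Connection: close\r\n\r\n"
    )
    with socket.create_connection((target, port), timeout=5) as s:
        s.sendall(proxy_hdr.encode() + http_req.encode())
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)
```

If the backend (or an intermediate proxy) makes access decisions based on the claimed source address, the forged header can satisfy checks intended for internal clients only, which is the class of bypass the study measures.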
-
The goal of this short document is to explain why recent developments in the Internet's infrastructure are problematic. As context, we note that the Internet was originally designed to provide a simple universal service - global end-to-end packet delivery - on which a wide variety of end-user applications could be built. The early Internet supported this packet-delivery service via an interconnected collection of commercial Internet Service Providers (ISPs) that we will refer to collectively as the public Internet. The Internet has fulfilled its packet-delivery mission far beyond all expectations and is now the dominant global communications infrastructure. By providing a level playing field on which new applications could be deployed, the Internet has enabled a degree of innovation that no one could have foreseen. To improve performance for some common applications, enhancements such as caching (as in content-delivery networks) have been gradually added to the Internet. The resulting performance improvements are so significant that such enhancements are now effectively necessary to meet current content delivery demands. Despite these tangible benefits, this document argues that the way these enhancements are currently deployed seriously undermines the sustainability of the public Internet and could lead to an Internet infrastructure that reaches fewer people and is largely concentrated among only a few large-scale providers. We wrote this document because we fear that these developments are now decidedly tipping the Internet's playing field towards those who can deploy these enhancements at massive scale, which in turn will limit the degree to which the future Internet can support unfettered innovation. This document begins by explaining our concerns but goes on to articulate how this unfortunate fate can be avoided. To provide more depth for those who seek it, we provide a separate addendum with further detail.
-
It has been long observed that communication between a client and a content server using overlay detours may result in substantially better performance than the native path offered by IP routing. Yet the use of detours has been limited to distributed platforms such as Akamai. This paper poses a question - how can clients practically take advantage of overlay detours without modification to content servers (which are obviously outside clients' control)? We have posited elsewhere that the emergence of gigabit-to-the-home access networks would precipitate a new home network appliance, which would maintain a permanent presence on the Internet for its users and have general computing and storage capabilities. Given such an appliance, our vision is that Internet users may form cooperatives in which members agree to serve as waypoints for each other to improve each other's Internet experience. To make detours transparent to the server, we leverage MPTCP, which normally allows a device to communicate with a server over several network interfaces in parallel; here, we use it instead to communicate through external waypoint hosts. The waypoints then mimic MPTCP's subflows to the server, making the server oblivious to the overlay detours as long as it supports MPTCP.
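A minimal client-side sketch of this idea, under the assumption of a Linux kernel with MPTCP support (5.6 or later); host names are placeholders, and the waypoint relay that mimics subflows toward the server is not shown:

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; fall back to the numeric value on older Python versions.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_get(host: str, path: str = "/") -> bytes:
    """Fetch a URL over an MPTCP connection so extra subflows can be added later."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    s.settimeout(5)
    s.connect((host, 80))
    s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    chunks = []
    while data := s.recv(4096):
        chunks.append(data)
    s.close()
    return b"".join(chunks)

# Additional subflows (e.g., one relayed through a cooperative waypoint host) are
# created by the kernel's MPTCP path manager rather than by the application, which
# is what keeps the detour transparent to an MPTCP-capable content server.
```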