
Title: Virtual Wires: Rethinking WiFi networks
WiFi is the dominant means of home Internet access, yet it is frequently a performance bottleneck. Without reliable, satisfactory performance at the last hop, end-to-end quality-of-service (QoS) efforts will fail. Three major reasons WiFi bottlenecks performance are: 1) the inherent characteristics of the wireless channel, 2) the access-control approach for the shared broadcast channel, and 3) its impact on transport-layer protocols, such as TCP, that operate end-to-end and over-react to the loss or delay caused by the single WiFi link. In this paper, we leverage the philosophy of centralization in modern networking and present a cross-layer design to address the problem. Specifically, we introduce centralized control at the point of entry/egress into the WiFi network. Based on network conditions measured from buffer sizes, airtime, and throughput, flows are scheduled to maximize utility. Unlike most existing WiFi QoS approaches, our design relies only on transparent modifications, requiring no changes to network (including link-layer) protocols, applications, or user intervention. Through extensive experimental investigation, we show that our design significantly enhances the reliability and predictability of WiFi performance, providing a "virtual wire"-like link to the targeted application.
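To make the scheduling idea concrete, below is a minimal sketch, assuming a controller sitting at the WiFi entry/egress point that periodically reads per-flow buffer occupancy, airtime share, and throughput. The utility form, weights, and all names (FlowStats, utility, schedule) are illustrative assumptions, not the paper's published algorithm.

```python
from dataclasses import dataclass
import math

@dataclass
class FlowStats:
    name: str
    queue_bytes: int        # measured buffer occupancy at the choke point
    airtime_frac: float     # fraction of airtime the flow consumes
    throughput_mbps: float  # measured delivery rate

def utility(f: FlowStats) -> float:
    # Proportional-fairness-style rate utility, penalized by queue buildup
    # and airtime cost; the form and weights are illustrative assumptions.
    return (math.log(1.0 + f.throughput_mbps)
            - 1e-6 * f.queue_bytes
            - 0.5 * f.airtime_frac)

def schedule(flows: list[FlowStats]) -> list[FlowStats]:
    # Serve flows in descending utility order so the targeted flow sees
    # a predictable, "virtual wire"-like share of the channel.
    return sorted(flows, key=utility, reverse=True)

flows = [
    FlowStats("video", queue_bytes=200_000, airtime_frac=0.3, throughput_mbps=8.0),
    FlowStats("bulk", queue_bytes=2_000_000, airtime_frac=0.5, throughput_mbps=40.0),
]
for f in schedule(flows):
    print(f.name, round(utility(f), 3))
```

The design point this illustrates is that all three signals are observable at a single choke point, which is why the approach needs no link-layer, application, or user changes.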
Award ID(s):
1717867
NSF-PAR ID:
10094954
Journal Name:
IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
Sponsoring Org:
National Science Foundation
More Like this
  1. When people connect to the Internet with their mobile devices, they do not often think about the security of their data; however, the prevalence of rogue access points has taken advantage of a false sense of safety in unsuspecting victims. This paper analyzes the methods an attacker would use to create rogue WiFi access points using software-defined radio (SDR). To construct a rogue access point, a few essential layers of WiFi need simulation: the physical layer, link layer, network layer, and transport layer. Radio waves carrying WiFi packets, transmitted between two Universal Software Radio Peripherals (USRPs), emulate the physical layer. The link layer consists of the connection between those same USRPs communicating directly with each other, and the network layer expands on this communication by using the network tunneling/network tapping (TUN/TAP) interfaces to tunnel IP packets between the host and the access point (a minimal TUN sketch follows this list). Finally, the establishment of the transport layer constitutes transceiving the packets that pass through the USRPs. In the end, we found that creating a rogue access point and capturing the stream of data from a fabricated "victim" on the Internet was effective and cheap, with SDRs as inexpensive as $20 USD. Our work aims to expose how a cybercriminal could carry out an attack like this in order to prevent and defend against such attacks in the future.
  2. Traditional end-host network stacks are struggling to keep up with rapidly increasing datacenter access link bandwidths due to their unsustainable CPU overheads. Motivated by this, our community is exploring a multitude of solutions for future network stacks: from Linux kernel optimizations to partial hardware offload to clean-slate userspace stacks to specialized host network hardware. The design space explored by these solutions would benefit from a detailed understanding of CPU inefficiencies in existing network stacks. This paper presents measurements and insights on Linux kernel network stack performance for 100Gbps access link bandwidths. Our study reveals that such high-bandwidth links, coupled with relatively stagnant technology trends for other host resources (e.g., core speeds and counts, cache sizes, NIC buffer sizes, etc.), mark a fundamental shift in host network stack bottlenecks. For instance, we find that a single core is no longer able to process packets at line rate, with the data copy from kernel to application buffers at the receiver becoming the core performance bottleneck (a loopback microbenchmark in this spirit follows this list). In addition, increases in bandwidth-delay products have outpaced the increase in cache sizes, resulting in an inefficient DMA pipeline between the NIC and the CPU. Finally, we find that the traditional loosely-coupled design of the network stack and CPU schedulers in existing operating systems becomes a limiting factor in scaling network stack performance across cores. Based on insights from our study, we discuss implications for the design of future operating systems, network protocols, and host hardware.
  3. The rapid growth of mobile data traffic is straining cellular networks. A natural approach to alleviating cellular network congestion is to use, in addition to the cellular interface, secondary interfaces such as WiFi, dynamic spectrum, and mmWave to aid cellular networks in handling mobile traffic. The fundamental question then becomes: how should traffic be distributed over the different interfaces, taking into account different application QoS requirements and the diverse nature of the radio interfaces? To this end, we propose the Discounted Rate Utility Maximization (DRUM) framework with interface costs as a means to quantify application preferences in terms of throughput, delay, and cost. The flow rate allocation problem can be formulated as a convex optimization problem (a toy one-slot instance follows this list). However, solving this problem requires non-causal knowledge of the time-varying capacities of all radio interfaces. To this end, we propose an online predictive algorithm that exploits the predictability of wireless connectivity over a small look-ahead window w. We show that, under some mild conditions, the proposed algorithm achieves a constant competitive ratio independent of the time horizon T. Furthermore, the competitive ratio approaches 1 as the prediction window increases. We also propose another predictive algorithm based on the "receding horizon control" principle from control theory that performs very well in practice. Numerical simulations serve to validate our formulation by showing that, under the DRUM framework, the more delay-tolerant the flow, the less it uses the cellular network, preferring to transmit in high-rate bursts over the secondary interfaces. Conversely, delay-sensitive flows transmit consistently, irrespective of the different interfaces' availability. Simulations also show that the proposed online predictive algorithms have near-optimal performance compared to the offline prescient solution under all considered scenarios.
  4. Network quality-of-service (QoS) does not always translate to user quality-of-experience (QoE). Consequently, knowledge of user QoE is desirable in several scenarios that have traditionally operated on QoS information. Examples include traffic management by ISPs and resource allocation by the operating system. But today these systems lack ways to measure user QoE. To help address this problem, we propose offline generation of per-app models mapping app-independent QoS metrics to app-specific QoE metrics. This enables any entity that can observe an app's network traffic, including ISPs and access points, to infer the app's QoE (a toy model-training sketch follows this list). We describe how to generate such models for many diverse apps with significantly different QoE metrics. We generate models for common user interactions of 60 popular apps. We then demonstrate the utility of these models by implementing a QoE-aware traffic management framework and evaluating it on a WiFi access point. Our approach successfully improves QoE metrics that reflect user-perceived performance. First, we demonstrate that prioritizing traffic for latency-sensitive apps can improve responsiveness and video frame rate by 46% and 115%, respectively. Second, we show that a novel QoE-aware bandwidth allocation scheme for bandwidth-intensive apps can improve average video bitrate for multiple users by up to 23%.
  5. Cloud virtualization and multi-tenant networking give Infrastructure-as-a-Service (IaaS) providers a new and innovative way to offer on-demand services to their customers, such as easy provisioning of new applications and better resource efficiency and scalability. However, existing data-intensive intelligent applications require more powerful processors, higher bandwidth, and lower-latency networking services. In order to boost the performance of computing and networking services, as well as reduce the overhead of software virtualization, we propose a new data center network design based on OpenStack. Specifically, we map the OpenStack networking services to the hardware switch and utilize hardware-accelerated L2 switching and L3 routing to overcome the software limitations, while achieving software-like scalability and flexibility. We design our prototype system on an Arista Software-Defined Networking (SDN) switch and provide an automatic script that abstracts the service layer, decoupling OpenStack from the physical network infrastructure and thereby providing vendor independence (a sketch of such a driver abstraction follows this list). We have evaluated the performance improvement in terms of bandwidth, delay, and system resource utilization using various tools and under various quality-of-service (QoS) constraints. Our solution demonstrates improved cloud scaling and network efficiency via only one touch point to control all vendors' devices in the data center.
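For item 1, here is a minimal sketch of the network-layer piece: the TUN interface that tunnels IP packets between the host and the access point. It assumes a Linux host with root privileges; the SDR physical and link layers are out of scope and stubbed, and the usrp_send hook named in the comment is hypothetical.

```python
# Open a Linux TUN device and read IP packets from the host stack.
import fcntl
import os
import struct

TUNSETIFF = 0x400454CA          # ioctl that configures a tun/tap device
IFF_TUN, IFF_NO_PI = 0x0001, 0x1000

def open_tun(name: str = "tun0") -> int:
    """Create a TUN interface and return its file descriptor.
    Requires root and the /dev/net/tun device node."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", name.encode(), IFF_TUN | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    return fd

if __name__ == "__main__":
    tun = open_tun()
    while True:
        ip_packet = os.read(tun, 2048)  # IP packet handed down by the kernel
        # Hand the packet to the SDR link layer (hypothetical stub):
        # usrp_send(ip_packet)
        print(f"captured {len(ip_packet)}-byte IP packet")
```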
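For item 2, a loopback microbenchmark in the spirit of the study's receiver-side analysis: it times how fast a single receive path can pull bytes through the kernel socket code, where the kernel-to-user data copy happens. The transfer and buffer sizes are arbitrary assumptions, and an interpreted-language sketch will of course undershoot the instrumented kernel numbers reported in the paper.

```python
# Measure single-path TCP receive throughput over loopback.
import socket
import threading
import time

N = 1 << 30       # 1 GiB total transfer (assumed size)
CHUNK = 1 << 20   # 1 MiB application buffer

def sender(port: int) -> None:
    s = socket.create_connection(("127.0.0.1", port))
    buf = bytearray(CHUNK)
    sent = 0
    while sent < N:
        sent += s.send(buf)
    s.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=sender, args=(port,), daemon=True).start()

conn, _ = srv.accept()
buf = bytearray(CHUNK)
got, t0 = 0, time.perf_counter()
while got < N:
    n = conn.recv_into(buf)  # the kernel-to-user data copy happens here
    if n == 0:
        break
    got += n
dt = time.perf_counter() - t0
print(f"{got * 8 / dt / 1e9:.2f} Gbps on one receive path")
```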
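For item 3, a one-slot toy instance of a DRUM-style allocation: maximize a concave rate utility minus per-interface costs, subject to interface capacity bounds. The log utility, cost values, and capacities are assumptions for illustration; the paper's contribution is the online algorithm that handles time-varying capacities with a look-ahead window w, which this offline snapshot does not capture.

```python
import numpy as np
from scipy.optimize import minimize

cap = np.array([20.0, 50.0])   # [cellular, WiFi] capacities in Mbps (assumed)
cost = np.array([0.8, 0.0])    # per-Mbps interface cost; cellular is expensive

def neg_objective(r):
    # Concave rate utility minus interface cost (illustrative DRUM-style form).
    return -(np.log(1.0 + r.sum()) - cost @ r)

res = minimize(neg_objective, x0=np.array([1.0, 1.0]),
               bounds=[(0.0, c) for c in cap])  # 0 <= r_i <= capacity_i
print("allocated rates (Mbps):", res.x.round(2))
```

As the abstract suggests, the solver pushes essentially all traffic onto the cheap secondary interface and leaves the costly cellular link idle.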
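For item 4, a sketch of the offline model-generation step: fit a per-app regressor mapping app-independent QoS features to an app-specific QoE metric such as video frame rate. The synthetic training data and the random-forest model choice are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic traces of QoS features: [throughput_mbps, rtt_ms, loss_pct]
qos = rng.uniform([0.5, 10, 0], [50, 300, 5], size=(500, 3))
# Assumed ground truth: frame rate saturates with throughput and
# degrades with RTT and loss.
fps = np.clip(60 * qos[:, 0] / (qos[:, 0] + 5)
              - 0.05 * qos[:, 1] - 2 * qos[:, 2], 0, 60)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(qos, fps)

# An AP observing only a flow's QoS can now infer the app's QoE:
print("predicted fps:", model.predict([[8.0, 40.0, 0.5]]).round(1))
```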
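For item 5, a sketch of the vendor-independence idea: OpenStack-facing code programs an abstract driver, and a per-vendor subclass translates the calls into device commands. All class and method names here are hypothetical, and a real driver would invoke the switch's management API rather than print.

```python
from abc import ABC, abstractmethod

class SwitchDriver(ABC):
    """Service-layer abstraction decoupling OpenStack from the fabric."""
    @abstractmethod
    def create_l2_segment(self, vlan_id: int) -> None: ...
    @abstractmethod
    def add_l3_route(self, prefix: str, next_hop: str) -> None: ...

class AristaDriver(SwitchDriver):
    def create_l2_segment(self, vlan_id: int) -> None:
        self._run(f"vlan {vlan_id}")                 # hardware-accelerated L2

    def add_l3_route(self, prefix: str, next_hop: str) -> None:
        self._run(f"ip route {prefix} {next_hop}")   # hardware L3 routing

    def _run(self, cmd: str) -> None:
        print(f"[switch-API stub] {cmd}")            # stand-in for a real call

fabric: SwitchDriver = AristaDriver()  # one touch point for all devices
fabric.create_l2_segment(100)
fabric.add_l3_route("10.0.0.0/24", "10.0.0.1")
```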