
Title: Virtual Wires: Rethinking WiFi networks
WiFi is the dominant means of home Internet access, yet it is frequently a performance bottleneck. Without reliable, satisfactory performance at the last hop, end-to-end quality of service (QoS) efforts will fail. WiFi bottlenecks performance for three major reasons: 1) the inherent characteristics of the wireless channel, 2) its approach to access control of the shared broadcast channel, and 3) its impact on transport-layer protocols, such as TCP, that operate end-to-end and over-react to the loss or delay caused by the single WiFi link. In this paper, we leverage the philosophy of centralization in modern networking and present a cross-layer design to address the problem. Specifically, we introduce centralized control at the point of entry/egress into the WiFi network. Based on network conditions measured from buffer sizes, airtime, and throughput, flows are scheduled to maximize utility. Unlike most existing WiFi QoS approaches, our design relies only on transparent modifications, requiring no changes to network (including link-layer) protocols, applications, or user intervention. Through extensive experimental investigation, we show that our design significantly enhances the reliability and predictability of WiFi performance, providing a "virtual wire"-like link to the targeted application.
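The scheduling step, allocating airtime to flows so as to maximize aggregate utility under measured conditions, can be sketched with a standard weighted proportional-fairness split. This is a minimal sketch: the weights, flow names, and buffer-driven re-weighting rule below are illustrative assumptions, not the paper's exact algorithm.

```python
def airtime_shares(weights):
    """Weighted proportional fairness: maximizing sum_f w_f * log(x_f)
    subject to sum_f x_f = 1 has the closed form x_f = w_f / sum(w)."""
    total = sum(weights.values())
    return {flow: w / total for flow, w in weights.items()}

def reweight(weights, buffer_bytes, threshold=64 * 1024):
    """Illustrative control rule (hypothetical policy): boost a flow whose
    queue is building up, using buffer occupancy as the congestion signal."""
    return {flow: w * (2.0 if buffer_bytes.get(flow, 0) > threshold else 1.0)
            for flow, w in weights.items()}

weights = {"video_call": 3.0, "bulk_download": 1.0}
shares = airtime_shares(weights)          # video_call gets 3/4 of the airtime
boosted = airtime_shares(reweight(weights, {"video_call": 128 * 1024}))
```

The closed form makes the controller cheap to re-run each measurement epoch, which matters when conditions on the shared channel change quickly.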
Journal Name:
IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
Sponsoring Org:
National Science Foundation
More Like this
  1. With the advent of 5G, supporting high-quality game streaming applications on edge devices has become a reality. This is evidenced by a recent surge in cloud gaming applications on mobile devices. In contrast to video streaming applications, interactive games require much more compute power to support improved rendering (such as 4K streaming) within the stipulated frames-per-second (FPS) constraints. This in turn consumes more battery power on a power-constrained mobile device. Thus, state-of-the-art gaming applications suffer from lower video quality (QoS) and/or energy efficiency. While there has been a plethora of recent work on optimizing game streaming applications, to our knowledge, there is no study that systematically investigates the design choices across the end-to-end game streaming pipeline (cloud, network, and edge devices) to understand the individual contributions of the different pipeline stages to the overall QoS and energy efficiency. In this context, this paper presents a comprehensive performance and power analysis of the entire game streaming pipeline, consisting of the server/cloud side, the network, and the edge. Through extensive measurements with a high-end workstation mimicking the cloud end, an open-source platform (Moonlight-GameStreaming) emulating the edge device/mobile platform, and two network settings (WiFi and 5G), we conduct a detailed measurement-based study with seven representative games with different characteristics. We characterize performance in terms of frame latency, QoS, bitrate, and energy consumption for the different stages of the gaming pipeline. Our study shows that the rendering and encoding stages at the cloud end are the bottlenecks for supporting 4K streaming. While 5G is certainly more suitable for supporting enhanced video quality with 4K streaming, it is more expensive in terms of power consumption than WiFi. Further, fluctuations in 5G network quality can lead to huge frame drops, affecting QoS, which needs to be addressed by a coordinated design between the edge device and the server. Finally, the network interface and decoder units in a mobile platform need more energy-efficient designs to support high-quality games at lower cost. These observations should help in designing more cost-effective future cloud gaming platforms.
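The FPS constraint above translates directly into a per-frame latency budget, and raw 4K frame sizes show why the encoding stage sits on the critical path. A quick back-of-the-envelope check (8-bit 4:2:0 chroma subsampling is a standard value; the 60 FPS target is an assumed example):

```python
def per_frame_budget_ms(fps):
    """End-to-end time budget per frame: render + encode + network + decode
    must all fit inside this window to sustain the target FPS."""
    return 1000.0 / fps

def raw_bitrate_gbps(width, height, fps, bytes_per_pixel=1.5):
    """Uncompressed bitrate for 8-bit 4:2:0 video (1.5 bytes per pixel)."""
    return width * height * bytes_per_pixel * fps * 8 / 1e9

budget = per_frame_budget_ms(60)        # ~16.7 ms per frame at 60 FPS
raw = raw_bitrate_gbps(3840, 2160, 60)  # ~5.97 Gbps uncompressed 4K60
```

The ~6 Gbps uncompressed rate is far beyond what WiFi or 5G can carry, so every millisecond the encoder spends compressing comes out of the same 16.7 ms budget shared with the network and the decoder.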
  2. When people connect to the Internet with their mobile devices, they often do not think about the security of their data; however, rogue access points have taken advantage of a false sense of safety in unsuspecting victims. This paper analyzes the methods an attacker would use to create rogue WiFi access points using software-defined radio (SDR). To construct a rogue access point, a few essential layers of WiFi must be simulated: the physical layer, link layer, network layer, and transport layer. Radio waves carrying WiFi packets, transmitted between two Universal Software Radio Peripherals (USRPs), emulate the physical layer. The link layer consists of the connection between those same USRPs communicating directly with each other, and the network layer expands on this communication by using the network tunneling/network tapping (TUN/TAP) interfaces to tunnel IP packets between the host and the access point. Finally, establishing the transport layer amounts to transceiving the packets that pass through the USRPs. In the end, we found that creating a rogue access point and capturing the stream of data from a fabricated "victim" on the Internet was effective and cheap, with SDRs as inexpensive as $20 USD. Our work aims to expose how a cybercriminal could carry out such an attack, in order to prevent and defend against it in the future.
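The layering described above, IP packets pulled from a TUN interface and carried in link-layer frames over the USRPs, can be illustrated with a minimal 802.11 data-frame encapsulation. The 24-byte header layout below follows the standard, but FCS, QoS, and encryption fields are omitted, and the addresses are placeholder values:

```python
import struct

def wrap_80211_data(ip_packet, dst, src, bssid, seq):
    """Prepend a minimal IEEE 802.11 data-frame header to an IP packet
    (as read from a TUN interface): Frame Control, Duration/ID, three
    48-bit addresses, and Sequence Control. Fields are little-endian."""
    frame_control = 0x0008                  # version 0, type=data, subtype=0
    header = struct.pack("<HH6s6s6sH",
                         frame_control, 0,  # duration left at 0 here
                         dst, src, bssid,
                         seq << 4)          # fragment number 0
    return header + ip_packet

packet = bytes(20)                          # placeholder 20-byte IP header
frame = wrap_80211_data(packet, b"\xff" * 6, b"\x02" + bytes(5),
                        b"\x02" + bytes(5), seq=1)
```

Decapsulation on the far side is the mirror image: strip the 24-byte header and hand the inner IP packet back to the TUN device, which is what lets ordinary host networking ride over the SDR link.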
  3. Modern data center storage systems are invariably networked to allow for consolidation and flexible management of storage. They also include high-performance storage devices based on flash or other emerging technologies, generally accessed through low-latency, high-throughput protocols such as Non-Volatile Memory Express (NVMe) (or its derivatives) carried over the network. With the increasing complexity and data-centric nature of applications, properly configuring the quality of service (QoS) for the storage path has become crucial for ensuring the desired application performance. Such QoS is substantially influenced by the QoS in the network path, in the access protocol, and in the storage device. In this article, we define a new transport-level QoS mechanism for the network segment and demonstrate how it can augment and coordinate with the access-level QoS mechanism defined for NVMe and a similar QoS mechanism configured in the device. We show that the transport QoS mechanism not only provides the desired QoS to different classes of storage accesses but is also able to protect access to shared persistent memory devices that are co-located with the storage but require much lower latency than storage. We demonstrate that a properly coordinated configuration of the three QoS mechanisms on the path is crucial to achieve the desired differentiation, depending on where the bottlenecks appear.

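A common building block for the per-class transport QoS described above is a token bucket per traffic class, with a small-burst bucket reserved for the latency-sensitive persistent-memory class so a burst of bulk storage I/O cannot crowd it out. The rates, burst sizes, and class names below are illustrative assumptions, not the article's configuration:

```python
class TokenBucket:
    """Classic token bucket: admits a request only if enough byte-tokens
    have accumulated at the configured rate since the last check."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def admit(self, size_bytes, now):
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False

# One bucket per storage class; pmem gets a guaranteed low-latency slice.
classes = {
    "pmem":    TokenBucket(rate_bytes_per_s=500_000, burst_bytes=4_096),
    "nvme_hi": TokenBucket(rate_bytes_per_s=300_000, burst_bytes=65_536),
    "nvme_lo": TokenBucket(rate_bytes_per_s=100_000, burst_bytes=65_536),
}
```

Coordination then means configuring consistent class rates at the transport, the NVMe access level, and the device, so that no single stage becomes the uncontrolled bottleneck.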
  4.
    Traditional end-host network stacks are struggling to keep up with rapidly increasing datacenter access link bandwidths due to their unsustainable CPU overheads. Motivated by this, our community is exploring a multitude of solutions for future network stacks: from Linux kernel optimizations to partial hardware offload to clean-slate userspace stacks to specialized host network hardware. The design space explored by these solutions would benefit from a detailed understanding of CPU inefficiencies in existing network stacks. This paper presents measurements and insights for Linux kernel network stack performance at 100Gbps access link bandwidths. Our study reveals that such high-bandwidth links, coupled with relatively stagnant technology trends for other host resources (e.g., core speeds and counts, cache sizes, NIC buffer sizes, etc.), mark a fundamental shift in host network stack bottlenecks. For instance, we find that a single core is no longer able to process packets at line rate, with data copy from kernel to application buffers at the receiver becoming the core performance bottleneck. In addition, the increase in bandwidth-delay products has outpaced the increase in cache sizes, resulting in an inefficient DMA pipeline between the NIC and the CPU. Finally, we find that the traditional loosely coupled design of network stack and CPU schedulers in existing operating systems becomes a limiting factor in scaling network stack performance across cores. Based on insights from our study, we discuss implications for the design of future operating systems, network protocols, and host hardware.
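The cache-versus-BDP mismatch the study points to is easy to quantify. The 50 µs RTT below is an assumed intra-datacenter example; actual RTTs vary:

```python
def bdp_bytes(link_gbps, rtt_us):
    """Bandwidth-delay product: bytes that must be in flight to keep the
    link full, and hence the working set the NIC-CPU DMA pipeline touches."""
    return link_gbps * 1e9 / 8 * rtt_us / 1e6

inflight = bdp_bytes(100, 50)   # 625,000 bytes at 100 Gbps x 50 us
# A per-core L2 cache of around 1 MB could hold one such flow, but with
# multiple concurrent flows plus NIC buffering, the in-flight data quickly
# exceeds what the cache can retain, forcing DRAM round-trips on the copy.
```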
  5. Vanbever, Laurent; Zhang, Irene (Eds.)
    In response to concerns about protocol ossification and privacy, post-TCP transport protocols such as QUIC and WebRTC include end-to-end encryption and authentication at the transport layer. This makes their packets opaque to middleboxes, freeing the transport protocol to evolve but preventing some in-network innovations and performance improvements. This paper describes sidekick protocols: an approach to in-network assistance for opaque transport protocols where in-network intermediaries help endpoints by sending information adjacent to the underlying connection, which remains opaque and unmodified on the wire. A key technical challenge is how the sidekick connection can efficiently refer to ranges of packets of the underlying connection without the ability to observe cleartext sequence numbers. We present a mathematical tool called a quACK that concisely represents a selective acknowledgment of opaque packets, without access to cleartext sequence numbers. In real-world and emulation-based evaluations, the sidekick improved performance in several scenarios: early retransmission over lossy Wi-Fi paths, proxy acknowledgments to save energy, and a path-aware congestion-control mechanism we call PACUBIC that emulates a “split” connection. 
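The core idea behind the quACK, summarizing a set of opaque packets without cleartext sequence numbers, can be illustrated with the simplest possible variant: a (count, sum) pair over random per-packet identifiers, which suffices to decode a single missing packet. This is a toy sketch; the actual quACK generalizes the idea (e.g., with power sums) to decode multiple losses:

```python
def quack(received_ids):
    """Receiver side: a constant-size summary of every opaque packet seen,
    regardless of how many packets that is."""
    return (len(received_ids), sum(received_ids))

def decode_one_missing(sent_ids, summary):
    """Sender side: with exactly one loss, the missing identifier is the
    difference of the sums; no cleartext sequence numbers are needed."""
    count, total = summary
    assert len(sent_ids) - count == 1, "toy decoder handles one loss only"
    return sum(sent_ids) - total

sent = [0x9A1F, 0x44C2, 0x7E03]             # pseudorandom per-packet IDs
summary = quack([0x9A1F, 0x7E03])           # the second packet was lost
missing = decode_one_missing(sent, summary) # recovers 0x44C2
```

Because the summary stays constant-size, the sidekick intermediary can report it frequently without scaling its overhead with the number of packets in flight.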