Title: RAVEN: Improving Interactive Latency for the Connected Car
Abstract:
Increasingly, vehicles sold today are connected cars: they offer vehicle-to-infrastructure connectivity through built-in WiFi and cellular interfaces, and they act as mobile hotspots for devices in the vehicle. We study the connection quality available to connected cars today, focusing on user-facing, latency-sensitive applications. We find that network latency varies significantly and unpredictably at short time scales and that high tail latency substantially degrades user experience. We also find that coverage options have increased due to commercial WiFi offerings, and that latency variations across network options are not well correlated. Based on these findings, we develop RAVEN, an in-kernel MPTCP scheduler that mitigates tail latency and network unpredictability by using redundant transmission when confidence about network latency predictions is low. RAVEN has several novel design features. It operates transparently, without application modification or hints, to improve interactive latency. It seamlessly supports three or more wireless networks. Its in-kernel implementation allows proactive cancellation of transmissions made unnecessary through redundancy. Finally, it explicitly considers how the age of measurements affects confidence in predictions, allowing better handling of interactive applications that transmit infrequently and networks that exhibit periods of temporarily poor performance. Results from speech, music, and recommender applications in both emulated and live vehicle experiments show substantial improvement in application response time.
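The abstract's key mechanism is a per-transmission choice between a single predicted-best path and redundant transmission over all available networks, with prediction confidence discounted by the age of the underlying latency measurements. The sketch below illustrates that decision logic in user-level Python only as a conceptual aid; RAVEN itself is an in-kernel MPTCP scheduler, and the estimator, the staleness penalty, and every constant here are assumptions for illustration, not values from the paper.

```python
import math
import time


class PathEstimate:
    """Per-path latency estimate with an age-aware pessimistic bound.

    The EWMA estimator, the variance term, and the staleness penalty are
    illustrative choices, not RAVEN's actual model.
    """

    def __init__(self, name):
        self.name = name
        self.mean_ms = None      # smoothed RTT estimate
        self.var_ms2 = 0.0       # variance of recent RTT samples
        self.last_sample = None  # wall-clock time of the newest measurement

    def update(self, rtt_ms, alpha=0.125):
        if self.mean_ms is None:
            self.mean_ms = rtt_ms
        else:
            err = rtt_ms - self.mean_ms
            self.mean_ms += alpha * err
            self.var_ms2 = (1 - alpha) * (self.var_ms2 + alpha * err * err)
        self.last_sample = time.time()

    def upper_bound(self, k=2.0, age_penalty_ms_per_s=5.0):
        """Pessimistic latency bound that widens as measurements grow stale."""
        if self.mean_ms is None:
            return math.inf
        age_s = time.time() - self.last_sample
        return self.mean_ms + k * math.sqrt(self.var_ms2) + age_penalty_ms_per_s * age_s


def choose_paths(paths):
    """Return the paths to transmit on: one path when confidence is high,
    every path (redundant transmission) when it is low."""
    ranked = sorted(paths, key=lambda p: p.upper_bound())
    best, rest = ranked[0], ranked[1:]
    if not rest:
        return [best]
    others_optimistic = min(
        p.mean_ms if p.mean_ms is not None else math.inf for p in rest
    )
    # Confident: even the pessimistic view of the best path beats the
    # optimistic view of every alternative, so a single copy suffices.
    if best.upper_bound() < others_optimistic:
        return [best]
    # Low confidence: send a copy on every available wireless network.
    return ranked
```

A scheduler built this way would feed RTT samples into each PathEstimate and call choose_paths per application write; RAVEN additionally cancels redundant copies that become unnecessary once one copy completes, which its in-kernel implementation enables and this sketch does not attempt.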
Award ID(s):
1717064
PAR ID:
10113923
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ACM MobiCom 2018
Page Range / eLocation ID:
557 to 572
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Rapid delay variations in today’s access networks impair the QoE of low-latency, interactive applications, such as video conferencing. To tackle this problem, we propose Athena, a framework that correlates high-resolution measurements from Layer 1 to Layer 7 to remove the fog from the window through which today’s video-conferencing congestion-control algorithms see the network. This cross-layer view of the network empowers the networking community to revisit and re-evaluate their network designs and application scheduling and rate-adaptation algorithms in light of the complex, heterogeneous networks that are in use today, paving the way for network-aware applications and application-aware networks. 
  2. Container networking, which provides connectivity among containers on multiple hosts, is crucial to building and scaling container-based microservices. While overlay networks are widely adopted in production systems, they cause significant performance degradation in both throughput and latency compared to physical networks. This paper seeks to understand the bottlenecks of in-kernel networking when running container overlay networks. Through profiling and code analysis, we find that a prolonged data path, due to packet transformation in overlay networks, is the culprit of the performance loss. Furthermore, existing scaling techniques in the Linux network stack are ineffective for parallelizing the prolonged data path of a single network flow. We propose FALCON, a fast and balanced container networking approach to scale the packet processing pipeline in overlay networks. FALCON pipelines the software interrupts associated with the different network devices of a single flow across multiple cores, thereby preventing the serialized execution of many software interrupts from overloading a single core. FALCON further supports multiple network flows by effectively multiplexing and balancing the software interrupts of different flows among the available cores. We have developed a prototype of FALCON in Linux. Our evaluation with both micro-benchmarks and real-world applications demonstrates the effectiveness of FALCON, with significantly improved performance (by 300% for web serving) and reduced tail latency (by 53% for data caching). (A toy sketch of this per-flow stage placement appears after this list.)
  3. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Because this deprives them of auditory cues that would otherwise warn of danger, they are at much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to detect and locate cars from their honks, engine and tire noises, and to warn pedestrians of imminent danger from approaching vehicles. We demonstrate that a segmented architecture, consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, provides early danger detection in real time from distances of up to 60 m, with near-100% precision in vehicle detection and low-latency alerts to the user.
  4. Fog computing has been advocated as an enabling technology for computationally intensive services in connected smart vehicles. Most existing works focus on analyzing and optimizing the queueing and workload-processing latencies, ignoring the fact that the access latency between vehicles and fog/cloud servers can sometimes dominate the end-to-end service latency. This motivates the work in this paper, where we report a five-month urban measurement study of the wireless access latency between a connected vehicle and a fog computing system supported by commercially available multi-operator LTE networks. We propose AdaptiveFog, a novel framework for autonomous and dynamic switching between different LTE operators that implement fog/cloud infrastructure. The main objective is to maximize the service confidence level, defined as the probability that the tolerable latency threshold for each supported type of service can be guaranteed. AdaptiveFog has been implemented as a smartphone app running in a moving vehicle; the app periodically measures the round-trip time between the vehicle and the fog/cloud servers. An empirical spatial statistics model is established to characterize the spatial variation of the latency across the main driving routes of the city. To quantify the performance difference between LTE networks, we introduce the weighted Kantorovich-Rubinstein (K-R) distance. An optimal policy is derived for the vehicle to dynamically switch between LTE operators' networks while driving. Extensive analysis and simulation are performed on our latency measurement dataset. Our results show that AdaptiveFog achieves around 30% and 50% improvement in the confidence level of fog and cloud latency, respectively. (A minimal sketch of the confidence-level computation appears after this list.)
  5. Kernel bypass systems have demonstrated order-of-magnitude improvements in throughput and tail latency for network-intensive applications relative to traditional operating systems (OSes). To achieve such performance, however, they rely on dedicated resources (e.g., spinning cores, pinned memory) and require application rewriting. This is unattractive to cloud operators because they aim to pack applications densely, and rewriting cloud software requires a massive investment of valuable developer time. For both reasons, kernel bypass, as it exists, is impractical for the cloud. In this paper, we show these compromises are not necessary to unlock the full benefits of kernel bypass. We present Junction, the first kernel bypass system that can pack thousands of instances on a machine while providing compatibility with unmodified Linux applications. Junction achieves high density through several advanced NIC features that reduce pinned memory and the overhead of monitoring large numbers of queues. It maintains compatibility with minimal overhead through optimizations that exploit a shared address space with the application. Junction scales to 19–62× more instances than existing kernel bypass systems and achieves similar or better performance without code changes. Furthermore, Junction delivers significant performance benefits to applications previously unsupported by kernel bypass, including those that depend on runtime systems such as Go, Java, Node, and Python. Compared to native Linux, Junction increases throughput by 1.6–7.0× while using 1.2–3.8× fewer cores across seven applications.
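Item 2's FALCON pipelines the software interrupts of one flow's chain of virtual network devices across several cores and balances the interrupts of many flows over the available cores. The sketch below is only a toy illustration of that placement idea, not the in-kernel implementation: the stage names, the least-loaded-core choice, and the balancing rule are assumptions made for the example.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Pipeline stages a packet of one flow traverses in a container overlay
# network (names are illustrative: physical NIC receive, overlay
# decapsulation, then the container's virtual ethernet device).
STAGES = ["phys_rx", "overlay_decap", "veth_rx"]


def assign_stages(flows: List[str], cores: List[int]) -> Dict[Tuple[str, str], int]:
    """Map each (flow, stage) pair to a core so that stages of the same flow
    run on different cores (pipelining) and total load is spread across
    cores (balancing across flows)."""
    load = defaultdict(int)  # interrupt-handling work placed on each core
    placement: Dict[Tuple[str, str], int] = {}
    for flow in flows:
        used = set()  # cores already serving a stage of this flow
        for stage in STAGES:
            # Prefer the least-loaded core not yet used by this flow,
            # falling back to the least-loaded core overall if necessary.
            candidates = [c for c in cores if c not in used] or cores
            core = min(candidates, key=lambda c: load[c])
            placement[(flow, stage)] = core
            load[core] += 1
            used.add(core)
    return placement


if __name__ == "__main__":
    flows = [f"flow{i}" for i in range(4)]
    for (flow, stage), core in sorted(assign_stages(flows, cores=[0, 1, 2, 3]).items()):
        print(f"{flow:>6} {stage:<14} -> core {core}")
```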
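Item 4 defines the service confidence level as the probability that the measured round-trip time stays under a per-service latency threshold, and switches LTE operators to maximize it. The snippet below sketches only that empirical computation plus a greedy switching rule; the sample data, the threshold, and the switching margin are illustrative assumptions, and it does not reproduce the paper's spatial model or the weighted Kantorovich-Rubinstein distance.

```python
from typing import Dict, List


def confidence_level(rtt_samples_ms: List[float], threshold_ms: float) -> float:
    """Empirical probability that RTT stays within the tolerable threshold."""
    if not rtt_samples_ms:
        return 0.0
    return sum(rtt <= threshold_ms for rtt in rtt_samples_ms) / len(rtt_samples_ms)


def pick_operator(samples: Dict[str, List[float]],
                  threshold_ms: float,
                  current: str,
                  switch_margin: float = 0.05) -> str:
    """Greedy switching rule: change operators only when another network's
    confidence level beats the current one by a clear margin (the margin is
    a hysteresis knob to avoid flapping between similar networks)."""
    scores = {op: confidence_level(rtts, threshold_ms) for op, rtts in samples.items()}
    best = max(scores, key=scores.get)
    if best != current and scores[best] > scores.get(current, 0.0) + switch_margin:
        return best
    return current


if __name__ == "__main__":
    # Hypothetical per-operator RTT measurements (ms) along a driving route.
    measured = {
        "operatorA": [42, 55, 48, 130, 61, 47],
        "operatorB": [70, 65, 72, 68, 80, 66],
    }
    # For a 100 ms interactive service, operatorA meets the threshold 5/6 of
    # the time and operatorB 6/6, so the policy switches to operatorB.
    print(pick_operator(measured, threshold_ms=100.0, current="operatorA"))
```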