

Title: Driving in the fog: Latency measurement, modeling, and optimization of LTE-based fog computing for smart vehicles
Fog computing has been advocated as an enabling technology for computationally intensive services in connected smart vehicles. Most existing works focus on analyzing and optimizing the queueing and workload-processing latencies, ignoring the fact that the access latency between vehicles and fog/cloud servers can sometimes dominate the end-to-end service latency. This motivates the work in this paper, where we report a five-month urban measurement study of the wireless access latency between a connected vehicle and a fog computing system supported by commercially available multi-operator LTE networks. We propose AdaptiveFog, a novel framework for autonomous and dynamic switching between different LTE operators that implement fog/cloud infrastructure. The main objective is to maximize the service confidence level, defined as the probability that the tolerable latency threshold for each supported type of service can be guaranteed. AdaptiveFog has been implemented as a smartphone app running in a moving vehicle. The app periodically measures the round-trip time between the vehicle and fog/cloud servers. An empirical spatial statistical model is established to characterize the spatial variation of the latency across the main driving routes of the city. To quantify the performance difference between different LTE networks, we introduce the weighted Kantorovich-Rubinstein (K-R) distance. An optimal policy is derived for the vehicle to dynamically switch between LTE operators' networks while driving. Extensive analysis and simulation are performed based on our latency measurement dataset. Our results show that AdaptiveFog achieves around 30% and 50% improvement in the confidence level of fog and cloud latency, respectively.
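The operator-selection idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the toy RTT samples, and the use of the unweighted 1-D Wasserstein-1 distance are all assumptions (the paper uses a weighted K-R distance computed over real measurement traces).

```python
# Hedged sketch of AdaptiveFog's operator-selection idea. The "service
# confidence level" is the probability that the measured RTT stays under
# the service's tolerable latency threshold.

def confidence_level(latencies_ms, threshold_ms):
    """P(latency <= threshold), estimated from measured RTT samples."""
    return sum(1 for x in latencies_ms if x <= threshold_ms) / len(latencies_ms)

def w1_distance(samples_a, samples_b):
    """Unweighted Kantorovich-Rubinstein (Wasserstein-1) distance between
    two equally sized 1-D empirical distributions: the mean absolute
    difference of the sorted samples (a standard closed form in 1-D)."""
    a, b = sorted(samples_a), sorted(samples_b)
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def pick_operator(samples_by_operator, threshold_ms):
    """Switch to the operator with the highest confidence level."""
    return max(samples_by_operator,
               key=lambda op: confidence_level(samples_by_operator[op],
                                               threshold_ms))

# Toy RTT measurements (ms) for two operators on one road segment.
samples = {
    "operator_A": [38, 42, 55, 61, 47, 90, 44, 41, 39, 120],
    "operator_B": [52, 54, 58, 60, 57, 55, 53, 59, 56, 61],
}
print(pick_operator(samples, threshold_ms=65))  # operator_B (10/10 vs 8/10)
print(w1_distance(samples["operator_A"], samples["operator_B"]))
```

The paper's weighted K-R distance would additionally emphasize the part of the latency distribution near the service threshold; the plain W-1 form above is only the unweighted baseline.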
Award ID(s):
1813401 1822071 1731164
NSF-PAR ID:
10105602
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE SECON 2019 Conference
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In recent years, the addition of billions of Internet of Things (IoT) devices has spawned a massive demand for computing services near the edge of the network. Due to latency, limited mobility, and location awareness, cloud computing is not capable enough to serve these devices. As a result, the focus is shifting towards distributed platform services that place ample computing power near the edge of the network. Thus, paradigms such as Fog and Edge computing are gaining attention from researchers as well as business stakeholders. Fog computing is a new computing paradigm that places computing nodes between the Cloud and the end user to reduce latency and increase availability. As an emerging technology, Fog computing also brings new security challenges for stakeholders to solve. Before designing security models for Fog computing, it is better to understand the existing threats to Fog computing. In this regard, a thorough threat model can significantly help to identify these threats. Threat modeling is a sophisticated engineering process by which a computer-based system is analyzed to discover security flaws. In this paper, we applied two popular security threat modeling processes - CIAA and STRIDE - to identify and analyze attackers, their capabilities and motivations, and a list of potential threats in the context of Fog computing. We posit that such a systematic and thorough discussion of a threat model for Fog computing will help security researchers and professionals to design secure and reliable Fog computing systems.
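As a concrete, purely illustrative example of the STRIDE process this abstract describes, a threat table can be sketched as a mapping from fog-computing assets to applicable STRIDE categories. The assets and threat assignments below are hypothetical examples, not entries from the paper's actual model:

```python
# The six STRIDE threat categories (a standard taxonomy).
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Hypothetical threat table: fog asset -> applicable STRIDE letters.
threats = {
    "fog_node": ["S", "T", "D", "E"],
    "iot_device": ["S", "T", "I", "D"],
    "fog_to_cloud_link": ["T", "I", "D"],
}

def expand(asset):
    """Spell out the STRIDE threats recorded for one asset."""
    return [STRIDE[code] for code in threats[asset]]

print(expand("fog_to_cloud_link"))
# ['Tampering', 'Information disclosure', 'Denial of service']
```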
    The development of communication technologies in edge computing has fostered progress across various applications, particularly those involving vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. Enhanced infrastructure has improved data transmission network availability, promoting better connectivity and data collection from IoT devices. A notable IoT application is the Intelligent Transportation System (ITS). IoT technology integration enables ITS to access a variety of data sources, including those pertaining to weather and road conditions. Real-time data on factors like temperature, humidity, precipitation, and friction contribute to improved decision-making models. Traditionally, these models are trained at the cloud level, which can lead to communication and computational delays. However, substantial advancements in cloud-to-edge computing have reduced communication delays and increased computational distribution, resulting in faster response times. Despite these benefits, the developments still largely depend on central cloud sources for computation due to restrictions in computational and storage capacity at the edge. This reliance leads to duplicated data transfers between edge servers and cloud application servers. Additionally, edge computing is further complicated by data models predominantly based on data heuristics. In this paper, we propose a system that streamlines edge computing by allowing computation at the edge, thus reducing latency in responding to requests across distributed networks. Our system is also designed to facilitate quick updates of predictions, ensuring vehicles receive more pertinent safety-critical model predictions. We will demonstrate the construction of our system for V2V and V2I applications, incorporating cloud-ware, middleware, and vehicle-ware levels.
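The latency argument in this abstract (serving model predictions at the edge avoids the round trip to the cloud application server and the duplicated transfers it causes) can be illustrated with a toy sketch. The cache-lookup logic, delay constants, and key names are all assumptions, not the paper's system:

```python
# Hypothetical one-hop (edge) vs. backhaul (cloud) delays, in ms.
EDGE_RTT_MS, CLOUD_RTT_MS = 5, 80

def serve_prediction(edge_cache, key):
    """Answer from the edge cache when possible; fall back to the cloud
    and cache the result for subsequent vehicles on the same segment."""
    if key in edge_cache:
        return edge_cache[key], EDGE_RTT_MS
    prediction = f"cloud_model({key})"  # placeholder for cloud inference
    edge_cache[key] = prediction
    return prediction, CLOUD_RTT_MS

cache = {}
_, first = serve_prediction(cache, "road_friction/segment_42")
_, second = serve_prediction(cache, "road_friction/segment_42")
print(first, second)  # 80 5: only the first request pays the cloud delay
```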
    Next-generation distributed computing networks (e.g., edge and fog computing) enable the efficient delivery of delay-sensitive, compute-intensive applications by facilitating access to computation resources in close proximity to end users. Many of these applications (e.g., augmented/virtual reality) are also data-intensive: in addition to user-specific (live) data streams, they require access to shared (static) digital objects (e.g., an image database) to complete the required processing tasks. When required objects are not available at the servers hosting the associated service functions, they must be fetched from other edge locations, incurring additional communication cost and latency. In such settings, overall service delivery performance shall benefit from jointly optimized decisions around (i) routing paths and processing locations for live data streams, together with (ii) cache selection and distribution paths for associated digital objects. In this paper, we address the problem of dynamic control of data-intensive services over edge cloud networks. We characterize the network stability region and design the first throughput-optimal control policy that coordinates processing and routing decisions for both live and static data streams. Numerical results demonstrate the superior performance (e.g., throughput, delay, and resource consumption) obtained via the novel multi-pipeline flow control mechanism of the proposed policy, compared with state-of-the-art algorithms that lack integrated stream processing and data distribution control.
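The throughput-optimal policy in this abstract builds on the classic max-weight/backpressure principle: in each time slot, serve the link with the largest queue-backlog differential. A minimal single-commodity sketch follows, with illustrative node names and queue lengths; the paper's multi-pipeline policy, which jointly handles live streams, static objects, and caching, is considerably richer:

```python
queues = {"vehicle": 9, "edge": 4, "cloud": 0}    # packets waiting per node
links = [("vehicle", "edge"), ("edge", "cloud")]  # feasible transmissions

def max_weight_link(queues, links):
    """Pick the link with the largest positive backlog differential."""
    best, best_w = None, 0
    for u, v in links:
        w = queues[u] - queues[v]
        if w > best_w:
            best, best_w = (u, v), w
    return best

def serve(queues, link):
    """Transmit one packet over the chosen link, if any."""
    if link is not None:
        u, v = link
        queues[u] -= 1
        queues[v] += 1

# vehicle->edge has differential 5, edge->cloud has 4, so the former wins.
serve(queues, max_weight_link(queues, links))
print(queues)  # {'vehicle': 8, 'edge': 5, 'cloud': 0}
```

Repeating this rule every slot stabilizes all queues for any arrival rate inside the network's stability region, which is the sense in which such policies are throughput-optimal.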
  5. The proliferation of innovative mobile services such as augmented reality, networked gaming, and autonomous driving has spurred a growing need for low-latency access to computing resources that cannot be met solely by existing centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an effective solution to meet the demand for low-latency services by enabling the execution of computing tasks at the network-periphery, in proximity to end-users. While a number of recent studies have addressed the problem of determining the execution of service tasks and the routing of user requests to corresponding edge servers, the focus has primarily been on the efficient utilization of computing resources, neglecting the fact that non-trivial amounts of data need to be stored to enable service execution, and that many emerging services exhibit asymmetric bandwidth requirements. To fill this gap, we study the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints. We show that this problem generalizes several problems in literature and propose an algorithm that achieves close-to-optimal performance using randomized rounding. Evaluation results demonstrate that our approach can effectively utilize the available resources to maximize the number of requests served by low-latency edge cloud servers. 
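Randomized rounding, the technique named in this abstract, converts a fractional LP solution into integral decisions by interpreting each fractional value as a probability. A generic sketch follows; the variable names and LP values are hypothetical, and the paper's actual scheme additionally enforces the multidimensional storage-computation-communication constraints:

```python
import random

def round_placement(fractional, rng):
    """Randomly round fractional placement variables x[s] in [0, 1] to 0/1
    decisions: place service s at the edge with probability x[s]."""
    return {s: 1 if rng.random() < x else 0 for s, x in fractional.items()}

# Hypothetical fractional solution from the LP relaxation.
lp_solution = {"ar_service": 0.9, "gaming": 0.6, "maps": 0.1}
placement = round_placement(lp_solution, random.Random(7))
print(placement)
```

In expectation the rounded solution preserves the LP objective, and concentration arguments are what let such algorithms prove the close-to-optimal guarantees the abstract refers to.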