Mobile devices supporting the "Internet of Things" (IoT) often have limited computation, battery energy, and storage capacity, which makes it difficult for them to support resource-intensive applications involving virtual reality (VR), augmented reality (AR), multimedia delivery, and artificial intelligence (AI), all of which may demand high bandwidth, low response latency, and substantial computational power. Edge cloud, or edge computing, is an emerging technology that addresses the limitations of the centralized-only cloud computing model by moving computation and storage resources closer to the devices in support of the above-mentioned applications. Making this work requires efficient coordination mechanisms and "offloading" algorithms that allow mobile devices and the edge cloud to cooperate smoothly. In this survey paper, we investigate the key issues, methods, and state-of-the-art efforts related to the offloading problem. We adopt a new characterizing model to study the whole process of offloading from mobile devices to the edge cloud. Through comprehensive discussion, we aim to draw an overall "big picture" of existing efforts and research directions. Our study also indicates that offloading algorithms in the edge cloud have demonstrated profound potential for future technology and application development.
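The abstract does not detail the survey's characterizing model, but the core decision every offloading algorithm makes can be illustrated with a minimal sketch: compare the estimated time/energy cost of running a task locally against the cost of shipping it to the edge (upload plus edge execution). All class names, parameters, and the 50/50 time-energy weighting below are assumptions for illustration, not the survey's formulation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    input_bits: float        # data to upload if offloaded
    cycles: float            # CPU cycles required

@dataclass
class Device:
    cpu_hz: float            # local CPU speed
    energy_per_cycle: float  # joules per cycle on the device

@dataclass
class EdgeLink:
    uplink_bps: float        # achievable uplink rate
    tx_power_w: float        # radio transmit power
    edge_cpu_hz: float       # CPU speed allocated at the edge server

def should_offload(task: Task, dev: Device, link: EdgeLink,
                   w_time: float = 0.5, w_energy: float = 0.5) -> bool:
    """Return True if the weighted time/energy cost of offloading is lower
    than local execution (a common baseline formulation in the offloading
    literature; the weights here are purely illustrative)."""
    # Local execution: time and energy spent on the device itself.
    t_local = task.cycles / dev.cpu_hz
    e_local = task.cycles * dev.energy_per_cycle

    # Offloading: upload time plus edge execution time; the device spends
    # energy only while transmitting (result download neglected here).
    t_tx = task.input_bits / link.uplink_bps
    t_off = t_tx + task.cycles / link.edge_cpu_hz
    e_off = link.tx_power_w * t_tx

    cost_local = w_time * t_local + w_energy * e_local
    cost_off = w_time * t_off + w_energy * e_off
    return cost_off < cost_local

# Example: a 2 MB, 1-gigacycle task on a 1 GHz phone with a 20 Mbps uplink.
task = Task(input_bits=2e6 * 8, cycles=1e9)
dev = Device(cpu_hz=1e9, energy_per_cycle=1e-9)
link = EdgeLink(uplink_bps=20e6, tx_power_w=0.5, edge_cpu_hz=8e9)
print(should_offload(task, dev, link))  # True: offloading is cheaper here
```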
A Case for Elevating the Edge to be a Peer of the Cloud
Over the last 20 years, mobile computing has evolved to encompass a wide array of increasingly data-rich applications. Many of these applications were enabled by the Cloud computing revolution, which commoditized server hardware to support vast numbers of mobile users from a few large, centralized data centers. Today, mobile's next stage of evolution is spurred by interest in emerging technologies such as Augmented and Virtual Reality (AR/VR), the Internet of Things (IoT), and Autonomous Vehicles. New applications relying on these technologies often demand very low response latency, consume substantial bandwidth, and need to preserve privacy. Meeting all of these requirements from the Cloud alone is challenging for several reasons. First, the amount of data generated by devices can quickly saturate the bandwidth of backhaul links to the Cloud. Second, achieving low-latency responses for decisions on sensed data becomes increasingly difficult the farther mobile devices are from centralized Cloud data centers. Finally, regulatory or privacy restrictions on the data generated by devices may require that such data be kept locally. For these reasons, enabling next-generation technologies requires us to reconsider the current trend of serving applications from the Cloud alone.
- Award ID(s): 1909346
- PAR ID: 10298044
- Date Published:
- Journal Name: GetMobile: Mobile Computing and Communications
- Volume: 24
- Issue: 3
- ISSN: 2375-0529
- Page Range / eLocation ID: 14 to 19
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The networking industry is offering new services that leverage recent technological advances in connectivity, storage, and computing, such as mobile communications and edge computing. In this regard, extended reality, a term encompassing virtual reality, augmented reality, and mixed reality, can provide unprecedented user experiences and pioneering service opportunities such as live concerts, sports, and other events; interactive gaming and entertainment; and immersive education, training, and demos. These services require high-bandwidth, low-latency, and reliable connections, and are supported by next-generation ultra-reliable and low-latency communications in the vision of 6G mobile communication systems. In this work, we devise a novel scheme, called backup from different data centers with multicast and adaptive bandwidth provisioning, to admit reliable, low-latency, and high-bandwidth extended reality live streams in next-generation networks. We consider network services where content is non-cacheable and investigate how backup services can be offered by different data centers with multicast and adaptive bandwidth provisioning. Our proposed service-provisioning scheme provides protection not only against link failures in the physical network but also against computing and storage failures in data centers. We develop scalable algorithms for the service-provisioning scheme and evaluate their performance on various complex network instances in a dynamic environment. Numerical results show that, compared to conventional service-provisioning schemes such as those seeking backup services from the same data center, our proposed scheme efficiently utilizes network resources, ensures higher reliability, and guarantees low latency; hence, it is highly suitable for extended reality live streams.
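The abstract describes the scheme only at a high level; as a rough sketch of its central idea of provisioning a primary and a backup stream from different data centers, the snippet below admits a live extended-reality stream only if two distinct data centers can each satisfy its bandwidth, compute, and latency requirements, so that a single data-center failure does not interrupt the multicast. All names, fields, and the lowest-latency selection rule are assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

@dataclass
class DataCenter:
    name: str
    latency_ms: float            # latency from the DC to the multicast ingress
    free_bandwidth_mbps: float
    free_compute_units: int

def provision_xr_stream(dcs: List[DataCenter],
                        demand_mbps: float,
                        demand_compute: int,
                        latency_budget_ms: float) -> Optional[Tuple[str, str]]:
    """Pick a primary and a backup data center for a live XR stream.

    Both must meet the bandwidth/compute demand within the latency budget,
    and they must be *different* data centers so a single DC (compute or
    storage) failure does not break the stream. Returns None if the
    request cannot be admitted with protection.
    """
    feasible = [dc for dc in dcs
                if dc.free_bandwidth_mbps >= demand_mbps
                and dc.free_compute_units >= demand_compute
                and dc.latency_ms <= latency_budget_ms]
    if len(feasible) < 2:
        return None  # cannot protect the stream; reject admission
    feasible.sort(key=lambda dc: dc.latency_ms)
    primary, backup = feasible[0], feasible[1]
    # In the paper's "adaptive bandwidth" spirit, the backup's reserved
    # bandwidth could be kept reduced until a failure is detected.
    return primary.name, backup.name

# Example admission decision over three candidate data centers.
dcs = [DataCenter("dc-east", 8.0, 400, 10),
       DataCenter("dc-west", 12.0, 900, 20),
       DataCenter("dc-metro", 5.0, 300, 4)]
print(provision_xr_stream(dcs, demand_mbps=250, demand_compute=5,
                          latency_budget_ms=15))  # ('dc-east', 'dc-west')
```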
-
Interactive mobile applications like web browsing and gaming are known to benefit significantly from low latency networking, as applications communicate with cloud servers and other users' devices. Emerging mobile channel standards have not met these needs: 5G's general-purpose eMBB channel has much higher bandwidth than 4G but empirically offers little improvement for common latency-sensitive applications, while its ultra-low-latency URLLC channel is targeted at only specific applications with very low bandwidth requirements. We explore a different direction for wireless channel design to address the fundamental bandwidth-latency tradeoff: utilizing two channels -- one high bandwidth, one low latency -- simultaneously to improve performance of common Internet applications. We design DChannel, a fine-grained packet-steering scheme that takes advantage of these parallel channels to transparently improve application performance. With 5G channels, our trace-driven and live network experiments show that even though URLLC offers just 1% of the bandwidth of eMBB, using both channels can improve web page load time and responsiveness of common mobile apps by 16-40% compared to using exclusively eMBB. This approach may provide service providers important incentives to make low latency channels available for widespread use.
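The abstract does not spell out DChannel's steering policy (the paper estimates per-packet delivery time on each channel); the sketch below is only a simplified stand-in for the general idea of per-packet steering across a low-latency, low-capacity channel and a high-bandwidth one: small, latency-sensitive packets ride URLLC while its tiny capacity lasts, and bulk traffic stays on eMBB. The thresholds and budget are illustrative assumptions.

```python
from enum import Enum

class Channel(Enum):
    URLLC = "urllc"   # ultra-low latency, very low bandwidth
    EMBB = "embb"     # high bandwidth, higher latency

# Assumed values for illustration only.
URLLC_MAX_PACKET_BYTES = 200      # only small packets fit the low-rate channel
URLLC_BUDGET_BPS = 1_000_000      # ~1% of eMBB bandwidth, per the paper's setting

class Steerer:
    def __init__(self) -> None:
        # Bytes sent on URLLC in the current one-second window
        # (the periodic window reset is omitted for brevity).
        self.urllc_bytes_in_window = 0

    def steer(self, packet_len: int, latency_sensitive: bool) -> Channel:
        """Per-packet steering: small, latency-sensitive packets go to URLLC
        while its rate budget lasts; everything else goes to eMBB."""
        fits = packet_len <= URLLC_MAX_PACKET_BYTES
        within_budget = (self.urllc_bytes_in_window + packet_len) * 8 <= URLLC_BUDGET_BPS
        if latency_sensitive and fits and within_budget:
            self.urllc_bytes_in_window += packet_len
            return Channel.URLLC
        return Channel.EMBB

# Example: a small HTTP request header vs. a full-size data segment.
s = Steerer()
print(s.steer(80, latency_sensitive=True))     # Channel.URLLC
print(s.steer(1400, latency_sensitive=False))  # Channel.EMBB
```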
-
Recent advancements in cloud computing have driven rapid development in data-intensive smart city applications by providing near-real-time processing and storage scalability. This has resulted in efficient centralized route planning services such as Google Maps, upon which millions of users rely. Route planning algorithms have progressed in line with the cloud environments in which they run. Current state-of-the-art solutions assume a shared-memory model, so deployment is limited to multiprocessing environments in data centers. Because these services are centralized, latency has become the limiting factor for future technologies such as autonomous cars. Additionally, these services require access to outside networks, raising availability concerns in disaster scenarios. Therefore, this paper provides a decentralized route planning approach for private fog networks. We leverage recent advances in federated learning to collaboratively learn shared prediction models online and investigate our approach with a simulated case study from a mid-size U.S. city.
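The abstract does not detail the collaborative training procedure; as a minimal sketch of the federated-averaging idea it alludes to, the snippet below has each fog node refine a shared linear travel-time model on its own data and then aggregates only the model weights, never the raw local observations. The model form, node data, and sample-size weighting are assumptions for illustration.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, epochs: int = 5) -> np.ndarray:
    """One fog node refines the shared linear travel-time model on its
    locally observed (road-feature, travel-time) pairs via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_average(updates: list, sizes: list) -> np.ndarray:
    """Aggregate node updates, weighting each by its local sample count
    (the FedAvg rule); only weights leave a node, raw data stays local."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy round: three fog nodes, each holding private local observations.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
nodes = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
updates = [local_update(global_w, X, y) for X, y in nodes]
global_w = federated_average(updates, [len(y) for _, y in nodes])
print(global_w)
```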