

This content will become publicly available on March 10, 2026

Title: Tiered Cloud Routing: Methodology, Latency, and Improvement
Large cloud providers including AWS, Azure, and Google Cloud offer two tiers of network service to their customers: one class carries a customer's traffic over the provider's private wide area network (WAN-transit) as much as possible, while the other uses the public internet (inet-transit). Little is known about how each cloud provider configures its network to offer these different transit services, how well the services work, and whether their quality can be further improved. In this work, we conduct a large-scale study to answer these questions. Using RIPE Atlas probes as vantage points, we explore how traffic enters and leaves each cloud's WAN. In addition, we measure the access latency of each cloud's WAN-transit and inet-transit services and compare it with that of an emulated performance-based routing strategy. Our study shows that despite the cloud providers' intention to carry customers' traffic on their WANs to the maximum extent possible, for about 12% (Azure) and 13% (Google) of our vantage points, traffic exits the cloud WAN early, at cloud edges more than 5000 km away from the vantage points' nearest cloud edges. In contrast, more than 84% (AWS), 78% (Azure), and 81% (Google) of vantage points enter a cloud WAN within a 500 km radius of their respective locations. Moreover, we find that cloud providers employ different routing strategies to implement the inet-transit service, leading to transit policies that may deviate from their advertised service descriptions. Finally, we find that a performance-based routing strategy can significantly reduce latency in all three clouds for 4% to 85% of vantage-point and cloud-region pairs.
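The abstract's distance thresholds (a 500 km entry radius, early exits more than 5000 km away) rest on great-circle distance between a vantage point and candidate cloud edge locations. A minimal sketch of that computation, using the haversine formula with made-up coordinates (the probe and edge sites below are illustrative, not locations from the study):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_edge_km(probe, edges):
    """Distance from a vantage point to its closest cloud edge."""
    return min(haversine_km(*probe, *edge) for edge in edges)

# Hypothetical example: a probe in Frankfurt and two cloud edge sites.
probe = (50.11, 8.68)                    # Frankfurt
edges = [(52.52, 13.40), (48.86, 2.35)]  # Berlin, Paris
d = nearest_edge_km(probe, edges)        # ~424 km, i.e. within the 500 km radius
```

An early exit would show up as the cloud-side endpoint of a traceroute resolving to an edge whose distance from the probe far exceeds `nearest_edge_km`.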
Award ID(s):
2225448
PAR ID:
10658013
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the ACM on Measurement and Analysis of Computing Systems
Volume:
9
Issue:
1
ISSN:
2476-1249
Page Range / eLocation ID:
1 to 41
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Serverless computing is an emerging paradigm in which an application's resource provisioning and scaling are managed by third-party services. Examples include AWS Lambda, Azure Functions, and Google Cloud Functions. Behind these services' easy-to-use APIs are opaque, complex infrastructure and management ecosystems. Taking on the viewpoint of a serverless customer, we conduct the largest measurement study to date, launching more than 50,000 function instances across these three services, in order to characterize their architectures, performance, and resource-management efficiency. We explain how the platforms isolate the functions of different accounts, using either virtual machines or containers, a choice with important security implications. We characterize performance in terms of scalability, cold-start latency, and resource efficiency, with highlights including that AWS Lambda adopts a bin-packing-like strategy to maximize VM memory utilization, that severe contention between functions can arise in AWS and Azure, and that Google had bugs that allowed customers to use resources for free.
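The "bin-packing-like strategy" described in the abstract above can be illustrated with a first-fit heuristic that packs function memory reservations onto VMs. The VM capacity and workload numbers below are hypothetical, not measurements from the study:

```python
def first_fit_pack(reservations_mb, vm_capacity_mb):
    """First-fit bin packing: place each function's memory reservation on the
    first VM with enough free capacity, opening a new VM when none fits."""
    vms = []        # remaining free capacity on each VM
    placement = []  # index of the VM each function landed on
    for need in reservations_mb:
        for i, free in enumerate(vms):
            if free >= need:
                vms[i] -= need
                placement.append(i)
                break
        else:
            vms.append(vm_capacity_mb - need)
            placement.append(len(vms) - 1)
    return placement, len(vms)

# Hypothetical: six function instances packed onto 3008 MB VMs.
placement, n_vms = first_fit_pack([1024, 512, 2048, 256, 1536, 128], 3008)
# placement == [0, 0, 1, 0, 2, 0]; only 3 VMs needed
```

Packing many small reservations onto shared VMs maximizes memory utilization, but it is also exactly the co-location that makes the cross-function contention reported in the abstract possible.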
  2. Serverless computing services are offered by major cloud service providers such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure. The primary purpose of these services is to offer efficiency and scalability in modern software development and IT operations while reducing overall costs and operational complexity. However, prospective customers often question which serverless service will best meet their organizational and business needs. This study analyzed the features, usability, and performance of three serverless cloud computing platforms: Google Cloud's Cloud Run, Amazon Web Services' App Runner, and Microsoft Azure's Container Apps. The analysis was conducted with a containerized mobile application designed to track real-time bus locations for San Antonio public buses on specific routes and provide estimated arrival times for selected bus stops. The study evaluated various system-related features, including service configuration, pricing, and memory and CPU capacity, along with performance metrics such as container latency, distance matrix API response time, and CPU utilization for each service. The results of the analysis revealed that Google's Cloud Run demonstrated better performance and usability than AWS's App Runner and Microsoft Azure's Container Apps. Cloud Run exhibited lower latency and faster response times for distance matrix queries. These findings provide valuable insights for selecting an appropriate serverless cloud service for similar containerized web applications.
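Comparisons like the one in the abstract above typically summarize many latency samples per service into median and tail statistics. A small sketch using a nearest-rank percentile; the sample values are made up for illustration, not the study's measurements:

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile of a list of latency samples (milliseconds)."""
    xs = sorted(samples_ms)
    k = max(0, math.ceil(p / 100 * len(xs)) - 1)
    return xs[k]

# Hypothetical per-request container latencies for two services.
cloud_run = [38, 41, 40, 39, 120, 42]
app_runner = [55, 61, 58, 60, 210, 57]

p50 = percentile(cloud_run, 50)   # median: 40 ms
p95 = percentile(cloud_run, 95)   # tail:   120 ms
```

Reporting both the median and a high percentile matters for serverless platforms, since occasional cold starts inflate the tail far more than the typical request.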
  3. With hardware costs becoming cheaper by the day and with the focus of industry giants, cloud computing has exploded over the last 15 years. With the power to create, use, and destroy virtual machines in the cloud at the click of a mouse, industries have started moving their core applications to the cloud, reducing the hassle of maintaining hardware themselves. Tech giants like Amazon, Microsoft, and Google lead the market, and the fierce competition between them has driven astonishing innovation. With so many players in the cloud market, it is essential for cloud users to know how the services provided by these cloud providers perform against each other. In this paper we evaluate the performance of the well-known OLTP benchmark TPC-C on these cloud providers. We observe that Amazon's AWS performs better than Microsoft Azure and Google Cloud Platform in terms of transactions/orders per second and I/O reads/writes. We extend the comparison with respect to transaction throughput, database throughput, and machine throughput.
  4. Today's problems require a plethora of analytics tasks to tackle state-of-the-art computational challenges in areas including health care, automotive, banking, natural language processing, image detection, and many other data analytics tasks. Sharing existing analytics functions allows reuse and reduces overall effort. However, integrating deployment frameworks in the age of cloud computing is often out of reach for domain experts. Simple frameworks are needed that allow even non-experts to deploy and host services in the cloud. To avoid vendor lock-in, we require a generalized, composable analytics service framework that allows users to integrate their own services and those offered in clouds, not only by one, but by many cloud compute and service providers. We report on work we conducted to provide a service integration framework for composing generalized analytics frameworks on multiple cloud providers, which we call our Generalized AI Service (GAS) Generator. We demonstrate the framework's usability by showcasing useful analytics workflows on various cloud providers, including AWS, Azure, and Google, as well as edge computing IoT devices. The examples are based on Scikit-learn so they can be used in educational settings, replicated, and expanded upon. Benchmarks are used to compare the different services and showcase general replicability.
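The "composable analytics service framework" idea in the abstract above can be sketched as a registry of named analytics functions that a workflow chains together. The registry API and function names below are purely hypothetical, not the GAS Generator's actual interface:

```python
# Hypothetical sketch of a composable analytics registry; names are illustrative.
REGISTRY = {}

def service(name):
    """Register a callable as a named analytics service."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

def compose(*names):
    """Chain registered services into one workflow, applied left to right."""
    def pipeline(data):
        for n in names:
            data = REGISTRY[n](data)
        return data
    return pipeline

@service("normalize")
def normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

@service("mean")
def mean(xs):
    return sum(xs) / len(xs)

workflow = compose("normalize", "mean")
result = workflow([2.0, 4.0, 6.0])  # normalize to [0, 0.5, 1], then average: 0.5
```

In a multi-cloud setting, each registry entry could resolve to a local function or a remote endpoint on a different provider, which is what makes the composition provider-agnostic.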
  5. Serverless computing is a new cloud programming and deployment paradigm that is receiving widespread uptake. Serverless offerings such as Amazon Web Services (AWS) Lambda, Google Functions, and Azure Functions automatically execute simple functions uploaded by developers, in response to cloud-based event triggers. The serverless abstraction greatly simplifies integration of concurrency and parallelism into cloud applications, and enables deployment of scalable distributed systems and services at very low cost. Although a significant first step, the serverless abstraction requires tools that software engineers can use to reason about, debug, and optimize their increasingly complex, asynchronous applications. Toward this end, we investigate the design and implementation of GammaRay, a cloud service that extracts causal dependencies across functions and through cloud services, without programmer intervention. We implement GammaRay for AWS Lambda and evaluate the overheads that it introduces for serverless micro-benchmarks and applications written in Python.
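To make the causal-dependency idea in the abstract above concrete, here is an illustrative sketch (not GammaRay's actual mechanism, which works without programmer intervention): a wrapper that threads a causal-context ID through event payloads so a downstream invocation can be linked back to the request that triggered it. The handler name and payload fields are hypothetical:

```python
import uuid

def trace_causality(handler):
    """Illustrative wrapper: attach a causal-context ID to the event payload
    and record which function handled it, so chained invocations that reuse
    the payload can be stitched into one causal trace."""
    def wrapped(event, context=None):
        trace = event.setdefault("_trace",
                                 {"root": str(uuid.uuid4()), "hops": []})
        trace["hops"].append(getattr(context, "function_name", "local"))
        return handler(event, context)
    return wrapped

@trace_causality
def resize_image(event, context=None):
    # A real handler would pass event["_trace"] along when it invokes the
    # next function or writes to a cloud service, preserving the chain.
    return {"status": "ok", "_trace": event["_trace"]}

out = resize_image({"bucket": "photos", "key": "a.jpg"})
```

GammaRay's contribution is obtaining this kind of cross-function, cross-service linkage automatically; the decorator above only shows what the recovered causal context might contain.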