The goal of this short document is to explain why recent developments in the Internet's infrastructure are problematic. As context, we note that the Internet was originally designed to provide a simple universal service - global end-to-end packet delivery - on which a wide variety of end-user applications could be built. The early Internet supported this packet-delivery service via an interconnected collection of commercial Internet Service Providers (ISPs) that we will refer to collectively as the public Internet. The Internet has fulfilled its packet-delivery mission far beyond all expectations and is now the dominant global communications infrastructure. By providing a level playing field on which new applications could be deployed, the Internet has enabled a degree of innovation that no one could have foreseen. To improve performance for some common applications, enhancements such as caching (as in content-delivery networks) have been gradually added to the Internet. The resulting performance improvements are so significant that such enhancements are now effectively necessary to meet current content delivery demands. Despite these tangible benefits, this document argues that the way these enhancements are currently deployed seriously undermines the sustainability of the public Internet and could lead to an Internet infrastructure that reaches fewer people and is largely concentrated among only a few large-scale providers. We wrote this document because we fear that these developments are now decidedly tipping the Internet's playing field towards those who can deploy these enhancements at massive scale, which in turn will limit the degree to which the future Internet can support unfettered innovation. This document begins by explaining our concerns but goes on to articulate how this unfortunate fate can be avoided. For those who seek more depth, a separate addendum provides further detail.
While prior work has explored many proposed datacenter designs, only two designs, Clos-based and expander-based, are generally considered practical because they can scale using commodity switching chips. Prior work has used two different metrics, bisection bandwidth and throughput, for evaluating these topologies at scale. Little is known, theoretically or practically, about how these metrics relate to each other. Exploiting characteristics of these topologies, we prove an upper bound on their throughput, then show that this upper bound better estimates worst-case throughput than all previously proposed throughput estimators and scales better than most of them. Using this upper bound, we show that for expander-based topologies, unlike Clos, beyond a certain size of the network, no topology can have full throughput, even if it has full bisection bandwidth; in fact, even relatively small expander-based topologies fail to achieve full throughput. We conclude by showing that using throughput to evaluate datacenter performance instead of bisection bandwidth can alter conclusions in prior work about datacenter cost, manageability, and reliability.
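The abstract turns on the gap between bisection bandwidth and worst-case throughput. As a rough illustration of why the two metrics can diverge, the sketch below (not the paper's bound or estimator) brute-forces both quantities for a toy topology, assuming unit link capacities and uniform all-to-all demand; the graph choice, sizes, and function names are illustrative assumptions.

```python
import itertools
import networkx as nx

def cut_edges(G, side):
    """Number of unit-capacity edges crossing the (side, rest) cut."""
    return sum(1 for u, v in G.edges if (u in side) != (v in side))

def bisection_bandwidth(G):
    """Brute-force bisection bandwidth: minimum cut capacity over all
    balanced bipartitions (feasible only for toy graphs)."""
    nodes = list(G.nodes)
    n = len(nodes)
    best = float("inf")
    # Pin nodes[0] to one side so each bipartition is counted once.
    for rest in itertools.combinations(nodes[1:], n // 2 - 1):
        side = set(rest) | {nodes[0]}
        best = min(best, cut_edges(G, side))
    return best

def cut_throughput_upper_bound(G):
    """Simple cut-based upper bound on per-pair throughput under uniform
    all-to-all traffic: for any cut, throughput <= cut capacity divided by
    the demand crossing it (unit demand per unordered pair). This is a
    generic bound for illustration, not the paper's estimator."""
    nodes = list(G.nodes)
    n = len(nodes)
    best = float("inf")
    for k in range(1, n):
        for rest in itertools.combinations(nodes[1:], k - 1):
            side = set(rest) | {nodes[0]}
            crossing_demand = len(side) * (n - len(side))
            best = min(best, cut_edges(G, side) / crossing_demand)
    return best

# Toy "expander-like" topology: a random 3-regular graph on 8 switches.
G = nx.random_regular_graph(d=3, n=8, seed=1)
print("bisection bandwidth:", bisection_bandwidth(G))
print("cut-based throughput upper bound per pair:", cut_throughput_upper_bound(G))
```

On graphs this small, enumerating cuts is trivial; the paper's contribution is precisely an upper bound that remains computable at realistic topology sizes, where brute-force enumeration is intractable, and that is tighter than prior throughput estimators.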