Routing strives to connect all of the Internet, but competing forces undermine universal connectivity: political pressure threatens routing fragmentation; architectural changes such as private clouds, carrier-grade NAT, and firewalls make connectivity conditional; and commercial disputes create partial reachability lasting days or years. This paper suggests that \emph{persistent, partial reachability is fundamental to the Internet} and an underexplored problem. We first \emph{derive a conceptual definition of the Internet core} based on connectivity, not authority. We identify \emph{peninsulas}, persistent partial connectivity, and \emph{islands}, where computers are partitioned from the Internet core. Second, we develop algorithms to observe each across the Internet and apply them to two existing measurement systems: Trinocular, where 6 locations observe 5M networks frequently, and RIPE Atlas, where 13k locations scan the DNS roots frequently. Cross-validation shows our findings are stable over \emph{three years of data} and consistent with as few as 3 geographically distributed observers. We validate peninsulas and islands against CAIDA Ark, showing good recall (0.94) and bounding precision between 0.42 and 0.82. Finally, our work has broad practical impact: we show that \emph{peninsulas are more common than Internet outages}. Factoring out peninsulas and islands as noise can \emph{improve existing measurement systems}; this ``noise'' is $5\times$ to $9.7\times$ larger than the operational events in RIPE's DNSmon. We show that most peninsula events are routing transients (45\%), but most peninsula-time (90\%) is due to a few (7\%) long-lived events. Our work helps inform Internet policy and governance: our neutral definition shows that no single country or organization can unilaterally control the Internet core.
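The peninsula/island distinction above can be illustrated with a minimal sketch (the function name and logic here are illustrative, not the paper's actual detection algorithms): a block reachable from every observer is fully connected, one reachable from no observer is an island, and one reachable from only some observers is a peninsula.

```python
def classify(reachable_from):
    """Classify a network block from per-observer reachability.

    reachable_from: dict mapping observer name -> bool (whether that
    observer can reach the block). Returns "reachable" (seen by all),
    "island" (seen by none, i.e. partitioned from the Internet core),
    or "peninsula" (seen by only some observers).
    """
    reached = sum(reachable_from.values())
    if reached == len(reachable_from):
        return "reachable"
    if reached == 0:
        return "island"
    return "peninsula"

# Hypothetical observations from three geographically distributed observers.
obs = {"us-west": True, "eu": False, "asia": True}
print(classify(obs))  # -> peninsula
```

Real detection must also separate peninsulas from ordinary outages and routing transients over time, which this toy snapshot classifier does not attempt.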
RIPE IPMAP ACTIVE GEOLOCATION: MECHANISM AND PERFORMANCE EVALUATION
Knowledge about the geographic locations of Internet routers and servers is highly valuable for research on various aspects of Internet structure, performance, economics, and security. Whereas databases for geolocation are commercially available and targeted mostly at end hosts, RIPE offers an open IPmap platform, including its single-radius engine, for geolocation of core Internet infrastructure. This paper introduces the research community to the IPmap single-radius engine and evaluates the effectiveness of this method against the commercial geolocation databases NetAcuity and GeoLite2. Access to ground truth constitutes a major challenge in conducting such evaluation studies. The paper collects IP addresses for its study from three sources: virtual machines from the Ring of the Netherlands Network Operators’ Group, M-Lab Pods operated by Google, and CAIDA’s Ark monitors. The ground truth dataset is further diversified through the addition of IP addresses that are a small latency away from Ark monitors. The evaluation considers accuracy, coverage, and consistency of geolocation, as well as the effectiveness of the single-radius method for different types of autonomous systems. The paper manually analyzes a problematic case where single-radius mistakenly geolocates an IP address of a Budapest-based router to Vienna. Finally, the paper provides recommendations to both users and developers of the single-radius method and discusses limitations of the reported evaluation. The main conclusion is that the IPmap single-radius engine geolocates core Internet infrastructure more accurately than the considered commercial databases and that Internet researchers can greatly benefit from using the IPmap platform for their geolocation needs.
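The single-radius idea rests on a speed-of-light argument: the minimum RTT measured from a landmark bounds how far away the target can physically be. A minimal sketch of that bound, assuming signals propagate at roughly two-thirds of the vacuum speed of light in fiber (the function name and constant are illustrative, not IPmap's implementation):

```python
# Propagation speed in fiber, ~2/3 of c, expressed in km per millisecond.
C_FIBER_KM_PER_MS = 200.0

def max_radius_km(min_rtt_ms):
    """Upper bound on great-circle distance to a target, given the
    minimum RTT (in ms) observed from a landmark with known location.
    One-way propagation time is at most half the RTT."""
    return (min_rtt_ms / 2.0) * C_FIBER_KM_PER_MS

# A 4 ms minimum RTT confines the target to a 400 km radius disk
# around the landmark.
print(max_radius_km(4.0))  # 400.0
```

The bound is loose in practice (queuing delay and indirect fiber paths inflate RTT), which is why small latencies to well-located monitors, like the Ark-adjacent addresses above, make useful ground truth.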
- Award ID(s):
- 1724853
- PAR ID:
- 10186681
- Date Published:
- Journal Name:
- Computer Communication Review
- Volume:
- 50
- ISSN:
- 1943-5819
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Research in the areas of internet-of-things, cyber-physical systems, and smart health often employs sensor systems at residences for continuous monitoring. Such research-oriented residential monitoring systems (RRMSs) usually face two major challenges: long-term reliable operation management, and validation of system functionality with minimal human effort. Targeting these two challenges, this paper describes a monitor of monitoring systems with ground-truth validation capabilities, M2G. It consists of two subsystems, the Monitor2 system and the Ground-truth validation system. The Monitor2 system encapsulates a flexible set of general-purpose components to monitor the operation and connectivity of heterogeneous sensor devices (e.g. smart watches, smart phones, microphones, beacons, etc.), a local base-station, as well as a cloud server. It provides a user-friendly interface and supports different types of RRMSs in various contexts. The system also features a ground-truth validation system to support obtaining ground truth in the field. Additionally, customized alerts can be sent to remote administrators and other personnel to report any dysfunction or inaccuracy of the system in real time. M2G is applied to three very different case studies: the M2FED system, which monitors family eating dynamics; an in-home wireless sensing system for monitoring nighttime agitation; and the BESI system, which monitors behavioral and environmental parameters to predict health events and to provide interventions. The results indicate that M2G is a comprehensive system that (i) requires a small cost in time and effort to adapt to an existing RRMS, (ii) provides reliable data collection and a reduction in data loss by detecting faults in real time, and (iii) provides a convenient and timely ground-truth validation facility.
-
In previous papers, Lehr and Sicker (2018a,b) argued that the changing character of our telecommunications infrastructure called for a new regulatory approach, with a new Communications Act to define the duties and authorities of a reconceptualized FCC (what we call newFCC in this paper). Today's Internet ecosystem comprises multiple digital network platforms organized into a multi-layer architecture. Lower-layer IP platforms provided by access and backbone ISPs collectively support the Internet, on which complementors can build higher-layer platforms, such as those provided by powerful firms like Google, Microsoft, Amazon, Facebook, and Apple. These firms control and operate multiple platforms within the larger Internet ecosystem. When dominant platform providers pursue multi-platform strategies in an effort to capture or control a market, such strategies confound current methods for defining markets and assessing market power. This paper draws on the layered platform nature of the Internet ecosystem, as described in Claffy and Clark (2014), to illustrate how this layered character of today's Internet ecosystem calls for new regulatory authority, and to scope the duties for an agency (or agencies) with sector-specific expertise.
-
Society increasingly relies on the Internet as a critical infrastructure. Inter-domain routing is a core Internet protocol that enables traffic to flow globally across independent networks. Concerns about Internet infrastructure security have prompted policymakers to promote stronger routing security, and the Resource Public Key Infrastructure (RPKI) in particular. RPKI is a cryptographic framework to secure routing that was standardized in 2012. In 2024, almost 50% of routed IP address blocks are still not covered by RPKI certificates. It is unclear what barriers are preventing networks from adopting RPKI. This paper investigates networks with low RPKI adoption to understand where and why adoption is low or non-existent. We find that networks' geographical area of service, size, business category, and complexity of address space delegation impact RPKI adoption. Our analysis may help direct policymakers' efforts to promote RPKI adoption and improve the state of routing security.
-
Since the exhaustion of unallocated IP addresses at the Internet Assigned Numbers Authority (IANA), a market for IPv4 addresses has emerged. As a complement to purchasing address space, leasing IP addresses is becoming increasingly popular. Leasing provides a cost-effective alternative for organizations that seek to scale up without a high upfront investment. However, malicious actors also benefit from leasing, as it enables them to rapidly cycle through different addresses, circumventing security measures such as IP blocklisting. We explore the emerging IP leasing market and its implications for Internet security. We examine leasing market data, leveraging blocklists as an indirect measure of involvement in various forms of network abuse. In February 2025, leased prefixes were 2.89× more likely to be flagged by blocklists compared to non-leased prefixes. This result raises questions about whether the IP leasing market should be subject to closer scrutiny.
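The per-announcement check behind the RPKI coverage discussed above is route origin validation (RFC 6811): an announced prefix is "valid" if some covering ROA authorizes its origin AS at that prefix length, "invalid" if it is covered only by non-matching ROAs, and "not-found" if no ROA covers it at all. A simplified sketch using only the standard library (the helper name and example ROA data are illustrative):

```python
import ipaddress

def rov(announced_prefix, origin_asn, roas):
    """Simplified RFC 6811 route origin validation.

    roas: iterable of (prefix, max_length, asn) tuples, each a Route
    Origin Authorization. Returns "valid", "invalid", or "not-found".
    """
    prefix = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if prefix.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if asn == origin_asn and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

# Example: AS 64500 is authorized to originate 192.0.2.0/24 (no subnets).
roas = [("192.0.2.0/24", 24, 64500)]
print(rov("192.0.2.0/24", 64500, roas))    # valid
print(rov("192.0.2.0/25", 64500, roas))    # invalid (more specific than max length)
print(rov("198.51.100.0/24", 64500, roas)) # not-found
```

The nearly 50% of address blocks without RPKI certificates fall into the "not-found" outcome, which routers typically still accept, limiting the protection RPKI provides today.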