Title: Towards Re-architecting Today’s Internet for Survivability; NSF Workshop Report
On November 28-29, 2023, Northwestern University hosted a workshop titled “Towards Re-architecting Today’s Internet for Survivability” in Evanston, Illinois, US. The goal of the workshop was to bring together a group of national and international experts to sketch and start implementing a transformative research agenda for solving one of our community’s most challenging yet important tasks: the re-architecting of tomorrow’s Internet for “survivability”, ensuring that the network is able to fulfill its mission even in the presence of large-scale catastrophic events. This report provides a necessarily brief overview of two full days of active discussions.
Award ID(s):
2332178
PAR ID:
10534900
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
ACM SIGCOMM Computer Communication Review
Volume:
54
Issue:
2
ISSN:
0146-4833
Subject(s) / Keyword(s):
Internet, Survivability, Resilience
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Distributed applications enhance their execution by using remote resources. However, distributed execution incurs communication, synchronization, fault-handling, and security overheads. If these overheads are not offset by an even larger execution enhancement, distribution becomes counterproductive. For maximum benefit, the distribution’s granularity cannot be too fine or too coarse; it must be just right. In this paper, we present a novel approach to re-architecting distributed applications whose distribution granularity has turned out to be ill-conceived. To adjust the distribution of such applications, our approach automatically reshapes their remote invocations to reduce aggregate latency and resource consumption. To that end, our approach insources a remote functionality for local execution, splits it into separate functions to profile their performance, and determines the optimal redistribution based on a cost function. Redistribution strategies combine separate functions into single remotely invocable units. To automate all the required program transformations, our approach introduces a series of domain-specific automatic refactorings. We have concretely realized our approach as an analysis and automatic program transformation infrastructure for the important domain of full-stack JavaScript applications, and evaluated its value, utility, and performance on a series of real-world cross-platform mobile apps. Our evaluation results indicate that our approach can become a useful tool for software developers charged with the challenges of re-architecting distributed applications.
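The following is a minimal Python sketch of the kind of cost-driven redistribution described above: given per-function profiles, it estimates aggregate latency for each candidate grouping of functions into a single remotely invocable unit and picks the cheapest. All function names, timings, and the cost formula are illustrative assumptions, and the paper's actual tooling targets full-stack JavaScript rather than Python.

```python
"""Illustrative sketch (not the paper's tooling): pick a redistribution of
profiled functions by minimizing a simple aggregate-latency cost model.
All names, numbers, and the cost formula are assumptions for illustration."""

from dataclasses import dataclass
from itertools import combinations

@dataclass
class FnProfile:
    name: str
    local_ms: float    # measured execution time on the client
    remote_ms: float   # measured execution time on the server
    payload_kb: float  # data that must cross the network if run remotely

RTT_MS = 40.0          # assumed round-trip latency per remote invocation
BW_KB_PER_MS = 1.25    # assumed bandwidth (~10 Mbit/s)

def cost(remote_group: set[str], profiles: list[FnProfile]) -> float:
    """Estimated aggregate latency if `remote_group` is packaged as one
    remotely invocable unit and everything else runs locally."""
    total = 0.0
    for p in profiles:
        total += p.remote_ms if p.name in remote_group else p.local_ms
    if remote_group:
        # One round trip for the combined unit, plus payload transfer time.
        transfer_kb = sum(p.payload_kb for p in profiles if p.name in remote_group)
        total += RTT_MS + transfer_kb / BW_KB_PER_MS
    return total

def best_redistribution(profiles: list[FnProfile]) -> tuple[set[str], float]:
    """Exhaustively try every grouping of functions into one remote unit."""
    names = [p.name for p in profiles]
    best, best_cost = set(), cost(set(), profiles)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            c = cost(set(combo), profiles)
            if c < best_cost:
                best, best_cost = set(combo), c
    return best, best_cost

if __name__ == "__main__":
    profiles = [
        FnProfile("parse",  local_ms=5,   remote_ms=4,  payload_kb=2),
        FnProfile("rank",   local_ms=120, remote_ms=15, payload_kb=8),
        FnProfile("render", local_ms=10,  remote_ms=9,  payload_kb=300),
    ]
    group, estimate = best_redistribution(profiles)
    print(f"run remotely: {sorted(group)}  (estimated latency {estimate:.1f} ms)")
```

In this toy profile only the compute-heavy rank step is worth invoking remotely; render stays local because its payload transfer would outweigh the saved computation.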
  2. This article proposes the solver-aware system architecting framework for leveraging the combined strengths of experts, crowds, and specialists to design innovative complex systems. Although system architecting theory has extensively explored the relationship between alternative architecture forms and performance under operational uncertainty, limited attention has been paid to differences due to who generates the solutions. The recent rise in alternative solving methods, from gig workers to crowdsourcing to novel contracting structures, emphasises the need for deeper consideration of the link between architecting and solver capability in the context of complex system innovation. We investigate these interactions through an abstract problem-solving simulation, representing alternative decompositions and solver archetypes of varying expertise, engaged through contractual structures that match their solving type. We find that the preferred architecture changes depending on which combinations of solvers are assigned. In addition, the best hybrid decomposition-solver combinations simultaneously improve performance and cost while reducing expert reliance. To operationalise this new solver-aware framework, we induce two heuristics for decomposition-assignment pairs and demonstrate the scale of their value in the simulation. We also apply these two heuristics to reason about an example of a robotic manipulator design problem to demonstrate their relevance in realistic complex system settings.
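A toy Python sketch of the solver-aware idea: alternative decompositions are scored against every assignment of solver archetypes, and the cheapest assignment meeting a quality target is reported per decomposition. The archetypes, quality/cost numbers, and the coupling penalty are invented for illustration and are not the article's actual simulation.

```python
"""Toy sketch of solver-aware evaluation of decomposition/assignment pairs.
The solver archetypes, quality/cost numbers, and coupling penalty below are
invented for illustration; they are not the article's simulation parameters."""

from itertools import product

# Assumed solver archetypes: expected module quality and cost per module.
SOLVERS = {
    "expert":     {"quality": 0.9, "cost": 10.0},
    "specialist": {"quality": 0.8, "cost": 4.0},
    "crowd":      {"quality": 0.6, "cost": 1.0},
}

# Assumed decompositions: one coupling value in [0, 1] per module;
# higher coupling makes a module harder for non-expert solvers.
DECOMPOSITIONS = {
    "integral": [0.8, 0.8],            # few, highly coupled modules
    "modular":  [0.3, 0.3, 0.3, 0.3],  # many, loosely coupled modules
}

def evaluate(modules: list[float], assignment: tuple[str, ...]) -> tuple[float, float]:
    """Return (average solution quality, total cost) for one pairing."""
    qualities, cost = [], 0.0
    for coupling, solver in zip(modules, assignment):
        s = SOLVERS[solver]
        # Illustrative penalty: non-expert quality degrades with coupling.
        penalty = coupling * (0.9 - s["quality"])
        qualities.append(max(s["quality"] - penalty, 0.0))
        cost += s["cost"]
    return sum(qualities) / len(qualities), cost

def cheapest_meeting_target(modules: list[float], target: float = 0.5):
    """Cheapest solver assignment whose average quality meets the target."""
    best = None
    for assignment in product(SOLVERS, repeat=len(modules)):
        quality, cost = evaluate(modules, assignment)
        if quality >= target and (best is None or cost < best[1]):
            best = (assignment, cost, quality)
    return best

if __name__ == "__main__":
    for name, modules in DECOMPOSITIONS.items():
        result = cheapest_meeting_target(modules)
        if result is None:
            print(f"{name}: no assignment meets the quality target")
        else:
            assignment, cost, quality = result
            print(f"{name}: {assignment} cost={cost:.0f} quality={quality:.2f}")
```

Even in this toy, the preferred pairing shifts: the coupled decomposition needs at least one stronger solver, while the loosely coupled one can be handled entirely by the crowd at lower cost.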
  3. The advent of ultrabroadband Internet connectivity brings a 2-3 orders of magnitude jump in the capacity of access networks (a.k.a. the “last mile”). Beyond mere capacity increase, this leap represents a qualitative shift in the overall Internet environment. Therefore, we argue that only by seizing the opportunity to re-think the way we structure network applications and services can we realize the full potential ultrabroadband provides. Specifically, with ultrabroadband residential networks, we have the opportunity to re-center our digital lives around our residence, similar to how our physical lives generally center around our homes. To this end, we introduce a new appliance in home networks, a “home point of presence”, that provides a variety of services to the users in the house regardless of where they are physically located and connected to the network. We illustrate the utility of this appliance by discussing a range of new services that both bring new functionality to the users and improve performance of existing applications.
  4. With the advancement and dominance of Internet video services, content-based video deduplication has become an essential infrastructure on which Internet video services depend. However, the explosively growing volume of video data on the Internet challenges the system's design and implementation for scalability in several ways. (1) Although quantization-based indexing techniques are effective for searching visual features at a large scale, costly re-training over the complete dataset must be performed periodically. (2) The high-dimensional vectors for visual features demand increasingly large SSD space, degrading I/O performance. (3) Videos crawled from the Internet are diverse, and visually similar videos are not necessarily duplicates, increasing deduplication complexity. (4) Most videos are edited, so duplicate content is more likely to be discovered as clips inside the videos, demanding processing techniques with close attention to detail. To address the above-mentioned issues, we propose Maze, a full-fledged video deduplication system. Maze has an ANNS layer that indexes and searches the high-dimensional feature vectors. The architecture of the ANNS layer supports efficient reads and writes and eliminates the data migration caused by re-training. Maze adopts the CNN-based feature and the ORB feature as the visual features, which are optimized for the specific video deduplication task. The features are compact and fully reside in memory. Acoustic features are also incorporated in Maze so that visually similar videos with different audio tracks can be distinguished. A clip-based matching algorithm is developed to discover duplicate content at a fine granularity. Maze has been deployed as a production system for two years. It has indexed 1.3 billion videos and is indexing ~800 thousand videos per day. For the ANNS layer, the average read latency is 4 seconds and the average write latency is at most 4.84 seconds. Re-training over the complete dataset is no longer required no matter how many new data sets are added, eliminating the costly data migration between nodes. Maze recognizes duplicate live-streaming videos with both similar appearance and similar audio at a recall of 98%. Most importantly, Maze is also cost-effective. For example, the compact feature design helps save 5800 SSDs, and the computation resources devoted to running the whole system decrease to 250K standard cores per billion videos.
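As a rough illustration of clip-level matching (not Maze's actual algorithm), the Python sketch below assumes each video is already reduced to a sequence of per-second feature vectors and slides a fixed-length window of the query over the candidate, reporting spans whose average cosine similarity clears a threshold.

```python
"""Minimal sketch of clip-level duplicate matching between two videos,
assuming each video is already represented as a sequence of per-second
feature vectors (e.g. CNN embeddings). This illustrates the general idea
only; it is not Maze's actual matching algorithm."""

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def find_duplicate_clips(query: np.ndarray, candidate: np.ndarray,
                         window: int = 10, threshold: float = 0.85):
    """Slide a `window`-second clip of `query` over `candidate` and report
    (query_start, candidate_start) pairs whose average per-second
    similarity exceeds `threshold`."""
    matches = []
    for qs in range(len(query) - window + 1):
        clip = query[qs:qs + window]
        for cs in range(len(candidate) - window + 1):
            sims = [cosine(clip[i], candidate[cs + i]) for i in range(window)]
            if float(np.mean(sims)) >= threshold:
                matches.append((qs, cs))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.normal(size=(60, 128))   # 60 seconds of 128-d features
    edited = rng.normal(size=(90, 128))     # a different, longer video
    edited[30:50] = original[10:30]         # splice in a 20-second clip
    for q, c in find_duplicate_clips(original, edited):
        print(f"query seconds {q}-{q + 10} match candidate seconds {c}-{c + 10}")
```

A production system would index candidate clips with an ANNS structure rather than scanning them exhaustively; the quadratic scan here only keeps the sketch short.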
  5. Domain-specific computing is an idea that has been proposed as a path forward given the slowing of Moore’s Law and the breakdown of Dennard scaling. Two fundamental questions include: (1) how does one define a domain; and (2) how does one go about architecting hardware that performs well for that domain? We present our preliminary work towards answering these questions.