Title: The Centralization of a Decentralized Video Platform - A First Characterization of PeerTube
PeerTube is an open-source video sharing platform built as a decentralized alternative to YouTube. Along with software like Mastodon and Friendica, PeerTube is part of a series of federated social media platforms built partly in response to growing concerns about centralized control and ownership of the incumbent ones. In this paper, we present the first characterization of PeerTube, including its underlying infrastructure and the content being shared on its network. Our findings reveal concerning trends toward centralization that echo patterns observed in other contexts, exacerbated by the limited degree of content replication. PeerTube instances are mostly located in North America and Western Europe, with about 70% hosted in Germany, the USA, and France, and over 50% hosted on the top 7 ASes. We also find that over 92% of videos are stored without any redundancy, in spite of PeerTube's native support for video redundancy.
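As a rough illustration of the kind of infrastructure measurement described in the abstract above (not the authors' actual tooling), the sketch below pulls instance hostnames from the public index at instances.joinpeertube.org and tallies them by country and by AS using a local GeoLite2 database. The index endpoint, its query parameters, and the database file names are assumptions.

```python
# Illustrative sketch, not the paper's measurement pipeline: tally PeerTube
# instances by country and by AS organization to gauge centralization.
import collections
import socket

import geoip2.database  # pip install geoip2
import geoip2.errors
import requests

INDEX = "https://instances.joinpeertube.org/api/v1/instances"  # assumed public index API

def list_instances(count=500):
    """Fetch instance hostnames from the (assumed) public PeerTube index."""
    resp = requests.get(INDEX, params={"start": 0, "count": count}, timeout=30)
    resp.raise_for_status()
    return [row["host"] for row in resp.json().get("data", [])]

def concentration(hosts,
                  asn_db="GeoLite2-ASN.mmdb",          # assumed local MaxMind databases
                  country_db="GeoLite2-Country.mmdb"):
    """Count instances per country and per AS organization."""
    by_country, by_as = collections.Counter(), collections.Counter()
    with geoip2.database.Reader(asn_db) as asn, geoip2.database.Reader(country_db) as geo:
        for host in hosts:
            try:
                ip = socket.gethostbyname(host)
                by_as[asn.asn(ip).autonomous_system_organization] += 1
                by_country[geo.country(ip).country.iso_code] += 1
            except (socket.gaierror, geoip2.errors.AddressNotFoundError):
                continue  # unresolvable host or IP not in the database: skip
    return by_country.most_common(10), by_as.most_common(10)

if __name__ == "__main__":
    top_countries, top_ases = concentration(list_instances())
    print("Top countries:", top_countries)
    print("Top ASes:", top_ases)
```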
Award ID(s): 2211508
PAR ID: 10632715
Author(s) / Creator(s):
Publisher / Repository: ACM SIGCOMM Computer Communication Review
Date Published: October 2024
Journal Name: ACM SIGCOMM Computer Communication Review
Volume: 54
Issue: 4
ISSN: 0146-4833
Page Range / eLocation ID: 25 to 35
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The InterPlanetary File System (IPFS) has recently gained considerable attention. While prior research has focused on characterizing its performance and application support, it remains unclear: (1) what kind of files/content are stored in IPFS, (2) who provides these files, (3) whether these files are always accessible, and (4) what affects file access performance. To answer these questions, in this paper we perform measurement and analysis on over 4 million files associated with CIDs (content IDs) that appeared in publicly available IPFS datasets. Our results reveal the following key findings: (1) Mixed file accessibility: while IPFS is not designed for permanent storage, accessing a non-trivial portion of files, such as those of NFTs and video streams, often requires multiple retrieval attempts, potentially blocking NFT transactions and negatively affecting the user experience. (2) Dominance of NFT (non-fungible token) and video files: about 50% of stored files are NFT-related, followed by a large portion of video files, among which about half are pirated movies and adult content. (3) Centralization of content providers: a small number of peers (the top 50), mostly cloud nodes hosted by tech companies, serve a large portion (95%) of files, deviating from IPFS's intended design goal. (4) High variation in download throughput and lookup time: large file retrievals experience lower average throughput due to more overhead for resolving file chunk CIDs, and looking up files hosted by non-cloud nodes takes longer. We hope that our findings can offer valuable insights for (1) IPFS application developers to take these characteristics into consideration when building applications on top of IPFS, and (2) IPFS system developers to improve IPFS and similar systems to be developed for Web3.
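To make the accessibility finding concrete, here is a minimal sketch of the kind of gateway probing such a study could use (not the authors' tooling); the gateway URL, retry policy, and example CID are assumptions and placeholders.

```python
# Illustrative sketch: fetch CIDs through a public HTTP gateway and record how
# many attempts and how much time each retrieval takes.
import time
import requests

GATEWAY = "https://ipfs.io/ipfs/"  # assumed public gateway

def probe_cid(cid, attempts=3, timeout=30):
    """Try to fetch a CID up to `attempts` times; return (succeeded, tries, seconds, bytes)."""
    start = time.monotonic()
    for i in range(1, attempts + 1):
        try:
            r = requests.get(GATEWAY + cid, timeout=timeout)
            if r.status_code == 200:
                return True, i, time.monotonic() - start, len(r.content)
        except requests.RequestException:
            pass  # timeout or connection error: retry
    return False, attempts, time.monotonic() - start, 0

# Placeholder CID list; a real study would read CIDs from a dataset dump.
for cid in ["QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"]:
    ok, tries, secs, size = probe_cid(cid)
    print(f"{cid[:16]}... ok={ok} tries={tries} time={secs:.1f}s bytes={size}")
```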
  2. Pervasive deployment of surveillance cameras today poses enormous scalability challenges to video analytics systems operating over many camera feeds. Currently, there are few indexing tools to organize video feeds beyond what is provided by a standard file system. Recent video analytics systems implement application-specific frame profiling and sampling techniques to reduce the number of raw videos processed, leveraging frame-level redundancy or manually labeled spatial-temporal correlation between cameras. This paper presents Video-zilla, a standalone indexing layer between video query systems and a video store that organizes video data. We propose a video data unit abstraction, the semantic video stream (SVS), based on a notion of distance between objects in the video. SVS implicitly captures scenes, which are missing from current video content characterization, and provides a middle ground between individual frames and an entire camera feed. We then build a hierarchical index that exposes the semantic similarity both within and across camera feeds, such that Video-zilla can quickly cluster video feeds based on their content semantics without manual labeling. We implement and evaluate Video-zilla in three use cases: object identification queries, clustering for training specialized DNNs, and archival services. In all three cases, Video-zilla reduces the time complexity of inter-camera video analytics from linear in the number of cameras to sublinear, and reduces query resource usage by up to 14x compared to using the frame-level or spatial-temporal similarity built into existing query systems.
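As a toy reading of the SVS abstraction (our interpretation of the abstract, not Video-zilla's implementation), the sketch below groups consecutive frames into one segment while detected object centers stay near the segment's running centroid, and starts a new segment on a large jump; the distance metric and threshold are assumptions.

```python
# Toy sketch of a distance-based "semantic video stream" segmentation.
from dataclasses import dataclass, field
from math import dist

@dataclass
class SVS:
    frames: list = field(default_factory=list)  # frame indices in this segment
    centroid: tuple = (0.0, 0.0)                # running mean of object centers

def segment(detections, threshold=100.0):
    """detections: list of (frame_idx, (x, y)) object centers, one per frame."""
    segments, current = [], None
    for frame_idx, center in detections:
        if current is None or dist(center, current.centroid) > threshold:
            current = SVS()                     # large distance jump: start a new SVS
            segments.append(current)
        n = len(current.frames)
        current.centroid = tuple((c * n + v) / (n + 1)
                                 for c, v in zip(current.centroid, center))
        current.frames.append(frame_idx)
    return segments
```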
  3. Sprocket is a highly configurable, stage-based, scalable, serverless video processing framework that exploits intra-video parallelism to achieve low latency. Sprocket enables developers to program a series of operations over video content in a modular, extensible manner. Programmers implement custom operations, ranging from simple video transformations to more complex computer vision tasks, in a simple pipeline specification language to construct custom video processing pipelines. Sprocket then handles the underlying access, encoding and decoding, and processing of video and image content across operations in a highly parallel manner. In this paper, we describe the design and implementation of the Sprocket system on the AWS Lambda serverless cloud infrastructure, and evaluate Sprocket under a variety of conditions to show that it delivers its performance goals of high parallelism, low latency, and low cost (tens of seconds to process a 3,600-second video 1000-way parallel for less than $3).
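As a back-of-envelope check on the cost figure (our assumed prices, runtimes, and memory sizes, not numbers from the paper), 1,000 Lambda workers running on the order of a minute or two each at around 1.5 GB of memory land in the low single-digit dollars.

```python
# Back-of-envelope check of the "<$3 for a 3,600 s video, 1000-way parallel" figure.
# Prices, per-worker runtime, and memory size are assumptions, not values from the paper.
GB_SECOND_PRICE = 0.0000166667    # assumed Lambda on-demand price per GB-second
REQUEST_PRICE = 0.20 / 1_000_000  # assumed price per invocation

def lambda_cost(workers=1000, seconds_per_worker=100, memory_gb=1.5):
    compute = workers * seconds_per_worker * memory_gb * GB_SECOND_PRICE
    requests = workers * REQUEST_PRICE
    return compute + requests

print(f"~${lambda_cost():.2f}")   # about $2.50 under these assumptions
```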
  4. Internet users have suffered collateral damage in tussles over paid peering between large ISPs and large content providers. The issue will arise again when the FCC considers a new net neutrality order. In this paper, we model the effect of paid peering fees on broadband prices and consumer surplus. We first consider the effect of paid peering on broadband prices. ISPs assert that paid peering revenue is offset by lower broadband prices, and that ISP profits remain unchanged. Content providers assert that paid peering fees do not result in lower broadband prices, but simply increase ISP profits. We adopt a two-sided market model in which an ISP maximizes profit by setting broadband prices and a paid peering price. To separately evaluate the effect on consumers who utilize video streaming and on consumers who don't, we model two broadband plans: a basic plan for consumers whose utility principally derives from email and web browsing, and a premium plan for consumers with significant incremental utility from video streaming. Our results show that the claims of the ISPs and of the content providers are both incorrect. Paid peering fees reduce the premium plan price; however, the ISP passes on to its customers only a portion of the revenue from paid peering. We find that ISP profit increases but video streaming profit decreases as an ISP moves from settlement-free peering to paid peering. We next consider the effect of paid peering on consumer surplus. ISPs assert that paid peering increases consumer surplus because it eliminates an inherent subsidy of consumers with high video streaming use by consumers without. Content providers assert that paid peering decreases consumer surplus because paid peering fees are passed on to consumers through higher video streaming prices and because there is no corresponding reduction in broadband prices. We simulate a regulated market in which a regulatory agency determines the maximum paid peering fee (if any) to maximize consumer surplus, an ISP sets its broadband prices to maximize profit, and a content provider sets its video streaming price. Simulation parameters are chosen to reflect typical broadband prices, video streaming prices, ISP rate of return, and content provider rate of return. We find that consumer surplus is a uni-modal function of the paid peering fee. The paid peering fee that maximizes consumer surplus depends on elasticities of demand for broadband and for video streaming. However, consumer surplus is maximized when paid peering fees are significantly lower than those that maximize ISP profit. Even so, it does not follow that settlement-free peering is always the policy that maximizes consumer surplus. The direct peering price depends critically on the incremental ISP cost per video streaming subscriber; at different costs, it can be negative, zero, or positive.
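Read as a nested optimization, the setup described above can be summarized as follows (the notation is ours, sketched from the abstract; the paper's calibrated model has more structure): the regulator caps the peering fee to maximize consumer surplus, anticipating how the ISP and the content provider price in response.

```latex
% Notation is ours, not the paper's.
\[
  f^{*} \;=\; \arg\max_{f \ge 0} \; CS\!\left(p_{B}^{*}(f),\, p_{P}^{*}(f),\, p_{V}^{*}(f)\right)
\]
\[
  \text{where}\quad
  \bigl(p_{B}^{*}(f),\, p_{P}^{*}(f)\bigr) \in \arg\max_{p_{B},\, p_{P}} \Pi_{\mathrm{ISP}}(p_{B}, p_{P}; f),
  \qquad
  p_{V}^{*}(f) \in \arg\max_{p_{V}} \Pi_{\mathrm{CP}}(p_{V}; f).
\]
```

Here p_B and p_P are the basic and premium broadband prices, p_V the video streaming price, and f the paid peering fee; the uni-modality of consumer surplus in f is the paper's finding, not something this notation implies.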
  5. The need for mobile applications and mobile programming is increasing due to the continuous rise in the pervasiveness of mobile devices. Developers often refer to video programming tutorials to learn more about mobile programming topics. To find the right video to watch, developers typically skim over several videos, looking at their title, description, and video content in order to determine if they are relevant to their information needs. Unfortunately, the title and description do not always provide an accurate overview, and skimming over videos is time-consuming and can lead to missing important information. We propose a novel approach that locates and extracts the GUI screens showcased in a video tutorial, then selects and displays the most representative ones to provide a GUI-focused overview of the video. We believe this overview can be used by developers as an additional source of information for determining if a video contains the information they need. To evaluate our approach, we performed an empirical study on iOS and Android programming screencasts which investigates the accuracy of our automated GUI extraction. The results reveal that our approach can detect and extract GUI screens with an accuracy of 94%.
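As a rough sketch of one building block such a pipeline needs (not the authors' GUI-detection approach), the snippet below samples a screencast and keeps frames that differ markedly from the last kept frame as candidate screenshots for an overview; the sampling step and difference threshold are assumptions, not tuned values.

```python
# Illustrative frame sampling for screencast overviews using OpenCV.
import cv2          # pip install opencv-python
import numpy as np

def candidate_screens(video_path, step=30, diff_threshold=12.0):
    """Return indices of sampled frames whose mean absolute pixel difference
    from the last kept frame exceeds `diff_threshold`."""
    cap = cv2.VideoCapture(video_path)
    kept, last, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(cv2.resize(frame, (320, 180)), cv2.COLOR_BGR2GRAY)
            if last is None or np.mean(cv2.absdiff(gray, last)) > diff_threshold:
                kept.append(idx)
                last = gray
        idx += 1
    cap.release()
    return kept
```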