We consider single-source shortest path algorithms that perform a sequence of relaxation steps whose ordering depends only on the input graph structure and not on its weights or the results of prior steps. Each step examines one edge of the graph, and replaces the tentative distance to the endpoint of the edge by its minimum with the tentative distance to the start of the edge, plus the edge length. As we prove, among such algorithms, the Bellman-Ford algorithm has optimal complexity for dense graphs and near-optimal complexity for sparse graphs, as a function of the number of edges and vertices in the given graph. Our analysis holds both for deterministic algorithms and for randomized algorithms that find shortest path distances with high probability.
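To make the relaxation step concrete, here is a minimal Bellman-Ford sketch in Python (illustrative only; the graph encoding and names are not from the abstract above). Every edge is relaxed in a fixed round-robin order that depends only on the graph structure.

```python
import math

def bellman_ford(n, edges, source):
    """Single-source shortest paths by repeated edge relaxation.

    `edges` is a list of (u, v, w) triples.  The relaxation order below
    (every edge, n - 1 times) is fixed in advance: it depends only on the
    graph structure, never on the weights or on earlier relaxation results.
    """
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):                  # n - 1 passes suffice for n vertices
        for u, v, w in edges:               # one relaxation step per edge
            if dist[u] + w < dist[v]:       # keep the smaller tentative distance
                dist[v] = dist[u] + w
    return dist

# Distances from vertex 0 in a small 4-vertex example.
print(bellman_ford(4, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 10.0), (2, 3, 1.0)], 0))
# -> [0.0, 2.0, 5.0, 6.0]
```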
Chunking Tasks for Present-Biased Agents
Everyone puts things off sometimes. How can we combat this tendency to procrastinate? A well-known technique used by instructors is to break up a large project into more manageable chunks. But how should this be done best? Here we study the process of chunking using the graph-theoretic model of present bias introduced by Kleinberg and Oren [2014]. We first analyze how to optimally chunk single edges within a task graph, given a limited number of chunks. We show that for edges on the shortest path, the optimal chunking makes initial chunks easy and later chunks progressively harder. For edges not on the shortest path, optimal chunking is significantly more complex, but we provide an efficient algorithm that chunks the edge optimally. We then use our optimal edge-chunking algorithm to optimally chunk task graphs. We show that with a linear number of chunks on each edge, the biased agent’s cost can be exponentially lowered, to within a constant factor of the true cheapest path. Finally, we extend our model to the case where a task designer must chunk a graph for multiple types of agents simultaneously. The problem grows significantly more complex with even two types of agents, but we provide optimal graph chunking algorithms for two types. Our work highlights the efficacy of chunking as a means to combat present bias.
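As a concrete illustration of the model, here is a minimal sketch assuming the standard Kleinberg-Oren formulation, in which a naive agent inflates the immediate edge cost by a bias factor b and re-plans at every node; the graph encoding and function names below are ours, not the paper's.

```python
import heapq

def dist_to_target(graph, target):
    """Exact remaining distances to `target` (Dijkstra on the reversed graph).

    `graph` maps each node to a list of (neighbor, weight) pairs; every node
    must appear as a key.
    """
    rev = {v: [] for v in graph}
    for u, nbrs in graph.items():
        for v, w in nbrs:
            rev[v].append((u, w))
    dist = {v: float("inf") for v in graph}
    dist[target] = 0.0
    pq = [(0.0, target)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in rev[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def biased_agent_cost(graph, start, target, b):
    """Cost actually paid by a naive agent with present-bias factor b >= 1.

    At each node the agent picks the outgoing edge minimizing
    b * (immediate cost) + (true remaining distance), then re-plans.
    """
    dist = dist_to_target(graph, target)
    node, paid = start, 0.0
    while node != target:
        nxt, w = min(graph[node], key=lambda e: b * e[1] + dist[e[0]])
        paid += w
        node = nxt
    return paid

# Toy task graph: the biased agent prefers the route with the easier first edge.
g = {"s": [("a", 1.0), ("b", 2.0)], "a": [("t", 5.0)], "b": [("t", 1.0)], "t": []}
print(biased_agent_cost(g, "s", "t", b=5.0))  # 6.0, versus a true cheapest path of 3.0
```

Chunking an edge into smaller pieces lowers the immediate cost the agent faces at each step, which is why well-chosen chunking can pull this realized cost back toward the true cheapest-path cost.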
- Award ID(s): 1703846
- PAR ID: 10415531
- Date Published:
- Journal Name: Twenty-fourth ACM Conference on Electronic Commerce
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We consider the problem of collective exploration of a known n-node edge-weighted graph by k mobile agents that have limited energy but are capable of energy transfers. The agents are initially placed at an arbitrary subset of nodes in the graph, and each agent has an initial, possibly different, amount of energy. The goal of the exploration problem is for every edge in the graph to be traversed by at least one agent. The amount of energy used by an agent to travel distance x is proportional to x. In our model, the agents can share energy when co-located: when two agents meet, one can transfer part of its energy to the other. For an n-node path, we give an O(n+k) time algorithm that either finds an exploration strategy or reports that one does not exist. For an n-node tree with l leaves, we give an O(n+lk^2) algorithm that finds an exploration strategy if one exists. Finally, for the general graph case, we show that the problem of deciding if exploration is possible by energy-sharing agents is NP-hard, even for 3-regular graphs. In addition, we show that it is always possible to find an exploration strategy if the total energy of the agents is at least twice the total weight of the edges; moreover, this bound is asymptotically optimal. (A minimal sketch of this sufficient condition appears after this list.)
- In this paper we leverage a property of duplicate data, named duplicate locality, which reveals that multiple duplicate chunks are likely to occur together. In other words, one duplicate chunk is likely to be immediately followed by a sequence of contiguous duplicate chunks; the longer the sequence, the stronger the locality. After a quantitative analysis of duplicate locality in real-world data, we propose a suite of chunking techniques that exploit the locality to remove almost all chunking cost for deduplicatable chunks in CDC-based deduplication systems. The resulting deduplication method, named RapidCDC, has two salient features. One is that its efficiency is positively correlated with the deduplication ratio: RapidCDC can be as fast as a fixed-size chunking method when applied to data sets with high data redundancy. The other is that its high efficiency does not rely on high duplicate locality strength. These attractive features make RapidCDC's effectiveness almost guaranteed for datasets with a high deduplication ratio. Our experimental results with synthetic and real-world datasets show that RapidCDC's chunking speedup can be up to 33× higher than regular CDC, while it maintains (nearly) the same deduplication ratio. (An illustrative locality-exploiting chunking sketch appears after this list.)
- Cooperative 3D Printing (C3DP), an additive manufacturing platform consisting of a swarm of mobile printing robots, is an emerging technology designed to address the size and printing speed limitations of conventional, gantry-based 3D printers. A typical C3DP process involves several interconnected stages, including project/job partitioning, job placement on the floor, task scheduling, path planning, and motion planning. In our previous work on project partitioning, we presented a Z-Chunker, which vertically divides a tall print project into multiple jobs to overcome the physical constraints of printers in the Z direction, and an XY Chunker, which partitions jobs into discrete chunks that are allocated to individual printing robots for parallel printing. These geometry partitioning algorithms determine what is to be printed, but other information, such as when, where, and in what order chunks should be printed, is required to carry out the print physically. This paper introduces the first Job Placement Optimizer for C3DP, based on Dynamic Dependency List schedule assignment and Conflict-Based Search path planning. Our algorithm determines the optimal locations for all jobs and chunks (i.e., subtasks of a job) on the factory floor to minimize the makespan for C3DP. To validate the proposed approach, we conduct three case studies: a simple geometry with homogeneous jobs in the Z direction and two complex geometries (one of moderate complexity and one relatively more complex) with non-homogeneous jobs in the Z direction. We also perform simulations to understand how other factors, such as the number of robots, the number of jobs, chunking orientation, and the heterogeneity of prints (e.g., chunks that differ in size and material), affect the effectiveness of this placement optimizer. (A generic makespan-assignment sketch appears after this list.)
- We study the fully dynamic All-Pairs Shortest Paths (APSP) problem in undirected edge-weighted graphs. Given an n-vertex graph G with non-negative edge lengths that undergoes an online sequence of edge insertions and deletions, the goal is to support approximate distance queries and shortest-path queries. We provide a deterministic algorithm for this problem that, for a given precision parameter ε, achieves approximation factor (log log n)^{2^{O(1/ε^3)}} and has amortized update time O(n^ε log L) per operation, where L is the ratio of the longest to the shortest edge length. Query time for a distance query is O(2^{O(1/ε)} · log n · log log L), and query time for a shortest-path query is O(|E(P)| + 2^{O(1/ε)} · log n · log log L), where P is the path that the algorithm returns. To the best of our knowledge, even allowing any o(n)-approximation factor, no adaptive-update algorithms with better than Θ(m) amortized update time and better than Θ(n) query time were known prior to this work. We also note that our guarantees are stronger than the best current guarantees for APSP in decremental graphs in the adaptive-adversary setting. In order to obtain these results, we consider an intermediate problem, called Recursive Dynamic Neighborhood Cover (RecDynNC), that was formally introduced in [Chuzhoy, STOC '21]. At a high level, given an undirected edge-weighted graph G undergoing an online sequence of edge deletions, together with a distance parameter D, the goal is to maintain a sparse D-neighborhood cover of G, with some additional technical requirements. Our main technical contribution is twofold. First, we provide a black-box reduction from APSP in fully dynamic graphs to the RecDynNC problem. Second, we provide a new deterministic algorithm for the RecDynNC problem that, for a given precision parameter ε, achieves approximation factor (log log m)^{2^{O(1/ε^2)}} with total update time O(m^{1+ε}), where m is the total number of edges ever present in G. This improves on the previous algorithm of [Chuzhoy, STOC '21], which achieved approximation factor (log m)^{2^{O(1/ε)}} with similar total update time. Combining these two results immediately yields the deterministic algorithm for fully dynamic APSP with the guarantees stated above.
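For the energy-sharing exploration abstract above, the following minimal sketch only encodes the stated sufficient condition (total energy at least twice the total edge weight); it does not reconstruct the paper's O(n+k) path algorithm, and the function name is illustrative.

```python
def exploration_surely_possible(edge_weights, agent_energies):
    """Sufficient (not necessary) condition from the abstract: exploration with
    energy sharing is always possible when the agents' total energy is at least
    twice the total weight of the edges."""
    return sum(agent_energies) >= 2 * sum(edge_weights)

# A path with three unit-weight edges and two agents holding 2 and 4 units of energy.
print(exploration_surely_possible([1, 1, 1], [2, 4]))  # True: 6 >= 2 * 3
```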
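For the RapidCDC abstract above, the sketch below is an illustrative rendering (not the authors' implementation) of how duplicate locality can shortcut chunking: after a fingerprint hit, the recorded length of the chunk that followed it last time is tried first, and the boundary scan runs only on a miss. The index layout, hash choice, and size parameters are assumptions.

```python
import hashlib

MIN_CHUNK, AVG_MASK = 2048, (1 << 13) - 1   # assumed parameters: 2 KiB minimum, ~8 KiB average

def cdc_boundary(data, start):
    """Plain content-defined boundary search with a toy byte-wise rolling sum."""
    h = 0
    for i in range(start, len(data)):
        h = ((h << 1) + data[i]) & 0xFFFFFFFF
        if i - start + 1 >= MIN_CHUNK and (h & AVG_MASK) == 0:
            return i + 1
    return len(data)

def chunk_with_locality(data, seen, next_size):
    """Chunk `data` (bytes), exploiting duplicate locality.

    seen:      fingerprints of chunks stored so far.
    next_size: next_size[fp] = length of the chunk that followed fp last time.
    After a hit, that recorded length is tried first and accepted if it again
    yields a known fingerprint, so runs of contiguous duplicates skip the scan.
    Returns a list of (fingerprint, is_duplicate) records.
    """
    chunks, pos, prev_fp = [], 0, None
    while pos < len(data):
        end = fp = None
        if prev_fp in next_size:                           # locality: predict the boundary
            guess = min(pos + next_size[prev_fp], len(data))
            guess_fp = hashlib.sha1(data[pos:guess]).hexdigest()
            if guess_fp in seen:                           # accept only a confirmed duplicate
                end, fp = guess, guess_fp
        if end is None:                                    # miss: fall back to a full scan
            end = cdc_boundary(data, pos)
            fp = hashlib.sha1(data[pos:end]).hexdigest()
        chunks.append((fp, fp in seen))
        seen.add(fp)
        if prev_fp is not None:
            next_size[prev_fp] = end - pos                 # remember what followed prev_fp
        prev_fp, pos = fp, end
    return chunks
```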
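For the C3DP placement abstract above, the sketch below is a generic longest-processing-time greedy assignment that only illustrates the makespan objective; it is not the paper's Dynamic Dependency List / Conflict-Based Search optimizer, and it ignores dependencies, floor placement, and collision constraints.

```python
import heapq

def greedy_chunk_assignment(chunk_times, num_robots):
    """Assign chunks to robots to keep the makespan low (LPT greedy heuristic).

    chunk_times: estimated print time per chunk.  Returns (makespan, assignment),
    where assignment[i] is the list of chunk indices given to robot i.
    """
    heap = [(0.0, r) for r in range(num_robots)]       # (current load, robot id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_robots)]
    order = sorted(range(len(chunk_times)), key=lambda i: -chunk_times[i])
    for i in order:                                    # largest chunks first
        load, r = heapq.heappop(heap)                  # least-loaded robot
        assignment[r].append(i)
        heapq.heappush(heap, (load + chunk_times[i], r))
    return max(load for load, _ in heap), assignment

# Five chunks split across two robots; the heavier robot finishes at time 7.0.
print(greedy_chunk_assignment([5.0, 3.0, 2.0, 2.0, 1.0], num_robots=2))
```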