Title: A unified approach to approximate partial, prize-collecting, and budgeted sweep cover problems
In a sweep cover problem, mobile sensors move around to collect information from positions of interest (PoIs) periodically and in a timely manner. A PoI is sweep-covered if it is visited at least once in every time period t. In this paper, we study approximation algorithms for three types of sweep cover problems. The partial sweep cover problem (PSC) aims to use the minimum number of mobile sensors to sweep-cover at least a given number of PoIs. The prize-collecting sweep cover problem (PCSC) aims to minimize the cost of mobile sensors plus the penalties on those PoIs that are not sweep-covered. The budgeted sweep cover problem (BSC) aims to use a budgeted number N of mobile sensors to sweep-cover as many PoIs as possible. We propose a unified approach which yields approximation algorithms for PSC and PCSC with approximation ratio at most 8, and a bicriteria (4, 1/2)-approximation algorithm for BSC (that is, no more than 4N mobile sensors are used to sweep-cover at least opt/2 PoIs, where opt is the number of PoIs that can be sweep-covered by an optimal solution). Furthermore, our results for PSC and BSC can be extended to their weighted versions, and our algorithm for PCSC answers a question on PCSC proposed in Liang et al. (Theor Comput Sci, 2022).
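As a quick illustration of the definition above, the following minimal Python sketch checks sweep coverage directly, under the common reading that every window of length t must contain a visit. The representation (a list of visit times per PoI over a finite horizon) is an assumption for illustration and is not part of the paper.

    # A minimal sketch, assuming visit logs are available.  A PoI counts as
    # sweep-covered over [0, horizon] if no gap between consecutive visits
    # (with time 0 and the horizon as endpoints) exceeds the period t.
    def is_sweep_covered(visit_times, t, horizon):
        prev = 0.0
        for v in sorted(visit_times) + [horizon]:
            if v - prev > t:
                return False      # a length-t window with no visit exists
            prev = v
        return True

    def num_sweep_covered(visits_by_poi, t, horizon):
        # visits_by_poi: dict mapping each PoI to its list of visit times
        return sum(is_sweep_covered(v, t, horizon)
                   for v in visits_by_poi.values())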
Award ID(s):
1907472
PAR ID:
10467152
Author(s) / Creator(s):
Publisher / Repository:
Springer
Date Published:
Journal Name:
Optimization Letters
Volume:
18
ISSN:
1862-4472
Subject(s) / Keyword(s):
Sweep cover · Partial cover · Prize-collecting cover · Budgeted cover · Approximation algorithm · Bicriteria approximation algorithm
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Evans, Robin; Shpitser, Ilya (Ed.)
    We consider the problem of maximizing submodular functions under submodular constraints by formulating the problem in two ways: SCSKC and DiffC. Given two submodular functions $f$ and $g$ where $f$ is monotone, the objective of the SCSKC problem is to find a set $S$ of size at most $k$ that maximizes $f(S)$ under the constraint that $g(S) \leq \theta$, for a given value of $\theta$. The DiffC problem focuses on finding a set $S$ of size at most $k$ such that $h(S) = f(S) - g(S)$ is maximized. It is known that these problems are highly inapproximable and do not admit any constant-factor multiplicative approximation algorithms unless NP is easy. Known approximation algorithms involve data-dependent approximation factors that are not efficiently computable. We initiate a study of the design of approximation algorithms where the approximation factors are efficiently computable. For the SCSKC problem, we prove that the greedy algorithm produces a solution whose value is at least $(1 - 1/e)f(OPT) - A$, where $A$ is a data-dependent additive error. For the DiffC problem, we design an algorithm that uses the SCSKC greedy algorithm as a subroutine. This algorithm produces a solution whose value is at least $(1 - 1/e)h(OPT) - B$, where $B$ is also a data-dependent additive error. A salient feature of our approach is that the additive error terms can be computed efficiently, thus enabling us to ascertain the quality of the solutions produced.
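    To make the greedy subroutine concrete, here is a minimal sketch of a marginal-gain greedy for the SCSKC formulation, with f and g supplied as Python callables. The interface is an assumption for illustration, and the sketch omits the data-dependent additive-error computation that is the paper's main contribution.

        # Hedged sketch of a greedy for SCSKC-style problems: maximize monotone
        # submodular f(S) subject to |S| <= k and g(S) <= theta.  Names and
        # signatures are illustrative, not the paper's code.
        def greedy_scskc(ground_set, f, g, k, theta):
            S = set()
            while len(S) < k:
                best, best_gain = None, 0.0
                for e in ground_set - S:
                    if g(S | {e}) > theta:     # submodular knapsack constraint
                        continue
                    gain = f(S | {e}) - f(S)   # marginal gain of adding e
                    if gain > best_gain:
                        best, best_gain = e, gain
                if best is None:               # no feasible improving element
                    break
                S.add(best)
            return S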
  2. We study the problem of covering barrier points by mobile sensors. Each sensor is represented by a point in the plane with the same covering range r, so that any point within distance r of the sensor can be covered by it. Given a set B of n points (called "barrier points") and a set S of m points (representing the "sensors") in the plane, the problem is to move the sensors so that each barrier point is covered by at least one sensor and the maximum movement over all sensors is minimized. The problem is NP-hard. In this paper, we consider two line-constrained variants of the problem and present efficient algorithms that improve the previous work. In the first problem, all sensors are given on a line ℓ and are required to move on ℓ only, while the barrier points can be anywhere in the plane; we give efficient algorithms for both this problem and its weighted case, where each sensor has a weight. In the second problem, all barrier points are on ℓ while the sensors are in the plane but are required to move onto ℓ to cover all barrier points; we also solve the weighted case of this problem.
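    Although the abstract does not spell out the algorithms, the first variant has a natural decision version that conveys the geometry: with sensors on the x-axis and covering range r, a barrier point (px, py) is covered exactly when a sensor's final position lies in [px - d, px + d] with d = sqrt(r^2 - py^2). The sketch below is a simplified greedy feasibility check for a movement bound lam, suitable for use inside a search over lam; it assumes an order-preserving assignment suffices, and it is not the paper's algorithm.

        import math

        # Hedged sketch: can sensors on the x-axis, each moving at most lam,
        # cover every barrier point?  Each point is turned into the interval
        # of sensor positions that cover it; intervals are then hit greedily
        # from left to right (a simplification of the real problem).
        def feasible(sensors, points, r, lam):
            intervals = []
            for px, py in points:
                if abs(py) > r:
                    return False              # point can never be covered
                d = math.sqrt(r * r - py * py)
                intervals.append((px - d, px + d))
            intervals.sort(key=lambda iv: iv[1])  # by right endpoint
            sensors = sorted(sensors)
            i = 0                                 # next unused sensor
            pos = -math.inf                       # rightmost placed sensor
            for lo, hi in intervals:
                if pos >= lo:                     # already hit
                    continue
                while i < len(sensors) and sensors[i] + lam < lo:
                    i += 1                        # cannot reach the interval
                if i == len(sensors) or sensors[i] - lam > hi:
                    return False
                pos = min(hi, sensors[i] + lam)   # go as far right as allowed
                i += 1
            return True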
  3. Given an element set E of order n, a collection 𝒮 of subsets of E, a cost c_S on each set S ∈ 𝒮, a covering requirement r_e for each element e ∈ E, and an integer k, the goal of the minimum partial set multicover problem (MinPSMC) is to find a subcollection F ⊆ 𝒮 fully covering at least k elements such that the cost of F is as small as possible, where element e is fully covered by F if it belongs to at least r_e sets of F. This problem generalizes the minimum k-union problem (MinkU) and is believed not to admit a subpolynomial approximation ratio. In this paper, we present an approximation algorithm for MinPSMC whose ratio is expressed in terms of f, the maximum size of a set in 𝒮. Under a further restriction on the instance, we present a bicriteria algorithm fully covering at least (1 − ε)k elements, where ε is a fixed number. These results are obtained by studying the minimum density subcollection problem with (or without) a cardinality constraint, which might be of interest by itself.
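    To illustrate the density viewpoint mentioned above, here is a cost-per-progress greedy sketch for partial set multicover: repeatedly buy the set whose cost divided by its progress toward the residual covering requirements is smallest, until k elements are fully covered. It is illustrative only (assuming all requirements are at least 1) and carries none of the paper's guarantees.

        # Hedged sketch, not the paper's algorithm.  sets: dict name ->
        # frozenset of elements; cost: dict name -> float; req: element -> r_e.
        def greedy_psmc(sets, cost, req, k):
            remaining = dict(req)          # residual requirement per element
            chosen, fully_covered = [], 0
            while fully_covered < k and len(chosen) < len(sets):
                best, best_density = None, float("inf")
                for name, members in sets.items():
                    if name in chosen:
                        continue
                    progress = sum(1 for e in members
                                   if remaining.get(e, 0) > 0)
                    if progress == 0:
                        continue
                    if cost[name] / progress < best_density:
                        best, best_density = name, cost[name] / progress
                if best is None:           # no remaining set makes progress
                    break
                chosen.append(best)
                for e in sets[best]:
                    if remaining.get(e, 0) > 0:
                        remaining[e] -= 1
                        if remaining[e] == 0:
                            fully_covered += 1
            return chosen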
  4. We describe a paradigm for designing parallel algorithms via approximation, and illustrate it on the b-edgecover problem. A b-edgecover of minimum weight in a graph is a subset C of its edges such that at least a specified number b(v) of edges in C is incident on each vertex v, and the sum of the edge weights in C is minimum. The Greedy algorithm and a variant, the LSE algorithm, provide 3/2-approximation guarantees in the worst case for this problem, but these algorithms have limited parallelism. Hence we design two new 2-approximation algorithms with greater concurrency. The MCE algorithm reduces the computation of a b-edgecover to that of finding a b'-matching, by exploiting the relationship between these subgraphs in an approximation context. The LSE-NW algorithm is derived from the LSE algorithm by using static edge weights rather than dynamically computed effective edge weights. This relaxation gives LSE-NW a worse approximation guarantee but makes it more amenable to parallelization. We prove that both the MCE and LSE-NW algorithms compute the same b-edgecover, with at most twice the weight of the minimum-weight edge cover. In practice, the 2-approximation and 3/2-approximation algorithms compute edge covers of weight within 10% of the optimal. We implement three of the approximation algorithms, MCE, LSE, and LSE-NW, on shared-memory multicore machines, including an Intel Xeon and an IBM Power8 machine with 8 TB memory. The MCE algorithm is the fastest of these by an order of magnitude or more. It computes an edge cover in a graph with billions of edges in 20 seconds using two hundred threads on the IBM Power8. We also show that the parallel depth and work can be bounded for the Suitor and b-Suitor algorithms when edge weights are random.
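    For intuition about the effective weights that distinguish LSE from LSE-NW, here is a minimal serial sketch of a dynamic-effective-weight greedy for b-edgecover, in the spirit of the Greedy/LSE family described above; it is not the parallel MCE or LSE-NW implementation, and the graph representation is an assumption.

        import heapq

        # Hedged sketch: greedily pick the edge with the smallest effective
        # weight w(e) / (number of endpoints of e still needing cover edges),
        # using a lazily updated heap.  b maps every vertex to its requirement.
        def greedy_b_edgecover(edges, b):
            need = dict(b)                 # residual requirement per vertex
            heap = []
            for idx, (u, v, w) in enumerate(edges):
                heapq.heappush(heap, (w / 2.0, idx))  # both ends uncovered
            cover = []
            while heap and any(n > 0 for n in need.values()):
                _, idx = heapq.heappop(heap)
                u, v, w = edges[idx]
                helps = (need[u] > 0) + (need[v] > 0)
                if helps == 0:
                    continue               # edge no longer useful
                true_eff = w / helps
                if heap and true_eff > heap[0][0]:
                    heapq.heappush(heap, (true_eff, idx))  # stale priority
                    continue
                cover.append((u, v, w))
                for x in (u, v):
                    if need[x] > 0:
                        need[x] -= 1
            return cover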
  5. We describe two new 3/2-approximation algorithms and a new 2-approximation algorithm for the minimum weight edge cover problem in graphs. We show that one of the 3/2-approximation algorithms, the Dual cover algorithm, computes the lowest-weight edge cover relative to previously known algorithms as well as the new algorithms reported here. The Dual cover algorithm can also be implemented to be faster than the other 3/2-approximation algorithms on serial computers. Many of these algorithms can be extended to solve the b-Edge cover problem as well. We show the relation of these algorithms to the K-Nearest Neighbor graph construction in semi-supervised learning and other applications.