Search for: All records

Creators/Authors contains: "Ahmed, R."

  1. Free, publicly-accessible full text available October 1, 2023
  2. Free, publicly-accessible full text available October 1, 2023
  3. Free, publicly-accessible full text available April 1, 2023
  4. This paper introduces a generalized spatial regionalization problem, namely, PRUC (P-Regions with User-defined Constraint), that partitions spatial areas into homogeneous regions. PRUC accounts for user-defined constraints imposed over aggregate region properties. We show that PRUC is an NP-hard problem. To solve PRUC, we introduce GSLO (Global Search with Local Optimization), a parallel stochastic regionalization algorithm. GSLO is composed of two phases: (1) Global Search, which initially partitions areas into regions that satisfy a user-defined constraint, and (2) Local Optimization, which further improves the quality of the partitioning with respect to intra-region similarity. We conduct an extensive experimental study using real datasets to evaluate the performance of GSLO. Experimental results show that GSLO is up to 100× faster than the state-of-the-art algorithms. GSLO provides partitioning that is up to 6× better with respect to intra-region similarity. Furthermore, GSLO is able to handle 4× larger datasets than the state-of-the-art algorithms. (A toy sketch of this two-phase structure appears after this list.)
  5. The proliferation of GPS-enabled devices has led to the development of numerous location-based services. These services need to process massive amounts of streamed spatial data in real time. The current scale of spatial data cannot be handled using centralized systems. This has led to the development of distributed spatial streaming systems. Existing systems use static spatial partitioning to distribute the workload. In contrast, the real-time streamed spatial data follows non-uniform spatial distributions that are continuously changing over time. Distributed spatial streaming systems need to react to the changes in the distribution of spatial data and queries. This article introduces SWARM, a lightweight adaptivity protocol that continuously monitors the data and query workloads across the distributed processes of the spatial data streaming system and redistributes and rebalances the workloads as soon as performance bottlenecks are detected. SWARM is able to handle multiple query-execution and data-persistence models. A distributed streaming system can directly use SWARM to adaptively rebalance the system's workload among its machines with minimal changes to the original code of the underlying spatial application. Extensive experimental evaluation using real and synthetic datasets illustrates that, on average, SWARM achieves a 2× improvement in throughput over a static grid partitioning that is determined based on observing a limited history of the data and query workloads. Moreover, SWARM reduces execution latency by 4× on average compared with the other technique. (A toy sketch of the monitor-and-migrate idea appears after this list.)
  6. A spanner of a graph G is a subgraph H that approximately preserves shortest path distances in G. Spanners are commonly applied to compress computation on metric spaces corresponding to weighted input graphs. Classic spanner constructions can seamlessly handle edge weights, so long as error is measured multiplicatively. In this work, we investigate whether one can similarly extend constructions of spanners with purely additive error to weighted graphs. These extensions are not immediate, due to a key lemma about the size of shortest path neighborhoods that fails for weighted graphs. Despite this, we recover a suitable amortized version, which lets us prove direct extensions of classic +2 and +4 unweighted spanners (both all-pairs and pairwise) to +2W and +4W weighted spanners, where W is the maximum edge weight. Specifically, we show that a weighted graph G contains all-pairs (pairwise) +2W and +4W weighted spanners of size O(n^{3/2}) and O(n^{7/5}) (O(np^{1/3}) and O(np^{2/7})), respectively. For a technical reason, the +6 unweighted spanner becomes a +8W weighted spanner; closing this error gap is an interesting remaining open problem. That is, we show that G contains all-pairs (pairwise) +8W weighted spanners of size O(n^{4/3}) (O(np^{1/4})). (A sketch of the classic unweighted +2 construction appears after this list.)
  7. We study the multi-level Steiner tree problem: a generalization of the Steiner tree problem in graphs where terminals T require varying priority, level, or quality of service. In this problem, we seek to find a minimum-cost tree containing edges of varying rates such that any two terminals u, v with priorities P(u), P(v) are connected using edges of rate min{P(u), P(v)} or better. The case where edge costs are proportional to their rate is approximable to within a constant factor of the optimal solution. For the more general case of non-proportional costs, this problem is hard to approximate within a ratio of c log log n, where n is the number of vertices in the graph. A simple greedy algorithm by Charikar et al., however, provides a min{2(ln|T| + 1), lρ}-approximation in this setting, where ρ is an approximation ratio for a heuristic solver for the Steiner tree problem and l is the number of priorities or levels (Byrka et al. give a Steiner tree algorithm with ρ ≈ 1.39, for example). In this paper, we describe a natural generalization to the multi-level case of the classical (single-level) Steiner tree approximation algorithm based on Kruskal's minimum spanning tree algorithm. We prove that this algorithm achieves an approximation ratio at least as good as Charikar et al., and experimentally performs better with respect to the optimum solution. We develop an integer linear programming formulation to compute an exact solution for the multi-level Steiner tree problem with non-proportional edge costs and use it to evaluate the performance of our algorithm on both random graphs and multi-level instances derived from SteinLib. (A sketch of the classical single-level approximation appears after this list.)
  8. Readability criteria, such as distance or neighborhood preservation, are often used to optimize node-link representations of graphs to enable the comprehension of the underlying data. With few exceptions, graph drawing algorithms typically optimize one such criterion, usually at the expense of others. We propose a layout approach, Graph Drawing via Gradient Descent, (GD)^2, that can handle multiple readability criteria. (GD)^2 can optimize any criterion that can be described by a smooth function. If the criterion cannot be captured by a smooth function, a non-smooth function for the criterion is combined with another smooth function, or auto-differentiation tools are used for the optimization. Our approach is flexible and can be used to optimize several criteria that have already been considered earlier (e.g., obtaining ideal edge lengths, stress, neighborhood preservation) as well as other criteria that have not yet been explicitly optimized in this fashion (e.g., vertex resolution, angular resolution, aspect ratio). We provide quantitative and qualitative evidence of the effectiveness of (GD)^2 with experimental data and a functional prototype: http://hdc.cs.arizona.edu/~mwli/graph-drawing/ (A minimal stress-only gradient-descent example appears after this list.)
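
Item 4 describes GSLO only at a high level, so the following is a toy Python sketch of the two-phase pattern it names: grow contiguous regions until a user-defined aggregate constraint holds, then locally move boundary areas to improve intra-region similarity. The function names, the constraint form (a minimum summed weight per region), and the hill-climbing move are illustrative assumptions, not the authors' algorithm; it also assumes positive weights and min_weight > 0.

```python
# Toy two-phase regionalization in the spirit of the GSLO description in item 4
# (not the authors' implementation). Phase 1 ("global search") grows contiguous
# regions until each satisfies a user-defined aggregate constraint; Phase 2
# ("local optimization") moves boundary areas between regions when that lowers
# total intra-region dissimilarity without breaking the constraint or contiguity.
import random
from collections import deque

def connected(members, adj):
    """True if the area set `members` induces a connected subgraph of `adj`."""
    members = set(members)
    if not members:
        return True
    start = next(iter(members))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in members and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == members

def heterogeneity(region, attr):
    """Sum of squared deviations from the region mean (lower = more homogeneous)."""
    vals = [attr[a] for a in region]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals)

def regionalize(areas, adj, attr, weight, min_weight, seed=0):
    rng = random.Random(seed)
    unassigned = set(areas)
    regions = []
    # Phase 1: grow regions area by area until the aggregate constraint holds.
    while unassigned:
        a = rng.choice(sorted(unassigned))
        unassigned.remove(a)
        region, total = {a}, weight[a]
        frontier = {n for n in adj[a] if n in unassigned}
        while total < min_weight and frontier:
            nxt = min(frontier, key=lambda n: abs(attr[n] - attr[a]))
            frontier.discard(nxt)
            unassigned.remove(nxt)
            region.add(nxt)
            total += weight[nxt]
            frontier |= {n for n in adj[nxt] if n in unassigned}
        regions.append(region)
    # Phase 2: hill-climb by moving a boundary area to an adjacent region
    # whenever the move reduces heterogeneity and keeps both regions valid.
    improved = True
    while improved:
        improved = False
        for i, src in enumerate(regions):
            for a in list(src):
                for j, dst in enumerate(regions):
                    if i == j or not any(n in dst for n in adj[a]):
                        continue
                    if sum(weight[x] for x in src) - weight[a] < min_weight:
                        continue
                    if not connected(src - {a}, adj):
                        continue
                    before = heterogeneity(src, attr) + heterogeneity(dst, attr)
                    after = heterogeneity(src - {a}, attr) + heterogeneity(dst | {a}, attr)
                    if after < before:
                        src.remove(a)
                        dst.add(a)
                        improved = True
                        break
    return regions
```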
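Item 5 characterizes SWARM as a protocol that monitors data and query load across distributed processes and rebalances when bottlenecks appear. Below is a single-process toy of that monitor-and-migrate loop; the class name, the epoch-based accounting, and the 1.5× imbalance threshold are assumptions for illustration, not SWARM's actual protocol. A real system would call `record` on the routing path and `rebalance` periodically from a coordinator.

```python
# Toy monitor-and-migrate loop in the spirit of the SWARM description in item 5
# (not the authors' protocol). Each worker owns a set of grid cells; load per
# cell is sampled during an epoch, and at the end of the epoch hot cells are
# migrated off overloaded workers to the least-loaded worker.
from collections import defaultdict

class Rebalancer:
    def __init__(self, workers, imbalance_threshold=1.5):
        self.workers = list(workers)
        self.owner = {}                     # cell -> worker
        self.load = defaultdict(float)      # cell -> load observed this epoch
        self.threshold = imbalance_threshold

    def assign(self, cell, worker):
        self.owner[cell] = worker

    def record(self, cell, cost=1.0):
        """Called for every tuple or query routed to `cell`."""
        self.load[cell] += cost

    def worker_loads(self):
        per_worker = {w: 0.0 for w in self.workers}
        for cell, w in self.owner.items():
            per_worker[w] += self.load[cell]
        return per_worker

    def rebalance(self):
        """End-of-epoch check: move the hottest cells off overloaded workers."""
        loads = self.worker_loads()
        avg = sum(loads.values()) / max(len(self.workers), 1)
        for w in sorted(loads, key=loads.get, reverse=True):
            while avg > 0 and loads[w] > self.threshold * avg:
                cells = [c for c, o in self.owner.items() if o == w]
                if len(cells) <= 1:
                    break                   # nothing left to shed
                target = min(self.workers, key=lambda x: loads[x])
                if target == w:
                    break
                hot = max(cells, key=lambda c: self.load[c])
                self.owner[hot] = target    # migrate ownership of the hot cell
                loads[w] -= self.load[hot]
                loads[target] += self.load[hot]
        self.load.clear()                   # start a fresh monitoring epoch
```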
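Item 6 concerns weighted additive spanners. As background only, here is a sketch of the classic unweighted +2 construction (keep all edges at low-degree vertices, then add BFS trees from a random hitting set of the high-degree vertices); it is not the paper's +2W weighted construction, which relies on the amortized neighborhood lemma the abstract mentions.

```python
# Sketch of the classic unweighted +2 additive spanner construction that the
# paper in item 6 extends to the weighted (+2W) setting; this is the standard
# unweighted construction, not the paper's weighted one. Assumes `adj` maps
# integer vertex ids to neighbor sets of an undirected, unweighted graph.
import math
import random
from collections import deque

def bfs_tree_edges(adj, root):
    """Edges of a BFS tree from `root` (preserves exact distances from root)."""
    parent, seen, queue = {}, {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                queue.append(v)
    return {(min(u, v), max(u, v)) for v, u in parent.items()}

def plus2_spanner(adj, seed=0):
    """Edge set H with dist_H(u, v) <= dist_G(u, v) + 2 for all u, v (w.h.p.);
    expected size O(n^{3/2} log n) with this simple random hitting set."""
    rng = random.Random(seed)
    n = len(adj)
    deg_cut = max(1, int(math.sqrt(n)))
    H = set()
    # 1. Keep every edge incident to a low-degree ("light") vertex.
    for u in adj:
        if len(adj[u]) < deg_cut:
            for v in adj[u]:
                H.add((min(u, v), max(u, v)))
    # 2. Sample a hitting set S: w.h.p. every high-degree vertex has a neighbor in S.
    p = min(1.0, 3 * math.log(n + 1) / deg_cut)
    S = [u for u in adj if rng.random() < p]
    # 3. Add a full BFS tree from each s in S. Any shortest path u..v that
    #    contains a high-degree vertex w can be rerouted u -> s -> v through a
    #    neighbor s of w in S, costing at most +2 over dist_G(u, v).
    for s in S:
        H |= bfs_tree_edges(adj, s)
    return H
```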
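Item 7 generalizes the classical (single-level) Kruskal/MST-based Steiner tree approximation. The sketch below shows that classical single-level 2-approximation (metric closure of the terminals, Kruskal MST of the closure, expansion back to shortest paths); the paper's multi-level generalization and ILP are not reproduced, and the networkx usage and connected-graph assumption are mine.

```python
# Sketch of the classical single-level 2-approximation for Steiner tree
# (metric closure of the terminals + Kruskal MST + path expansion), i.e. the
# algorithm that item 7 generalizes to multiple levels. Uses networkx and
# assumes a connected, positively weighted graph.
import itertools
import networkx as nx

def steiner_tree_2approx(G, terminals, weight="weight"):
    # 1. Metric closure restricted to the terminals: a complete graph whose
    #    edge weights are shortest-path distances in G.
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    path = dict(nx.all_pairs_dijkstra_path(G, weight=weight))
    closure = nx.Graph()
    closure.add_nodes_from(terminals)
    for u, v in itertools.combinations(terminals, 2):
        closure.add_edge(u, v, weight=dist[u][v])
    # 2. Kruskal MST of the metric closure.
    mst = nx.minimum_spanning_tree(closure, weight=weight, algorithm="kruskal")
    # 3. Expand each MST edge back into the corresponding shortest path in G;
    #    the union costs at most twice the optimal Steiner tree.
    T = nx.Graph()
    for u, v in mst.edges():
        p = path[u][v]
        T.add_weighted_edges_from((a, b, G[a][b][weight]) for a, b in zip(p, p[1:]))
    return T
```

One simple way to adapt this to levels (a heuristic, not necessarily the paper's scheme) is to run it top-down from the highest priority, treating edges already purchased at higher levels as free when solving lower levels.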
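Item 8 optimizes layouts by gradient descent on readability criteria expressed as differentiable losses. The sketch below does this for a single criterion, stress, using PyTorch auto-differentiation; it is a minimal illustration of the criterion-as-loss setup, not the (GD)^2 prototype, and it assumes a connected graph so all pairwise graph distances exist.

```python
# Minimal "readability criterion as differentiable loss" example in the spirit
# of item 8: lay out a graph by gradient descent on stress alone, using PyTorch
# auto-differentiation. Not the (GD)^2 prototype.
import itertools
import networkx as nx
import torch

def layout_by_stress(G, iters=500, lr=0.05, seed=0):
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    d = dict(nx.all_pairs_shortest_path_length(G))    # target graph distances
    pairs = list(itertools.combinations(nodes, 2))
    I = torch.tensor([idx[u] for u, _ in pairs])
    J = torch.tensor([idx[v] for _, v in pairs])
    D = torch.tensor([float(d[u][v]) for u, v in pairs])
    torch.manual_seed(seed)
    X = torch.randn(len(nodes), 2, requires_grad=True)   # 2-D node positions
    opt = torch.optim.Adam([X], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        diff = X[I] - X[J]
        dist = torch.sqrt((diff ** 2).sum(dim=1) + 1e-9)
        # Stress: sum over pairs of (|x_i - x_j| - d_ij)^2 / d_ij^2.
        loss = (((dist - D) ** 2) / D ** 2).sum()
        loss.backward()
        opt.step()
    return {v: X[idx[v]].detach().tolist() for v in nodes}
```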