Agent-based crawlers are commonly used in network maintenance and information gathering. To avoid disturbing the main functionality of the system, whether acting at a node or in transit, they need to operate online, perform each single operation fast, and use small memory. They should also preferably be deterministic, as crawling agents have limited capabilities for generating a large number of truly random bits. We consider a system in which an agent receives an update, typically an insertion or a deletion of some piece of information, upon visiting a node. On request, the agent needs to output the hot information, i.e., the information whose net occurrence is above a certain frequency threshold. The desired time and memory complexity of such an agent is poly-logarithmic in the number of visited nodes and inversely proportional to the frequency threshold. Ours is the first such agent with a rigorous analysis and a complementary, almost-matching lower bound.
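For reference only, the following is a minimal Misra-Gries-style sketch of a deterministic, small-memory hot-item tracker in the insertion-only setting. It uses O(1/phi) counters, but it does not handle deletions and is not the agent analyzed in the paper; all names are illustrative.

```python
# Hypothetical sketch: deterministic small-memory "hot item" tracking
# (Misra-Gries style), insertion-only. Not the paper's algorithm.

class HotItemAgent:
    def __init__(self, phi: float):
        """phi: frequency threshold, e.g. 0.01 for items above 1% of updates."""
        self.capacity = int(1 / phi)   # O(1/phi) counters
        self.counters = {}             # item -> approximate count
        self.n = 0                     # number of updates seen

    def insert(self, item) -> None:
        self.n += 1
        if item in self.counters:
            self.counters[item] += 1
        elif len(self.counters) < self.capacity:
            self.counters[item] = 1
        else:
            # Decrement all counters; drop those that reach zero.
            for key in list(self.counters):
                self.counters[key] -= 1
                if self.counters[key] == 0:
                    del self.counters[key]

    def hot_items(self):
        """Candidate items whose true frequency may exceed phi * n."""
        return [item for item, c in self.counters.items() if c > 0]
```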
- Award ID(s): 2131538
- PAR ID: 10351872
- Date Published:
- Journal Name: Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence. Main Track.
- Page Range / eLocation ID: 363 to 369
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We consider the optimal multi-agent persistent monitoring problem, defined by a team of cooperating agents visiting a set of nodes (targets) on a graph with the objective of minimizing a measure of overall node state uncertainty. The solution to this problem involves agent trajectories defined both by the sequence of nodes to be visited by each agent and by the amount of time spent at each node. We propose a class of distributed threshold-based parametric controllers in which agent transitions from one node to the next are triggered by thresholds on the node uncertainty. The resulting behavior of the agent-target system is described by a hybrid dynamical system. This enables the use of Infinitesimal Perturbation Analysis (IPA) to determine optimal threshold parameters online through gradient descent, and thus to obtain optimal controllers within this family of threshold-based policies. Simulations are included to illustrate our results and compare them to optimal solutions derived through dynamic programming.
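As a rough illustration of how such a threshold-based policy might look for a single agent, here is a hypothetical Python sketch: node uncertainties grow while unvisited and shrink while visited, and the agent departs a node once its uncertainty falls below a threshold. The growth/decay model, the next-node rule, and all names are assumptions; the paper's hybrid dynamics and IPA-based threshold tuning are not reproduced here.

```python
# Illustrative sketch of a threshold-based visiting policy for one agent.

def simulate(neighbors, A, B, theta, start, steps, dt=0.1):
    """neighbors: node -> list of adjacent nodes; A/B: growth/decay rates;
    theta: per-node departure thresholds; start: initial node."""
    R = {i: 1.0 for i in A}          # node uncertainties
    pos = start                      # agent's current node
    for _ in range(steps):
        for i in R:
            if i == pos:
                R[i] = max(0.0, R[i] - (B[i] - A[i]) * dt)  # net decrease while visited
            else:
                R[i] += A[i] * dt                            # growth while unvisited
        if R[pos] <= theta[pos]:
            # Threshold reached: move to the neighbor with the largest uncertainty.
            pos = max(neighbors[pos], key=lambda j: R[j])
    return R
```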
Inertia from the rotating masses of generators in power systems influences the instantaneous frequency change when an imbalance between electrical and mechanical power occurs. Renewable energy sources (RES), such as solar and wind power, are connected to the grid via electronic converters. RES connected through converters reduce the system's inertia and make it time-varying. This new setting challenges the ability of current control schemes to maintain frequency stability. Proposing adequate controllers for this new paradigm is key for the performance and stability of future power grids. The contribution of this paper is a framework to learn sparse time-invariant frequency controllers in a power system network with a time-varying evolution of rotational inertia. We model the power dynamics as a switched-affine hybrid system to account for different modes corresponding to different inertia coefficients. We design a controller that uses the system's states as features, i.e., as input; in other words, the control is proportional to the angles and frequencies. We include virtual inertia in the controllers to ensure stability. One of our findings is that it is possible to restrict communication between the nodes by reducing the number of features in the controller (from 22 to 10 in our case study) without degrading performance or stability. Furthermore, once communication between nodes has reached a threshold, increasing it beyond this threshold does not improve performance or stability. We also find a correlation between the optimal feature selection in sparse controllers and the topology of the network.
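For intuition only, the sketch below simulates one Euler step of linearized swing dynamics with a static state-feedback law acting on angles and frequencies; the inertia vector can be swapped between calls to mimic the switched-affine modes. All matrices, rates, and names are illustrative assumptions, and the sparse gain here would be hand-crafted rather than learned as in the paper.

```python
import numpy as np

# Illustrative sketch: linearized swing dynamics with sparse state feedback.
# M (inertia) can be switched between modes across steps.

def step(theta, omega, M, D, P, K, B, dt=0.01):
    """One Euler step of M*domega = P - D*omega - B@theta + u, with u = -K@x."""
    x = np.concatenate([theta, omega])            # controller features: angles and frequencies
    u = -K @ x                                    # sparse feedback gain (zero entries = no communication)
    domega = (P - D * omega - B @ theta + u) / M  # pass a different M to emulate a mode switch
    return theta + dt * omega, omega + dt * domega
```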
Processing-in-memory (PIM), where compute is moved closer to the memory or the data, has been widely explored to accelerate emerging workloads. Recently, different PIM-based systems have been announced by memory vendors to minimize data movement and improve both performance and energy efficiency. One critical component of PIM is the large amount of compute parallelism provided across many PIM "nodes", i.e., the compute units near the memory. In this work, we provide an extensive evaluation and analysis of real PIM systems based on UPMEM PIM. We show that while PIM has benefits, there are also scalability challenges and limitations as the number of PIM nodes increases. In particular, we show how collective communications that are commonly found in many kernels/workloads can be problematic for PIM systems. To evaluate the impact of collective communication in PIM architectures, we provide an in-depth analysis of two workloads on the UPMEM PIM system that use representative collective communication patterns: AllReduce and All-to-All communication. Specifically, we evaluate 1) embedding tables, commonly used in recommendation systems, which require AllReduce, and 2) the Number Theoretic Transform (NTT) kernel, a critical component of Fully Homomorphic Encryption (FHE), which requires All-to-All communication. We analyze the performance benefits of these workloads and show how they can be efficiently mapped to the PIM architecture through alternative data partitioning. However, since each PIM compute unit can access only its local memory, any communication between PIM nodes (or access to remote data) must be routed through the host CPU, severely hampering application performance. To increase the scalability (and applicability) of PIM for future workloads, we make the case that future PIM architectures need efficient communication or interconnection networks between PIM nodes, which requires both hardware and software support.
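To make the communication bottleneck concrete, here is a hypothetical sketch of a host-mediated AllReduce: because each PIM unit sees only its own memory, partial results must be gathered to the host, reduced there, and scattered back. The function and names are illustrative and do not correspond to the UPMEM SDK.

```python
import numpy as np

# Illustrative sketch: AllReduce staged through the host CPU, as forced by
# PIM units that can only access local memory.

def host_mediated_allreduce(local_partials):
    """local_partials: list of per-PIM-node arrays (e.g. embedding-gradient slices)."""
    gathered = [p.copy() for p in local_partials]    # host pulls each node's partial result
    reduced = np.sum(gathered, axis=0)               # reduction happens on the host CPU
    return [reduced.copy() for _ in local_partials]  # host pushes the result back to every node
```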
In the last decade, Tor has proved to be a very successful and widely popular system for protecting users' anonymity. However, Tor remains a practical system with a variety of limitations, some of which were indeed exploited in the recent past. In particular, Tor's security relies on the fact that a substantial number of its nodes do not misbehave. In this work we introduce the concept of honey onions, a framework to detect misbehaving Tor relays with HSDir capability. This allows us to obtain lower bounds on misbehavior among relays. We propose algorithms to both estimate the number of snooping HSDirs and identify the most likely snoopers. Our experimental results indicate that during the period of the study (72 days) at least 110 such nodes were snooping information about the hidden services they host. We reveal that more than half of them were hosted on cloud infrastructure and delayed the use of the learned information to prevent easy traceback.
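One natural way to identify likely snoopers from honey onion visits is a hitting-set style inference: each unsolicited visit implicates the set of HSDir relays that hosted that honey onion's descriptor, and a small set of relays covering all visited honey onions gives a conservative candidate set. The greedy heuristic below is an illustrative sketch under that framing, not necessarily the estimation procedure used in the paper.

```python
# Illustrative sketch: greedy hitting set over the HSDir sets implicated by
# visited honey onions.

def greedy_snoopers(visited_honions):
    """visited_honions: list of sets, each holding the HSDirs that hosted one visited honey onion."""
    uncovered = [set(s) for s in visited_honions]
    snoopers = set()
    while uncovered:
        # Pick the relay implicated by the most still-unexplained visits.
        candidates = set().union(*uncovered)
        best = max(candidates, key=lambda r: sum(r in s for s in uncovered))
        snoopers.add(best)
        uncovered = [s for s in uncovered if best not in s]
    return snoopers
```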
Neighborhood effects play an important role in a family's evacuation decision-making. Owing to peer influence, evacuating neighbors can motivate a family to evacuate. Paradoxically, if many neighbors evacuate, the likelihood that an individual or family decides to evacuate decreases, for fear of crime and looting. Such behavior cannot be captured using standard models of contagion spread on networks, e.g., threshold, independent cascade, and linear threshold models. Here, we propose a new threshold-based graph dynamical system model, 2mode-threshold, which captures this dichotomy. We study the dynamical properties of 2mode-threshold theoretically in different networks and find significant differences from a standard threshold model. We build and characterize small-world networks of Virginia Beach, VA, where nodes are geolocated families (households) in the city and edges are interactions between pairs of families. We demonstrate the utility of our behavioral model through agent-based simulations on these small-world networks. We use it to understand evacuation rates in this region and to evaluate the effects of modeling parameters on evacuation decision dynamics. Specifically, we quantify the effects of (1) network generation parameters, (2) stochasticity in the social network generation process, (3) model type (2mode-threshold vs. standard threshold models), (4) 2mode-threshold model parameters, and (5) initial conditions on computed evacuation rates and their variability. An illustrative example result shows that omitting the looting effect can overpredict evacuation rates by as much as 50%.
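As an illustration of the dichotomy that 2mode-threshold is meant to capture, the sketch below applies a synchronous two-threshold update: a family evacuates only when the fraction of evacuated neighbors lies between a lower and an upper threshold. The fraction-based rule and synchronous updates are assumptions for illustration; the paper's exact model definition may differ.

```python
# Illustrative sketch of a two-threshold ("evacuate only in the middle band")
# update rule on a social contact network.

def step(state, neighbors, t_low, t_high):
    """state: dict node -> 0/1 (1 = evacuated); neighbors: dict node -> list of nodes."""
    new_state = dict(state)
    for v, nbrs in neighbors.items():
        if state[v] == 0 and nbrs:
            frac = sum(state[u] for u in nbrs) / len(nbrs)
            if t_low <= frac < t_high:   # enough peer influence, but not enough to fear looting
                new_state[v] = 1
    return new_state
```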