This article studies resilient strong structural controllability (SSC) in networks with misbehaving agents and edges. The author considers various misbehavior models and identifies sets of input agents that offer resilience against such disruptions. The approach leverages a graph-based characterization of SSC built on the concept of zero forcing in graphs. Specifically, the article examines three misbehavior models that disrupt the zero forcing process and thereby compromise the network's SSC. It then characterizes a leader set that guarantees SSC despite misbehaving nodes and edges, using the concept of leaky forcing, a variation of zero forcing in graphs. The main finding is that resilience against one misbehavior model inherently provides resilience against the others, which simplifies the design process. Furthermore, the author explores combining multiple networks by augmenting edges between their nodes so that the combined network is strong structurally controllable with a smaller leader set than the individual networks require, and analyzes the tradeoff between the number of added edges and the leader set size in the resulting combined graph. Finally, the article discusses computational aspects and provides numerical evaluations demonstrating the effectiveness of the approach.
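To make the leaky forcing idea above concrete, here is a minimal sketch, assuming an undirected networkx graph: it runs the zero forcing process while treating a chosen set of "leaky" nodes as unable to perform forces, and then brute-forces over all placements of k leaks to check whether a leader set still forces the entire graph. The function names and the exhaustive resilience check are illustrative assumptions, not the paper's actual characterization.

```python
import itertools
import networkx as nx

def forcing_closure(G, leaders, leaky=frozenset()):
    """Zero forcing closure in which nodes in `leaky` can be forced but
    never perform a force themselves (a leak-type misbehavior)."""
    black = set(leaders)
    changed = True
    while changed:
        changed = False
        for u in list(black):
            if u in leaky:
                continue  # a leaky node cannot force its neighbors
            white = [w for w in G.neighbors(u) if w not in black]
            if len(white) == 1:  # standard forcing rule
                black.add(white[0])
                changed = True
    return black

def is_k_leak_resilient(G, leaders, k=1):
    """Check that `leaders` forces the entire graph no matter which k
    nodes turn out to be leaky (brute force over all leak placements)."""
    nodes = set(G.nodes)
    return all(forcing_closure(G, leaders, frozenset(leaks)) == nodes
               for leaks in itertools.combinations(nodes, k))

# Example on a 5-node path: one end leader forces everything when there
# are no leaks, but a single leak in the middle can stop the chain.
P = nx.path_graph(5)
print(forcing_closure(P, {0}) == set(P.nodes))  # True
print(is_k_leak_resilient(P, {0}, k=1))         # False: one leak blocks the chain
print(is_k_leak_resilient(P, {0, 4}, k=1))      # True with leaders at both ends
```

In this toy example, a single leader on the path is not resilient to even one leak, whereas leaders at both ends are, reflecting the intuition that resilience requires redundant forcing routes.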
Controllability Backbone in Networks
This paper studies the controllability backbone problem in dynamical networks defined over graphs. The main idea of the controllability backbone is to identify a small subset of edges in a given network such that any subnetwork containing those edges has at least the same network controllability as the original network under the same set of input (leader) vertices. In our work, we consider strong structural controllability (SSC), which is a useful but computationally challenging notion. Thus, we utilize two lower bounds on the network's SSC, based on the zero forcing notion and on graph distances. We provide algorithms that compute controllability backbones while preserving these lower bounds. We thoroughly analyze the proposed algorithms and compute the number of edges in the resulting controllability backbones. Finally, we numerically evaluate and compare our methods on random graphs.
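For readers unfamiliar with the zero forcing notion behind the first bound, the following is a minimal sketch, assuming an undirected networkx graph and a given leader set: a colored node with exactly one uncolored neighbor colors that neighbor, and the rule is applied until nothing changes. The size of the final colored set lower-bounds the dimension of the strong structurally controllable subspace for that leader set; the paper's backbone algorithms and the distance-based bound are not reproduced here.

```python
import networkx as nx

def zero_forcing_closure(G, leaders):
    """Standard zero forcing: a colored node with exactly one uncolored
    neighbor colors that neighbor; repeat until no rule applies."""
    colored = set(leaders)
    changed = True
    while changed:
        changed = False
        for u in list(colored):
            uncolored = [w for w in G.neighbors(u) if w not in colored]
            if len(uncolored) == 1:
                colored.add(uncolored[0])
                changed = True
    return colored

# On a path graph, a single leader at one end forces every node, so the
# zero-forcing-based bound certifies controllability of the whole network.
P = nx.path_graph(5)
print(len(zero_forcing_closure(P, {0})))  # 5
```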
- Award ID(s):
- 2325416
- PAR ID:
- 10529462
- Publisher / Repository:
- IEEE
- Date Published:
- ISBN:
- 979-8-3503-0124-3
- Page Range / eLocation ID:
- 2439 to 2444
- Subject(s) / Keyword(s):
- Strong structural controllability, network control, zero forcing, graph distances.
- Format(s):
- Medium: X
- Location:
- Singapore, Singapore
- Sponsoring Org:
- National Science Foundation
More Like this
-
Neural network based computer vision systems are typically built on a backbone, a pretrained or randomly initialized feature extractor. Several years ago, the default option was an ImageNet-trained convolutional neural network. However, the recent past has seen the emergence of countless backbones pretrained using various algorithms and datasets. While this abundance of choice has led to performance increases for a range of systems, it is difficult for practitioners to make informed decisions about which backbone to choose. Battle of the Backbones (BoB) makes this choice easier by benchmarking a diverse suite of pretrained models, including vision-language models, those trained via self-supervised learning, and the Stable Diffusion backbone, across a diverse set of computer vision tasks ranging from classification to object detection to OOD generalization and more. Furthermore, BoB sheds light on promising directions for the research community to advance computer vision by illuminating strengths and weaknesses of existing approaches through a comprehensive analysis conducted on more than 1500 training runs. While vision transformers (ViTs) and self-supervised learning (SSL) are increasingly popular, we find that convolutional neural networks pretrained in a supervised fashion on large training sets still perform best on most tasks among the models we consider. Moreover, in apples-to-apples comparisons on the same architectures and similarly sized pretraining datasets, we find that SSL backbones are highly competitive, indicating that future works should perform SSL pretraining.
-
This paper introduces a novel framework for graph sparsification that preserves the essential learning attributes of original graphs, improving computational efficiency and reducing complexity in learning algorithms. We refer to these sparse graphs as “learning backbones.” Our approach leverages the zero-forcing (ZF) phenomenon, a dynamic process on graphs with applications in network control. The key idea is to generate a tree from the original graph that retains critical dynamical properties. By correlating these properties with learning attributes, we construct effective learning backbones. We evaluate the performance of our ZF-based backbones in graph classification tasks across eight datasets and six baseline models. The results demonstrate that our method outperforms existing techniques. Additionally, we explore extensions using node distance metrics to further enhance the framework's utility.
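As a rough illustration of how a zero forcing process can yield a sparse backbone, the sketch below greedily grows a forcing set (adding a highest-degree unforced vertex whenever the process stalls) and keeps only the edges along which forces occur; since every vertex forces at most once and is forced at most once, these forcing chains form a forest. This is only a plausible construction under assumed choices (the greedy max-degree heuristic, networkx, the function names), not the procedure from the paper summarized above.

```python
import networkx as nx

def zf_closure_with_forces(G, black):
    """Zero forcing closure that also records each force as an edge (u, v)."""
    black = set(black)
    forces = []
    changed = True
    while changed:
        changed = False
        for u in list(black):
            white = [w for w in G.neighbors(u) if w not in black]
            if len(white) == 1:
                v = white[0]
                black.add(v)
                forces.append((u, v))
                changed = True
    return black, forces

def zf_forest_backbone(G):
    """Greedily grow a forcing set, then keep only the forcing-chain edges.
    Each vertex forces and is forced at most once, so the result is a forest."""
    black = set()
    while True:
        closure, forces = zf_closure_with_forces(G, black)
        if len(closure) == len(G):
            T = nx.Graph()
            T.add_nodes_from(G)       # keep every vertex
            T.add_edges_from(forces)  # forcing-chain edges only
            return T
        # the process stalled: add an unforced vertex of maximum degree
        black.add(max(set(G.nodes) - closure, key=G.degree))

# Example: the 4x4 grid keeps only a forest of forcing-chain edges.
G = nx.grid_2d_graph(4, 4)
B = zf_forest_backbone(G)
print(G.number_of_edges(), B.number_of_edges())  # 24 vs. at most 15
```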