Title: Closed paths in graphs vs. voting theory
Features of graphs that hinder finding closed paths with particular properties, as represented by the Traveling Salesperson Problem (TSP), are identified for three classes of graphs. Removing these terms leads to a companion graph with identical closed path properties that is easier to analyze. A surprise is that these troubling graph factors are precisely what is needed to analyze certain voting methods, while the companion graph’s terms are what cause voting theory complexities as manifested by Arrow’s Theorem. This means that the seemingly separate goals of analyzing closed paths in graphs and analyzing voting methods are complementary: components of data terms that assist in one of these areas are the source of troubles in the other. Consequences for standard decision methods are in Sects. 2.5 and 3.7 and in the companion paper (Saari in Theory Decis 91(3):377–402, 2021). The emphasis here is on paths in graphs; incomplete graphs are handled similarly.
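As a minimal illustration of the kind of decomposition the abstract describes (not this paper's actual construction), the Python sketch below checks a standard fact: adding a vertex-based term a_i + a_j to every edge weight of an undirected complete graph shifts every Hamiltonian cycle's length by the same constant, twice the sum of the a_i, so such terms never affect which closed path is shortest or longest. The graph data are made up for the example.

```python
from itertools import permutations

def cycle_length(weights, order):
    """Total weight of the Hamiltonian cycle visiting vertices in `order`."""
    n = len(order)
    return sum(weights[frozenset((order[k], order[(k + 1) % n]))] for k in range(n))

# Hypothetical 5-vertex undirected complete graph with arbitrary edge weights.
vertices = range(5)
base = {frozenset((i, j)): (i * j + 3) % 7 + 1 for i in vertices for j in vertices if i < j}

# Vertex-based "superfluous" term: add a_i + a_j to the weight of each edge {i, j}.
a = {0: 2, 1: -1, 2: 5, 3: 0, 4: 3}
shifted = {e: w + sum(a[v] for v in e) for e, w in base.items()}

# Every Hamiltonian cycle uses each vertex in exactly two edges, so the shift
# adds the same constant 2 * sum(a) to every closed path; rankings are unchanged.
for order in permutations(range(1, 5)):          # fix vertex 0 to avoid duplicate cycles
    cyc = (0,) + order
    assert cycle_length(shifted, cyc) - cycle_length(base, cyc) == 2 * sum(a.values())
print("Every Hamiltonian cycle length shifts by", 2 * sum(a.values()))
```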
Award ID(s):
1923164
PAR ID:
10530982
Author(s) / Creator(s):
Publisher / Repository:
Springer Nature
Date Published:
Journal Name:
Theory and Decision
ISSN:
0040-5833
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. This paper presents HGEN, which pioneers ensemble learning for heterogeneous graphs. We argue that the heterogeneity in node types, nodal features, and local neighborhood topology poses significant challenges for ensemble learning, particularly in accommodating diverse graph learners. Our HGEN framework ensembles multiple learners through a meta-path and transformation-based optimization pipeline to uplift classification accuracy. Specifically, HGEN uses meta-paths combined with random dropping to create Allele Graph Neural Networks (GNNs), whereby the base graph learners are trained and aligned for later ensembling. To ensure effective ensemble learning, HGEN presents two key components: 1) a residual-attention mechanism to calibrate allele GNNs of different meta-paths, steering node embeddings toward more informative graphs to improve base learner accuracy, and 2) a correlation-regularization term to enlarge the disparity among embedding matrices generated from different meta-paths, thereby enriching base learner diversity. We analyze the convergence of HGEN and show that its regularization magnitude exceeds that of simple voting. Experiments on five heterogeneous networks validate that HGEN consistently outperforms its state-of-the-art competitors by a substantial margin. Code is available at https://github.com/Chrisshen12/HGEN. (A hedged sketch of the correlation-regularization idea appears after this list.)
  2. Problems with majority voting over pairs, as represented by Arrow’s Theorem, and problems of finding the lengths of closed paths, as captured by the Traveling Salesperson Problem (TSP), appear to have nothing in common. In fact, they are connected. As shown, pairwise voting and a version of the TSP share the same domain, where each system can be simplified by restricting it to complementary regions to eliminate extraneous terms. Central to doing so is the Borda Count, whose outcome is shown to most accurately reflect the voter preferences. (A small worked example of pairwise tallies and the Borda Count appears after this list.)
  3. Without imposing restrictions on a weighted graph’s arc lengths, symmetry structures cannot be expected. But they exist. To find them, the graphs are decomposed into a component that dictates all closed path properties (e.g., shortest and longest paths) and a superfluous component that can be removed. The simpler remaining graph exposes inherent symmetry structures that form the basis for all closed path properties. For certain asymmetric problems, the symmetry is that of three-cycles; for the general undirected setting it is a type of four-cycle; for general directed problems with asymmetric costs, it is a product of three- and four-cycles. Everything extends immediately to incomplete graphs.
  4.
    Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs’ training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical results for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice. (A toy sketch of a linearized layer with a skip connection appears after this list.)
  5. ter Beek, Maurice; Koutny, Maciej; Rozenberg, Grzegorz (Ed.)
    For a family of sets, elements that belong to the same sets within the family are considered companions. The global dynamics of a reactions system (as introduced by Ehrenfeucht and Rozenberg) can be represented by a directed graph, called a transition graph, which is uniquely determined by a one-out subgraph, called the 0-context graph. We consider the companion classes of the outsets of a transition graph and introduce a directed multigraph, called an essential motion, whose vertices are such companion classes. We show that all one-out graphs obtained from an essential motion represent 0-context graphs of reactions systems with isomorphic transition graphs. All such 0-context graphs are obtained from one another by swapping the outgoing edges of companion vertices.
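Regarding item 1 above: the abstract gives no formulas, so the following Python sketch is only a guess at the shape of a correlation-regularization term that enlarges the disparity among embedding matrices from different meta-paths. The function name, the cosine-style correlation, and the toy data are assumptions for illustration, not HGEN's actual loss.

```python
import numpy as np

def diversity_penalty(embeddings):
    """Hypothetical correlation-based regularizer: sum of squared cosine
    similarities between column-centered embedding matrices from different
    meta-paths. Driving it down enlarges the disparity among the matrices."""
    penalty = 0.0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            A = embeddings[i] - embeddings[i].mean(axis=0)
            B = embeddings[j] - embeddings[j].mean(axis=0)
            num = np.sum(A * B)
            den = np.linalg.norm(A) * np.linalg.norm(B) + 1e-12
            penalty += (num / den) ** 2
    return penalty

# Toy check: identical embeddings are maximally correlated; independent ones are not.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
print(diversity_penalty([X, X]))                           # close to 1.0
print(diversity_penalty([X, rng.normal(size=(100, 16))]))  # much smaller
```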
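Regarding item 2 above: as a self-contained illustration of the voting objects mentioned there (standard textbook definitions, not this paper's constructions), the Python sketch below tallies pairwise majority votes and the Borda Count for a small hypothetical profile. This particular profile produces a pairwise cycle (A beats B, B beats C, C beats A), the kind of behavior behind Arrow's Theorem, while the Borda scores remain well defined.

```python
from itertools import combinations

# Hypothetical profile: each tuple is one voter's ranking, best candidate first.
profile = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "A", "B")] * 2
candidates = sorted({c for ranking in profile for c in ranking})

# Pairwise majority tallies: how many voters rank x above y.
pairwise = {(x, y): sum(r.index(x) < r.index(y) for r in profile)
            for x, y in combinations(candidates, 2)}

# Borda Count: a candidate earns one point per opponent ranked below it on a ballot.
borda = {c: sum(len(r) - 1 - r.index(c) for r in profile) for c in candidates}

for (x, y), votes_xy in pairwise.items():
    print(f"{x} vs {y}: {votes_xy}-{len(profile) - votes_xy}")
print("Borda scores:", borda)
```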
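Regarding item 4 above: purely to illustrate the terminology of linearized GNN layers and skip connections (not the authors' models or proofs), here is a small Python sketch. The symmetric normalization, layer widths, and toy graph are assumptions made for the example.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def linear_gnn(A_hat, X, weights, skip=False):
    """Linearized GNN: each layer computes H <- A_hat @ H @ W (no nonlinearity).
    With skip=True, the layer output also keeps the previous representation."""
    H = X
    for W in weights:
        H_new = A_hat @ H @ W
        H = H_new + H if skip else H_new   # skip connection reuses H directly
    return H

# Toy 4-node path graph with random features and two layers of equal width.
rng = np.random.default_rng(1)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = rng.normal(size=(4, 3))
weights = [rng.normal(size=(3, 3)) * 0.1 for _ in range(2)]
A_hat = normalize_adj(A)
print("plain:", linear_gnn(A_hat, X, weights)[0])
print("skip :", linear_gnn(A_hat, X, weights, skip=True)[0])
```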