Abstract: Most studied social interactions arise from dyadic relations. An exception is Heider balance theory, which postulates the existence of triad dynamics that have, however, been elusive to observe. Here, we discover a sufficient condition for the observability of Heider dynamics: assigning edge signs according to multiple opinions of connected agents. Using longitudinal records of mutual contacts and opinions among university students, we create a coevolving network on which we introduce models of student interactions. These models account for multiple topics of individual student opinions, the influence of such opinions on dyadic relations, and the influence of triadic relations on opinions. We show that the triadic influence is empirically measurable for static and dynamic observables when edge signs are defined by multidimensional differences between opinions on all topics. Yet, when these signs are defined by the difference between opinions on each topic separately, the influence of triadic interactions is indistinguishable from noise.
Identifying Susceptible Agents in Time Varying Opinion Dynamics through Compressive Measurements
We provide a compressive-measurement-based method to detect susceptible agents who may receive misinformation through contact with 'stubborn agents', whose goal is to influence the opinions of agents in the network. We consider a DeGroot-type opinion dynamics model in which regular agents revise their opinions by linearly combining their neighbors' opinions, while stubborn agents influence others but never change their own opinions. Our proposed method hinges on estimating the temporal difference vector of network-wide opinions, computed at the time instances when the stubborn agents interact. We show that this temporal difference vector has approximately the same support as the locations of the susceptible agents. Moreover, both the interaction instances and the temporal difference vector can be estimated from a small number of aggregated opinions. The performance of our method is studied both analytically and empirically. We show that the detection error decreases when the social network is better connected, or when the stubborn agents are 'less talkative'.
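The support claim above can be illustrated with a minimal sketch of DeGroot dynamics on a toy network. Everything here is an assumption for illustration: the ring topology, the set `susceptible`, and the single stubborn agent with fixed opinion are hypothetical, not the paper's experimental setup. Starting from neutral opinions, the first temporal difference is nonzero exactly at the agents who place weight on the stubborn agent:

```python
import numpy as np

n = 8                     # regular agents on a ring (toy assumption)
susceptible = {0, 4}      # regular agents directly connected to the stubborn agent
stubborn_opinion = 1.0    # the stubborn agent never revises this value

# Unnormalized weights among regular agents: self-loop plus two ring neighbors.
A = np.zeros((n, n))
for i in range(n):
    for j in (i, (i - 1) % n, (i + 1) % n):
        A[i, j] = 1.0

# Susceptible agents additionally place unit weight on the stubborn agent.
b = np.array([1.0 if i in susceptible else 0.0 for i in range(n)])

deg = A.sum(axis=1) + b
W = A / deg[:, None]      # weights on regular neighbors
b = b / deg               # residual weight on the stubborn agent

x = np.zeros(n)           # regular agents start at a neutral opinion
diffs = []
for _ in range(50):
    x_new = W @ x + b * stubborn_opinion
    diffs.append(x_new - x)
    x = x_new

# Support of the first temporal difference = susceptible agents {0, 4}.
print(np.nonzero(diffs[0])[0])
```

In this toy run the temporal difference at the first interaction is supported exactly on the susceptible agents; the paper's contribution is recovering such difference vectors (and the interaction instances) from only a few aggregated opinions, which this sketch does not attempt.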
- Award ID(s): 1714672
- PAR ID: 10064338
- Date Published:
- Journal Name: ICASSP 2018
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Altafini, Claudio; Como, Giacomo; Hendrickx, Julien M.; Olshevsky, Alexander; Tahbaz-Salehi, Alireza (Ed.) We study a social learning model in which agents iteratively update their beliefs about the true state of the world using private signals and the beliefs of other agents in a non-Bayesian manner. Some agents are stubborn, meaning they attempt to convince others of an erroneous true state (modeling fake news). We show that while agents learn the true state on short timescales, they "forget" it and believe the erroneous state to be true on longer timescales. Using these results, we devise strategies for seeding stubborn agents so as to disrupt learning, which outperform intuitive heuristics and yield novel insights into vulnerabilities in social learning.
-
Consider public health officials aiming to spread awareness about a new vaccine in a community interconnected by a social network. How can they distribute information with minimal resources, so as to avoid polarization and ensure community-wide convergence of opinion? To tackle such challenges, we initiate the study of the sample complexity of opinion formation in networks. Our framework is built on the well-studied opinion formation game, where we regard each agent's opinion as a data-derived model, unlike previous works that treat opinions as data-independent scalars. The opinion model of every agent is initially learned from its local samples and evolves game-theoretically as all agents communicate with neighbors and revise their models towards an equilibrium. Our focus is on the sample complexity needed to ensure that the opinions converge to an equilibrium in which every agent's final model has low generalization error. Our paper has two main technical results. First, we present a novel polynomial-time optimization framework to quantify the total sample complexity for arbitrary networks when the underlying learning problem is (generalized) linear regression. Second, we leverage this optimization to study the network gain, which measures the improvement in sample complexity when learning over a network compared to learning in isolation. Towards this end, we derive network gain bounds for various network classes, including cliques, star graphs, and random regular graphs. Additionally, our framework provides a method to study sample distribution within the network, suggesting that it suffices to allocate samples in inverse proportion to agent degree. Empirical results on both synthetic and real-world networks strongly support our theoretical findings.
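The equilibrium notion referenced above can be sketched for the standard quadratic opinion formation game, in which each agent minimizes the squared distance to its internal opinion plus weighted squared distances to its neighbors' opinions. This is a minimal illustration, not the paper's model (which replaces scalar opinions with learned models); the star-graph weights and internal opinions below are assumed for the example. Best-response dynamics converge to the closed-form Nash equilibrium z* = (I + L)^{-1} s, where L is the graph Laplacian:

```python
import numpy as np

# Toy star graph: center 0 connected to leaves 1..4, unit edge weights (assumption).
n = 5
W = np.zeros((n, n))
for leaf in range(1, n):
    W[0, leaf] = W[leaf, 0] = 1.0

s = np.array([0.0, 1.0, 1.0, 1.0, 1.0])  # internal (data-derived) opinions

# Best-response dynamics: z_i <- (s_i + sum_j w_ij z_j) / (1 + sum_j w_ij)
z = s.copy()
for _ in range(200):
    z = (s + W @ z) / (1.0 + W.sum(axis=1))

# Closed-form Nash equilibrium of the quadratic game: z* = (I + L)^{-1} s
L = np.diag(W.sum(axis=1)) - W
z_star = np.linalg.solve(np.eye(n) + L, s)
print(np.allclose(z, z_star))
```

The iteration is a contraction (each row of the linear part sums to deg/(1+deg) < 1), so repeated best responses converge to the unique equilibrium; here the center settles at 2/3 and the leaves at 5/6, between their internal opinions and their neighbors'.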
-
Matni, N.; Morari, M.; Pappas, G. J. (Ed.) We address the problem of learning the legitimacy of other agents in a multiagent network when an unknown subset consists of malicious actors. We specifically derive results for the case of directed graphs where stochastic side information, or observations of trust, is available. We refer to this as "learning trust", since agents must identify which neighbors in the network are reliable, and we derive a learning protocol to achieve this. We also provide analytical results showing that under this protocol (i) agents can learn the legitimacy of all other agents almost surely, and (ii) the opinions of the agents converge in mean to the true legitimacy of all other agents in the network. Lastly, we provide numerical studies showing that our convergence results hold for various network topologies and variations in the number of malicious agents.
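The almost-sure learning claim rests on aggregating repeated stochastic trust observations. The sketch below is only an illustration of that statistical idea, not the paper's protocol: the noise model, threshold, and the identity of the malicious agent are all assumptions. By the strong law of large numbers, the running mean of noisy trust observations converges to each neighbor's true legitimacy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one agent observes noisy trust values for 4 neighbors.
legit = np.array([1.0, 1.0, 0.0, 1.0])   # 1 = legitimate, 0 = malicious (agent 2)
T = 5000                                  # number of trust observations

# Stochastic trust observations: true legitimacy plus zero-mean noise, so the
# running mean is a consistent estimator of legitimacy.
obs = legit + rng.normal(0.0, 0.5, size=(T, 4))
estimate = obs.mean(axis=0)

# Threshold the aggregated estimate to classify neighbors.
classified = (estimate > 0.5).astype(float)
print(classified)
```

With enough observations the classification recovers `legit` exactly; the paper's harder setting additionally propagates these local trust estimates over a directed network so that agents learn the legitimacy of non-neighbors as well.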