In many real-world applications, such as social network analysis and online advertising/marketing, one of the most important and popular problems is influence maximization (IM), which finds a set of k seed users that maximizes the expected number of influenced nodes. In practice, however, maximizing the number of influenced nodes alone may be far from satisfactory for applications such as opinion promotion and collective buying. In this paper, we explore the importance of stability and triangles in social networks, formulate a novel problem in the influence spread setting named triangular stability maximization over social networks, and generalize it to a general triangle influence maximization problem, which is proved to be NP-hard. We develop an efficient reverse influence sampling (RIS) based framework for triangle IM with theoretical guarantees. To enable unbiased estimators, the framework requires probabilistic sampling of triangles, that is, sampling triangles according to their probabilities. We propose an edge-based triple sampling approach that is exactly equivalent to probabilistic sampling and avoids costly triangle enumeration and materialization. We also design several pruning and reduction techniques, as well as a cost-model-guided heuristic algorithm. Extensive experiments and a case study over real-world graphs confirm the effectiveness of our proposed algorithms and the superiority of triangular stability maximization and triangle influence maximization.
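The RIS machinery referenced above can be illustrated with a minimal sketch of ordinary influence maximization under the independent cascade model: sample random reverse-reachable (RR) sets, then greedily pick the k nodes that cover the most of them. The edge-list graph format, function names, and the uniform edge probability p below are illustrative assumptions; the paper's triangle-aware estimator and edge-based triple sampling are not reproduced here.

```python
# Sketch of reverse influence sampling (RIS) for standard IM; all names are illustrative.
import random
from collections import defaultdict

def random_rr_set(nodes, in_edges, p):
    """Reverse-reachable set from a uniformly random root; each incoming
    edge is traversed independently with probability p."""
    root = random.choice(nodes)
    rr, frontier = {root}, [root]
    while frontier:
        v = frontier.pop()
        for u in in_edges[v]:
            if u not in rr and random.random() < p:
                rr.add(u)
                frontier.append(u)
    return rr

def ris_greedy(nodes, edges, k, p=0.1, num_samples=10000):
    """Greedy max-coverage over sampled RR sets approximates the optimal seed set."""
    in_edges = defaultdict(list)
    for u, v in edges:                      # directed edge u -> v
        in_edges[v].append(u)
    rr_sets = [random_rr_set(nodes, in_edges, p) for _ in range(num_samples)]
    seeds, covered = [], set()
    for _ in range(k):
        counts = defaultdict(int)
        for i, rr in enumerate(rr_sets):
            if i in covered:
                continue
            for u in rr:
                counts[u] += 1
        if not counts:                      # every RR set is already covered
            break
        best = max(counts, key=counts.get)
        seeds.append(best)
        covered.update(i for i, rr in enumerate(rr_sets) if best in rr)
    return seeds
```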
This content will become publicly available on June 28, 2026

Fair Online Influence Maximization
Fair influence maximization in networks has been actively studied to ensure equity in fields such as viral marketing and public health. Existing studies often assume an offline setting, meaning that the learner identifies a set of seed nodes with known per-edge activation probabilities. In this paper, we study the problem of fair online influence maximization, i.e., without knowing the ground-truth activation probabilities. The learner aims to propagate the information maximally among demographic groups while interactively selecting seed nodes and observing activation feedback on the fly. We propose the Fair Online Influence Maximization (FOIM) framework, which can solve the online influence maximization problem under a wide range of fairness notions. Given a fairness notion, FOIM solves the problem with a combinatorial multi-armed bandit algorithm for balancing exploration and exploitation and an offline fair influence maximization oracle for seed node selection. FOIM enjoys sublinear regret when the fairness notion satisfies two mild conditions, i.e., monotonicity and bounded smoothness. Our analyses show that common fairness notions, including maximin fairness, diversity fairness, and the welfare function, all satisfy these conditions, and we prove the corresponding regret upper bounds under these notions. Extensive empirical evaluations on three real-world networks demonstrate the efficacy of our proposed framework.
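To make the FOIM structure concrete, here is a minimal sketch of the kind of combinatorial multi-armed bandit loop the abstract describes: maintain UCB estimates of the unknown edge activation probabilities, hand the optimistic estimates to an offline (fair) seed-selection oracle each round, and update the estimates from edge-level activation feedback. The oracle, the feedback format, and all names below are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of a CUCB-style online influence maximization loop; names are illustrative.
import math
from collections import defaultdict

def online_im(edges, k, offline_oracle, simulate_round, horizon):
    """Optimistic edge estimates feed an offline seed-selection oracle each round."""
    trials = defaultdict(int)       # how often each edge's activation was observed
    successes = defaultdict(int)    # how often it actually activated
    for t in range(1, horizon + 1):
        ucb = {}
        for e in edges:
            if trials[e] == 0:
                ucb[e] = 1.0        # optimistic initialization forces exploration
            else:
                mean = successes[e] / trials[e]
                bonus = math.sqrt(1.5 * math.log(t) / trials[e])
                ucb[e] = min(1.0, mean + bonus)
        seeds = offline_oracle(ucb, k)          # e.g. a fair IM approximation oracle
        feedback = simulate_round(seeds)        # {(u, v): activated?} for observed edges
        for e, activated in feedback.items():
            trials[e] += 1
            successes[e] += int(activated)
    return trials, successes
```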
- Award ID(s): 2403401
- PAR ID: 10620943
- Publisher / Repository: OpenReview.net
- Date Published:
- Journal Name: Transactions on Machine Learning Research
- ISSN: 2835-8856
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Online matching markets (OMMs) are commonly used today to pair agents from two parties (whom we call offline and online agents) for mutual benefit. However, studies have shown that the algorithms making decisions in these OMMs often leave disparities in matching rates, especially for offline agents. In this article, we propose online matching algorithms that optimize for either individual or group-level fairness among offline agents in OMMs. We present two linear-programming (LP) based sampling algorithms, which achieve competitive ratios of at least 0.725 for individual fairness maximization and 0.719 for group fairness maximization. We derive further bounds based on fairness parameters, demonstrating conditions under which the competitive ratio can increase to 100%. Two key ideas help us break the 1 - 1/e ≈ 63.2% barrier on the competitive ratio in online matching. One is boosting, which adaptively re-distributes all sampling probabilities among only the available neighbors of every arriving online agent (a minimal sketch of this step appears after this list). The other is attenuation, which aims to balance the matching probabilities among offline agents that receive different mass from the benchmark LP. Extensive numerical experiments show that our boosted sampling algorithms are not only conceptually easy to implement but also highly effective in practical instances of OMMs where fairness is a concern.
- We consider the problem of Influence Maximization (IM), the task of selecting k seed nodes in a social network such that the expected number of influenced nodes is maximized. We propose a community-aware divide-and-conquer framework that involves (i) learning the inherent community structure of the social network, (ii) generating candidate solutions by solving the influence maximization problem within each community, and (iii) selecting the final set of seed nodes using a novel progressive budgeting scheme (a sketch of this pipeline appears after this list). Our experiments on real-world social networks show that the proposed framework outperforms the standard methods in terms of run-time and the heuristic methods in terms of influence. We also study the effect of the community structure on the performance of the proposed framework. Our experiments show that community structures with higher modularity lead the proposed framework to perform better in terms of both run-time and influence.
- Restless multi-armed bandits (RMAB) have been widely used to model sequential decision-making problems with constraints. The decision maker (DM) aims to maximize the expected total reward over an infinite horizon under an "instantaneous activation constraint" that at most B arms can be activated at any decision epoch, where the state of each arm evolves stochastically according to a Markov decision process (MDP). However, this basic model fails to provide any fairness guarantee among arms. In this paper, we introduce RMAB-F, a new RMAB model with "long-term fairness constraints", where the objective is to maximize the long-term reward while satisfying a minimum long-term activation fraction for each arm. For the online RMAB-F setting (i.e., the underlying MDPs associated with each arm are unknown to the DM), we develop a novel reinforcement learning (RL) algorithm named Fair-UCRL. We prove that Fair-UCRL ensures probabilistic sublinear bounds on both the reward regret and the fairness violation regret. Compared with off-the-shelf RL methods, Fair-UCRL is much more computationally efficient since it contains a novel exploitation step that leverages a low-complexity index policy for making decisions. Experimental results further demonstrate the effectiveness of Fair-UCRL.
- Banerjee, Arindam; Fukumizu, Kenji (Ed.) Numerous tasks in machine learning and artificial intelligence have been modeled as submodular maximization problems. These problems usually involve sensitive data about individuals, and in addition to maximizing utility, privacy concerns should be considered. In this paper, we study the general framework of non-negative monotone submodular maximization subject to matroid or knapsack constraints in both offline and online settings. For the offline setting, we propose a differentially private $$(1-\frac{\kappa}{e})$$-approximation algorithm, where $$\kappa\in[0,1]$$ is the total curvature of the submodular set function, which improves upon prior works in terms of approximation guarantee and query complexity under the same privacy budget. In the online setting, we propose the first differentially private algorithm, and we specify the conditions under which the regret bound scales as $$O(\sqrt{T})$$, i.e., privacy can be ensured while maintaining the same regret bound as the optimal regret guarantee in the non-private setting.
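As referenced in the online-matching item above, the boosting idea re-normalizes the benchmark-LP mass over only the still-available neighbors of an arriving online agent before sampling a match. The sketch below is a hypothetical illustration of that single step; the LP marginals x and all names are assumptions, and the attenuation step is omitted.

```python
# Sketch of the "boosting" re-normalization step for online matching; names are illustrative.
import random

def boosted_sample(neighbors, x, available):
    """Sample one available offline neighbor with probability proportional to its
    LP marginal x[u], re-normalized over the currently available candidates."""
    candidates = [u for u in neighbors if u in available]
    if not candidates:
        return None                        # leave the arriving agent unmatched
    total = sum(x[u] for u in candidates)
    if total == 0:
        return random.choice(candidates)   # fall back to uniform if no LP mass remains
    r, acc = random.random() * total, 0.0
    for u in candidates:
        acc += x[u]
        if r <= acc:
            return u
    return candidates[-1]                  # guard against floating-point rounding
```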
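The community-aware pipeline in the influence-maximization item above can likewise be sketched at a high level: detect communities, solve IM within each community to produce candidates, and merge the candidates under the global budget. The community detector, per-community IM solver, scoring function, networkx-like graph API, and the simplified merging rule below are all placeholders; the paper's progressive budgeting scheme is not reproduced.

```python
# Sketch of a community-aware divide-and-conquer IM pipeline; all components are placeholders.
def community_aware_im(graph, k, detect_communities, solve_im, score):
    """Per-community candidates merged under a global budget of k seeds."""
    communities = detect_communities(graph)                  # step (i): community detection
    candidates = []
    for comm in communities:                                  # step (ii): solve IM per community
        for node in solve_im(graph.subgraph(comm), k):
            candidates.append((score(graph, node), node))     # proxy for the node's marginal gain
    candidates.sort(key=lambda pair: pair[0], reverse=True)   # step (iii), simplified:
    return [node for _, node in candidates[:k]]               # keep the k highest-scoring candidates
```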