A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning
                        
                    
    
We provide a unifying framework for the design and analysis of multi-calibrated predictors. By placing the multi-calibration problem in the general setting of multi-objective learning, where learning guarantees must hold simultaneously over a set of distributions and loss functions, we exploit connections to game dynamics to achieve state-of-the-art guarantees for a diverse set of multi-calibration learning problems. In addition to shedding light on existing multi-calibration guarantees and greatly simplifying their analysis, our approach also yields improved guarantees, such as error tolerances that scale with the square root of group size, versus the constant tolerances guaranteed by prior works, and an improvement in the complexity of k-class multi-calibration that is exponential in k, versus Gopalan et al. Beyond multi-calibration, we use these game dynamics to address emerging considerations in the study of group fairness and multi-distribution learning.
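To make the game-dynamics viewpoint concrete, the following is a minimal, hypothetical Python sketch of the kind of two-player loop such analyses study: an auditor repeatedly finds the (group, prediction-level) cell with the largest calibration violation, and the learner patches the predictor on that cell. The boolean-mask group encoding, the parameter choices, and the patching rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def multicalibrate(ys, groups, n_levels=10, alpha=0.05, max_rounds=1000):
    """Toy multi-calibration via best-response dynamics (illustrative only).

    ys: outcomes in [0, 1], shape (n,).
    groups: dict mapping group name -> boolean membership mask, shape (n,).
    """
    preds = np.full(len(ys), 0.5)              # start from a constant predictor
    for _ in range(max_rounds):
        worst_gap, worst_cell = 0.0, None
        # Auditor: best-respond by finding the most-violated (group, level) cell.
        for members in groups.values():
            for v in range(n_levels):
                lo, hi = v / n_levels, (v + 1) / n_levels
                upper = preds < hi if v < n_levels - 1 else preds <= hi
                cell = members & (preds >= lo) & upper
                if not cell.any():
                    continue
                gap = ys[cell].mean() - preds[cell].mean()
                if abs(gap) > abs(worst_gap):
                    worst_gap, worst_cell = gap, cell
        if worst_cell is None or abs(worst_gap) <= alpha:
            return preds                        # approximately multi-calibrated
        # Learner: patch predictions on the violated cell toward the truth.
        preds[worst_cell] = np.clip(preds[worst_cell] + worst_gap, 0.0, 1.0)
    return preds
```

In standard analyses of loops like this, each patch provably decreases a squared-error potential, which is what bounds the number of rounds until approximate multi-calibration.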
- Award ID(s): 2145898
- PAR ID: 10577799
- Publisher / Repository: Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We provide a unifying framework for the design and analysis of multi-calibrated and moment-multi-calibrated predictors. Placing the multi-calibration problem in the general setting of multi-objective learning, where learning guarantees must hold simultaneously over a set of distributions and loss functions, we exploit connections to game dynamics to obtain state-of-the-art guarantees for a diverse set of multi-calibration learning problems. In addition to shedding light on existing multi-calibration guarantees and greatly simplifying their analysis, our approach yields a 1/ε² improvement in the number of oracle calls compared to the state-of-the-art algorithm of Jung et al. [19] for learning deterministic moment-calibrated predictors, and an exponential improvement in k compared to the state-of-the-art algorithm of Gopalan et al. [14] for learning a k-class multi-calibrated predictor. Beyond multi-calibration, we use these game dynamics to address existing and emerging considerations in the study of group fairness and multi-distribution learning.
- This paper introduces a distributed adaptive formation control for large-scale multi-agent systems (LS-MAS) that addresses the heavy computational complexity and communication traffic challenges while directly extending conventional distributed control from small scale to large scale. Specifically, a novel hierarchical game-theoretic algorithm is developed to provide a feasible theoretical foundation for solving the LS-MAS distributed optimal formation problem by effectively integrating the mean-field game (MFG), the Stackelberg game, and the cooperative game. In particular, the LS-MAS is divided geographically into multiple groups, each having one group leader and a significant number of followers. A cooperative game among the group leaders then formulates distributed inter-group formation control; an MFG drives the large number of intra-group followers to achieve collective intra-group formation; and a Stackelberg game connects the followers with their corresponding leader within the same group to achieve the overall LS-MAS multi-group formation behavior. Moreover, a hybrid actor-critic reinforcement learning algorithm is constructed to learn the solution of the hierarchical game-based optimal distributed formation control. Finally, to show the effectiveness of the presented schemes, numerical simulations and a Lyapunov analysis are performed. (A toy sketch of this hierarchy appears after this list.)
- To overcome the sim-to-real gap in reinforcement learning (RL), learned policies must maintain robustness against environmental uncertainties. While robust RL has been widely studied in single-agent regimes, the problem remains understudied in multi-agent environments, despite the fact that the problems posed by environmental uncertainties are often exacerbated by strategic interactions. This work focuses on learning in distributionally robust Markov games (RMGs), a robust variant of standard Markov games, wherein each agent aims to learn a policy that maximizes its own worst-case performance when the deployed environment deviates within its own prescribed uncertainty set. This results in a set of robust equilibrium strategies for all agents that align with classic notions of game-theoretic equilibria. Assuming a non-adaptive sampling mechanism from a generative model, we propose a sample-efficient model-based algorithm (DR-NVI) with finite-sample complexity guarantees for learning robust variants of various notions of game-theoretic equilibria. We also establish an information-theoretic lower bound for solving RMGs, which confirms the near-optimal sample complexity of DR-NVI with respect to problem-dependent factors such as the size of the state space, the target accuracy, and the horizon length. (A toy robust Bellman update appears after this list.)
- We consider a variation on the classical finance problem of optimal portfolio design. In our setting, a large population of consumers is drawn from some distribution over risk tolerances, and each consumer must be assigned to a portfolio of lower risk than her tolerance. The consumers may also belong to underlying groups (for instance, of demographic properties or wealth), and the goal is to design a small number of portfolios that are fair across groups in a particular and natural technical sense. Our main results are algorithms for optimal and near-optimal portfolio design for both social welfare and fairness objectives, both with and without assumptions on the underlying group structure. We describe an efficient algorithm based on an internal two-player zero-sum game that learns near-optimal fair portfolios ex ante, and show experimentally that it can be used to obtain a small set of fair portfolios ex post as well. For the special but natural case in which group structure coincides with risk tolerances (which models the reality that wealthy consumers generally tolerate greater risk), we give an efficient and optimal fair algorithm. We also provide generalization guarantees for the underlying risk distribution that have no dependence on the number of portfolios, and illustrate the theory with simulation results. (A minimal sketch of such a zero-sum dynamic appears after this list.)
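As referenced in the formation-control abstract above, here is a toy Python sketch of one round of the leader/follower hierarchy it describes: a cooperative step among group leaders, followed by a follower response combining a Stackelberg-style pull toward the group's leader with a mean-field pull toward the group average. All update rules, names, and step sizes are invented for illustration; the actual method solves the games with a hybrid actor-critic algorithm.

```python
import numpy as np

def hierarchy_round(leaders, followers, lr=0.1):
    """One illustrative round of the hierarchical game (toy stand-in).

    leaders: dict group -> leader position, shape (d,).
    followers: dict group -> follower positions, shape (n_g, d).
    """
    # Cooperative game among leaders: each leader steps toward the centroid,
    # a crude proxy for jointly optimizing the inter-group formation.
    centroid = np.mean(list(leaders.values()), axis=0)
    for g in leaders:
        leaders[g] = leaders[g] + lr * (centroid - leaders[g])
    # Followers: Stackelberg-style response to their own group's leader, plus
    # a mean-field pull toward the group's empirical average position.
    for g, flock in followers.items():
        mean_field = flock.mean(axis=0)
        followers[g] = flock + lr * ((leaders[g] - flock) + (mean_field - flock))
    return leaders, followers
```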
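Also as referenced above, an illustrative robust Bellman iteration, specialized to a single agent for brevity. The L1-ball uncertainty set and the crude one-step inner minimization are assumptions chosen for illustration; this is not the DR-NVI algorithm.

```python
import numpy as np

def robust_value_iteration(P, R, sigma=0.1, gamma=0.9, iters=200):
    """Toy distributionally robust value iteration (single-agent sketch).

    P: nominal transition probabilities, shape (S, A, S).
    R: rewards, shape (S, A).
    sigma: radius of the L1 uncertainty ball around each nominal row.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                # Adversary: crudely approximate the worst case in the L1 ball
                # by moving sigma/2 mass from the best next state to the worst.
                p = P[s, a].copy()
                worst, best = int(np.argmin(V)), int(np.argmax(V))
                shift = min(sigma / 2, p[best])
                p[best] -= shift
                p[worst] += shift
                Q[s, a] = R[s, a] + gamma * p @ V
        V = Q.max(axis=1)
    return V
```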
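Finally, a minimal sketch of the internal zero-sum dynamic the portfolio abstract alludes to: an auditor runs multiplicative weights over groups while the designer best-responds with the portfolio minimizing the auditor-weighted loss. The precomputed loss matrix and every name here are hypothetical, not the paper's implementation.

```python
import numpy as np

def fair_portfolio_mwu(loss, T=500, eta=0.1):
    """Toy auditor-vs-designer zero-sum dynamic via multiplicative weights.

    loss: shape (n_groups, n_portfolios); loss[g, j] is the loss that
    choosing portfolio j inflicts on group g (assumed precomputed).
    Returns the designer's averaged play, an approximate minimax mix.
    """
    n_groups, n_portfolios = loss.shape
    w = np.ones(n_groups)                  # auditor's weights over groups
    avg_play = np.zeros(n_portfolios)
    for _ in range(T):
        q = w / w.sum()
        j = int(np.argmin(q @ loss))       # designer: best response
        avg_play[j] += 1.0 / T
        w *= np.exp(eta * loss[:, j])      # auditor: upweight hurt groups
    return avg_play
```

By standard no-regret arguments, the averaged play of the best-responding player in such dynamics converges to an approximate minimax-optimal mix as T grows.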