- PAR ID: 10544905
- Publisher / Repository: 2024 International Conference on Machine Learning (ICML)
- Sponsoring Org: National Science Foundation
More Like this
-
We design online algorithms for the fair allocation of public goods to a set of N agents over a sequence of T rounds and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, and every agent reveals their value for it upon arrival. The algorithm must irrevocably decide the investment in this good without exceeding a total budget of B across all rounds. The algorithm can utilize (potentially noisy) predictions of each agent's total value for all remaining goods. The algorithm's performance is measured using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. We show that no algorithm can achieve better than Θ(T/B) proportional fairness without predictions. With reasonably accurate predictions, the situation improves significantly, and Θ(log(T/B)) proportional fairness is achieved. We also extend our results to a general setting wherein a batch of L public goods arrives in each round, and O(log(min(N, L) · T/B)) proportional fairness is achieved. Our exact bounds are parameterized as a function of the prediction error, with performance degrading gracefully as errors increase.
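For reference, a common formalization of the proportional fairness objective in this line of work (a sketch; the paper's exact definition may differ in its details, and u_i below denotes agent i's utility under an allocation) is:

```latex
% An allocation x is \alpha-proportionally fair if no feasible alternative
% allocation x' offers the agents a large average relative utility gain:
\[
  \frac{1}{N} \sum_{i=1}^{N} \frac{u_i(x')}{u_i(x)} \;\le\; \alpha
  \quad \text{for every feasible allocation } x'.
\]
```

Read this way, the bounds above say that α = Θ(T/B) is unavoidable without predictions, while reasonably accurate predictions bring α down to Θ(log(T/B)).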
-
Applying machine learning to clinical outcome prediction is challenging due to imbalanced datasets and sensitive tasks that contain rare yet critical outcomes, and where equitable treatment across diverse patient groups is essential. Despite attempts, biases in predictions persist, driven by disparities in representation and exacerbated by the scarcity of positive labels, perpetuating health inequities. This paper introduces a synthetic data generation approach leveraging large language models to address these issues. The proposed approach enhances algorithmic performance and reduces bias by creating realistic, anonymous synthetic patient data that improves representation and augments dataset patterns while preserving privacy. Through experiments on multiple datasets, we demonstrate that the approach boosts mortality prediction performance across diverse subgroups, achieving up to a 21% improvement in F1 score without requiring additional data or altering downstream training pipelines. Furthermore, it consistently reduces subgroup performance gaps, as shown by universal improvements in performance and fairness metrics across four experimental setups.
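As a rough sketch of the pipeline this abstract describes (the method's name is missing from the source text; generate_synthetic_patients below is a hypothetical stand-in for its LLM-based generator, not the paper's actual API):

```python
import numpy as np
from sklearn.metrics import f1_score

def augment_with_synthetic(X, y, groups, generate_synthetic_patients):
    """Top up under-represented positive labels in each subgroup with
    synthetic rows, leaving the downstream training pipeline unchanged.

    generate_synthetic_patients(group, n) is a hypothetical LLM-backed
    generator returning (features, labels) for n synthetic patients.
    """
    X_parts, y_parts = [X], [y]
    target = max(np.sum((groups == g) & (y == 1)) for g in np.unique(groups))
    for g in np.unique(groups):
        n_pos = np.sum((groups == g) & (y == 1))
        if n_pos < target:  # subgroup lacks positive examples
            X_syn, y_syn = generate_synthetic_patients(g, target - n_pos)
            X_parts.append(X_syn)
            y_parts.append(y_syn)
    return np.vstack(X_parts), np.concatenate(y_parts)

def subgroup_f1(model, X, y, groups):
    """Per-subgroup F1, used to check whether performance gaps shrink."""
    return {g: f1_score(y[groups == g], model.predict(X[groups == g]))
            for g in np.unique(groups)}
```

Any classifier can then be trained on the augmented data as usual, which matches the abstract's claim that no downstream changes are required.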
-
With more recognition being given to the diverse and changing demographics in education, there is a need to understand how well computer science education is meeting the needs of all learners as it expands into K-12 schools. The CAPE framework is a newer model for assessing the equitable delivery of computer science education; it can be used to understand a school's capacity to offer equitable computer science (CS) education, equitable student access to CS education, equitable student participation in CS, and equitable experiences of students taking CS. Since the CAPE framework is a new way to research CS education through an equity lens, there are few, if any, frameworks that can be leveraged to explore research questions in a complex, multi-school intervention. To address this gap, we used a design-based research approach to create and determine the feasibility of a new model, the Theory of Impacts informed by the CAPE framework (the ToI-CAPE model), for evaluating a multi-school intervention. In this article, we provide a detailed explanation of how we created and used the ToI-CAPE model for a specific intervention, examine the feasibility of using it across experience-based factors, and describe how to use the model in other research and evaluation projects. Overall, the ToI-CAPE model can shed light on the critical subcomponents and agents at work in an intervention and the actions necessary across these components and agents to support intended outcomes.
-
Agmon, N.; An, B.; Ricci, A.; Yeoh, W. (Eds.) In multiagent systems that require coordination, agents must learn diverse policies that enable them to achieve their individual and team objectives. Multiagent Quality-Diversity methods partially address this problem by filtering the joint space of policies to smaller sub-spaces that make the diversification of agent policies tractable. However, in teams of asymmetric agents (agents with different objectives and capabilities), the search for diversity is primarily driven by the need to find policies that allow agents to assume the complementary roles required to work together in teams. This work introduces the Asymmetric Island Model (AIM), a multiagent framework that enables populations of asymmetric agents to learn diverse complementary policies that foster teamwork via dynamic population size allocation on a wide variety of team tasks. The key insight of AIM is that the competitive pressure arising from the distribution of policies across different team-wide tasks drives agents to explore regions of the policy space that yield specializations which generalize across tasks. Simulation results on multiple variations of a remote habitat problem highlight the strength of AIM in discovering robust synergies that allow agents to operate near-optimally in response to changing team composition and the policies of other agents.
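AIM's central mechanism, reallocating a fixed population budget across islands (one per team task) under competitive pressure, might look roughly like the sketch below; the specific allocation rule here (a softmax over recent per-island fitness improvement) is an illustrative assumption rather than the paper's exact scheme:

```python
import numpy as np

def reallocate_population(improvements, total_budget, temperature=1.0, floor=2):
    """Redistribute a fixed population budget across islands.

    improvements maps island id -> recent best-fitness improvement on its
    task; islands improving fastest receive more individuals, while every
    island keeps at least `floor` individuals so no task is abandoned.
    """
    ids = list(improvements)
    gains = np.array([improvements[i] for i in ids], dtype=float)
    weights = np.exp(gains / temperature)  # softmax: pressure -> allocation
    weights /= weights.sum()
    sizes = np.maximum(floor, np.round(weights * total_budget).astype(int))
    while sizes.sum() > total_budget:      # trim rounding/floor overshoot
        sizes[np.argmax(sizes)] -= 1
    return dict(zip(ids, sizes))

# Example: three asymmetric-agent islands with different recent progress.
print(reallocate_population({"scout": 0.8, "carrier": 0.1, "relay": 0.4}, 30))
```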