Much of the work in the field of group fairness
addresses disparities between predefined groups
based on protected features such as gender, age,
and race, which must be available at training time, and often also at test time. These approaches are
static and retrospective, since algorithms designed
to protect groups identified a priori cannot anticipate and protect the needs of different at-risk
groups in the future. In this work we analyze
the space of solutions for worst-case fairness beyond demographics, and propose Blind Pareto
Fairness (BPF), a method that leverages no-regret
dynamics to recover a fair minimax classifier that
reduces the worst-case risk of any potential subgroup of sufficient size while guaranteeing that the remaining population receives the best possible level of service. BPF addresses fairness beyond demographics: it does not rely on predefined notions of at-risk groups at either train or test time. Our experimental results show that the proposed framework improves worst-case risk on multiple standard datasets compared to competing methods, while simultaneously providing better levels of service for the remaining population.
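
To make the minimax formulation concrete, the sketch below illustrates one way a no-regret scheme of this flavor can be set up: an adversary maintains a distribution over training samples via multiplicative weights, restricted so that the implied subgroup covers at least a rho fraction of the data, while the learner takes gradient steps on the reweighted loss. This is a minimal illustrative sketch, not the authors' implementation of BPF; the names and hyperparameters (rho, eta_adv, n_rounds, the approximate capped-simplex projection) are assumptions made for the example.

```python
# Illustrative sketch of subgroup-agnostic minimax training via no-regret
# (multiplicative-weights) dynamics. Not the BPF reference implementation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_loss(theta, X, y):
    """Logistic loss of each training sample under parameters theta."""
    p = sigmoid(X @ theta)
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def cap_and_normalize(w, cap):
    """Approximate projection onto {w : sum(w) = 1, 0 <= w_i <= cap}."""
    w = np.minimum(w, cap)
    for _ in range(50):                      # redistribute clipped mass
        excess = 1.0 - w.sum()
        if abs(excess) < 1e-10:
            break
        free = w < cap
        if not free.any():
            break
        w[free] += excess / free.sum()
        w = np.clip(w, 0.0, cap)
    return w / w.sum()

def blind_minimax_fit(X, y, rho=0.2, n_rounds=500, eta_model=0.5, eta_adv=0.05):
    """Adversary reweights samples toward high loss; learner descends the
    reweighted loss. The cap 1/(rho*n) restricts the adversary to
    distributions whose implied subgroup has relative size at least rho."""
    n, d = X.shape
    theta = np.zeros(d)
    w = np.full(n, 1.0 / n)                  # adversary's distribution over samples
    cap = 1.0 / (rho * n)
    for _ in range(n_rounds):
        losses = per_sample_loss(theta, X, y)
        # Adversary: multiplicative-weights (no-regret) step on per-sample losses.
        w = cap_and_normalize(w * np.exp(eta_adv * losses) / np.sum(w * np.exp(eta_adv * losses)), cap)
        # Learner: gradient step on the adversarially reweighted logistic loss.
        grad = X.T @ (w * (sigmoid(X @ theta) - y))
        theta -= eta_model * grad
    return theta, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(float)
    theta, w = blind_minimax_fit(X, y)
    top = np.argsort(-per_sample_loss(theta, X, y))[:80]
    print("adversary mass on the 20% highest-loss samples:", w[top].sum().round(3))
```

In this toy setup the weight cap plays the role of the minimum-subgroup-size constraint: no single sample can absorb more than 1/(rho*n) of the adversary's mass, so the worst case is always taken over a subpopulation of at least a rho fraction of the data rather than over a handful of outliers.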