Artificial intelligence nowadays plays an increasingly
prominent role in our lives, as decisions that were once
made by humans are now delegated to automated systems. A
machine learning algorithm trained on biased data, however,
tends to make unfair predictions. Developing classification
algorithms that are fair with respect to protected attributes of the
data thus becomes an important problem. Motivated by concerns
surrounding the fairness effects of sharing and few-shot machine
learning tools, such as the Model Agnostic Meta-Learning [1]
framework, we propose a novel fair fast-adapted few-shot meta-learning
approach that efficiently mitigates biases during meta-training
by controlling the decision boundary covariance, i.e., the
covariance between the protected variable and the signed distance
from the feature vectors to the decision boundary. Through
extensive experiments on two real-world image benchmarks over
three state-of-the-art meta-learning algorithms, we empirically
demonstrate that our proposed approach efficiently mitigates
biases on model output and generalizes both accuracy and
fairness to unseen tasks with a limited amount of training
samples.
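The decision boundary covariance above can be sketched as follows. This is an illustrative example, not the paper's implementation: it assumes a linear decision boundary, so the signed distance is proportional to `theta @ x`, and the function name, variable names, and toy data are all hypothetical.

```python
import numpy as np

def boundary_covariance(theta, X, z):
    """Empirical covariance between a protected attribute z and the
    signed distances from feature vectors X to a linear decision
    boundary parameterized by theta (hypothetical helper)."""
    d = X @ theta                       # signed distances for a linear boundary
    return np.mean((z - z.mean()) * d)  # near 0 => decisions uncorrelated with z

# Toy usage with synthetic data (illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
z = rng.integers(0, 2, size=100).astype(float)  # binary protected attribute
theta = np.array([1.0, -0.5, 0.2])
cov = boundary_covariance(theta, X, z)
# A fairness-aware training objective would penalize |cov| above a threshold.
```

A constraint of this form is attractive in few-shot settings because it is differentiable in `theta` and can be added directly to the meta-training loss.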