Evaluating the fairness of fine-tuning strategies in self-supervised learning

10/01/2021
by Jason Ramapuram, et al.

In this work we examine how fine-tuning impacts the fairness of contrastive Self-Supervised Learning (SSL) models. Our findings indicate that Batch Normalization (BN) statistics play a crucial role, and that updating only the BN statistics of a pre-trained SSL backbone improves its downstream fairness (a 36% improvement over supervised learning), while taking 4.4x less time to train and requiring only 0.35% of the parameters to be updated. Compared to supervised learning, we find that updating BN statistics and training the residual skip connections (12.3% of the parameters) performs on par with a fully fine-tuned model, while taking 1.33x less time to train.
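The two partial fine-tuning strategies described above (updating only BN statistics, and BN statistics plus residual skip connections) can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the torchvision ResNet-50 stand-in, the "downsample" name filter used to pick out skip-connection layers, and the helper names are assumptions for the sake of the example.

```python
# Sketch of BN-statistics-only adaptation of a frozen, pre-trained backbone.
# Assumptions: a torchvision ResNet-50 stands in for the SSL encoder, and the
# "downsample" parameter-name filter approximates "residual skip connections".
import torch
import torch.nn as nn
from torchvision import models


def prepare_bn_stats_only(backbone: nn.Module) -> nn.Module:
    """Freeze all parameters; let only BatchNorm running statistics update."""
    for param in backbone.parameters():
        param.requires_grad = False          # no gradient-based updates anywhere
    backbone.eval()                          # keep dropout etc. frozen
    for module in backbone.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.train()                   # re-enable running mean/var updates
    return backbone


def also_train_skip_connections(backbone: nn.Module) -> nn.Module:
    """Additionally unfreeze residual skip-connection (downsample) parameters."""
    for name, param in backbone.named_parameters():
        if "downsample" in name:             # torchvision ResNet naming convention
            param.requires_grad = True
    return backbone


backbone = models.resnet50(weights=None)     # stand-in for a pre-trained SSL encoder
backbone = prepare_bn_stats_only(backbone)
# backbone = also_train_skip_connections(backbone)  # the BN + skip-connection variant

# Forward passes over downstream data now refresh BN statistics only; a separate
# trainable linear head would normally be attached for the downstream task.
with torch.no_grad():
    dummy_batch = torch.randn(8, 3, 224, 224)
    _ = backbone(dummy_batch)
```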
