Evaluating the Fairness of Fine-Tuning Strategies in Self-Supervised Learning
Authors: Jason Ramapuram*, Dan Busbridge*, Russ Webb
*=Equal Contribution
In this work we examine how fine-tuning impacts the fairness of contrastive Self-Supervised Learning (SSL) models. Our findings indicate that Batch Normalization (BN) statistics play a crucial role, and that updating only the BN statistics of a pre-trained SSL backbone improves its downstream fairness (by 36% on the worst subgroup and 25% on the mean subgroup gap). This procedure is competitive with supervised learning, while taking 4.4x less time to train and updating only 0.35% of the parameters. Finally, inspired by recent work in supervised learning, we find that updating BN statistics and training residual skip connections (12.3% of the parameters) achieves parity with a fully fine-tuned model, while taking 1.33x less time to train.
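The core idea of updating only BN statistics can be sketched in pure Python: BN layers maintain running estimates of the mean and variance via an exponential moving average, and these estimates can be refreshed on downstream data without any gradient updates to the network weights. The class name, momentum value, and toy batches below are illustrative assumptions, not from the paper.

```python
# Hedged sketch: adapting only the running statistics a BN layer tracks,
# while all learnable weights stay frozen. Momentum 0.1 mirrors the common
# default; it is an assumption here, not a value stated in the paper.

class BatchNormStats:
    """Tracks BN running mean/variance for a single feature channel."""

    def __init__(self, momentum=0.1):
        self.momentum = momentum
        self.running_mean = 0.0
        self.running_var = 1.0

    def update(self, batch):
        # Exponential-moving-average update used by BN in training mode.
        n = len(batch)
        batch_mean = sum(batch) / n
        batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
        m = self.momentum
        self.running_mean = (1 - m) * self.running_mean + m * batch_mean
        self.running_var = (1 - m) * self.running_var + m * batch_var


bn = BatchNormStats()
for batch in [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]:
    bn.update(batch)  # statistics adapt; no weights are trained
```

In a framework like PyTorch, the analogous procedure is to freeze every parameter (`requires_grad=False`) and run forward passes in training mode, so that BN layers refresh their `running_mean` and `running_var` buffers on the downstream data.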
November 18, 2022 · Research areas: Computer Vision, Methods and Algorithms · Conference: NeurIPS
November 15, 2022 · Research areas: Computer Vision, Fairness · Workshop at NeurIPS