Self-supervision can help learn features that transfer to a downstream task, such as classification! In this example, we used rotation prediction as our pretext task for representation learning. Pretraining our model on rotation prediction before training for classification allowed us to reach 61.7% accuracy using just 0.3% of the labeled data (180 samples)! Training from scratch with the same amount of data yields only 13% accuracy. The motivation for self-supervised learning is the ability to train models to decent accuracy without needing much labeled data!
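To illustrate why the pretext task needs no human annotation, here is a minimal NumPy sketch of how rotation-prediction training pairs can be built: each image is rotated by 0/90/180/270 degrees, and the rotation index itself becomes the label. The function name `make_rotation_batch` is illustrative, not from the original code.

```python
import numpy as np

def make_rotation_batch(images):
    """Build a rotation-prediction pretext batch (illustrative sketch).

    Each input image is rotated by 0, 90, 180, and 270 degrees, and the
    number of quarter-turns (0-3) becomes the self-supervised label --
    the labels come for free, with no human annotation."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):  # k = number of 90-degree quarter-turns
            rotated.append(np.rot90(img, k=k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# Example: 2 tiny fake grayscale "images" of size 4x4
imgs = np.arange(32, dtype=np.float32).reshape(2, 4, 4)
x, y = make_rotation_batch(imgs)
print(x.shape)  # (8, 4, 4) -- 4 rotated copies per image
print(y)        # [0 1 2 3 0 1 2 3]
```

A classifier pretrained to predict `y` from `x` learns features of object shape and orientation, which is what makes them useful for the downstream classification task.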
Don't forget to tag @AmarSaini in your comment.