
This article was written in 2021, when masked language models had been successfully applied in NLP (BERT, and earlier self-supervised word embeddings like word2vec and GloVe). At the time, however, it was unclear how the same technique could be applied to vision tasks: unlike language, which has a limited vocabulary, you can't explicitly assign a probability to every possible image. Since then, researchers have made significant progress with techniques like contrastive learning (SimCLR), self-distillation (BYOL, DINO), and masked image models. "A Cookbook of Self-Supervised Learning" is a good source to learn more about this topic: https://arxiv.org/abs/2304.12210
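To make the contrastive-learning idea concrete, here is a minimal NumPy sketch of the NT-Xent loss that SimCLR trains with: two augmented views of each image are embedded, and each embedding is pulled toward its sibling view and pushed away from everything else in the batch. The function name and shapes are illustrative, not from the paper's code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Row i in z1 and row i in z2 form a positive pair; all other rows in the
    concatenated batch serve as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    # The positive for row i is its other view: i+n for the first half, i-n for the second.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Note that no probability over images is ever needed: the "targets" are just the other rows of the batch, which is what lets this style of self-supervision sidestep the vocabulary problem the comment describes.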


SimCLR and others are older than 2021; BYOL is even mentioned in the blog post. But your link indeed points to a more comprehensive overview.


You are correct that SimCLR and BYOL were released a year earlier. Sorry, I worded it poorly. By "at the time", I meant the period when masked language models had just found success in NLP.



