Finetuning Transformers with JAX + Haiku
Walking through a port of the RoBERTa pre-trained model to JAX + Haiku, then fine-tuning the model to solve a downstream task.
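For a sense of the workflow the post walks through, here is a minimal sketch of the Haiku init/apply pattern it builds on; the tiny classifier head below is a stand-in for the ported RoBERTa encoder, and all shapes and hyperparameters are illustrative assumptions, not the post's actual code.

```python
# Minimal sketch of the Haiku init/apply pattern (a toy head stands in
# for the ported RoBERTa encoder; shapes/hyperparameters are illustrative).
import jax
import jax.numpy as jnp
import haiku as hk

def forward(features):
    # In the real port, features would come from the RoBERTa encoder.
    head = hk.nets.MLP([256, 2])  # hidden layer + 2-class output
    return head(features)

model = hk.transform(forward)      # pure (init, apply) pair
rng = jax.random.PRNGKey(42)
dummy = jnp.ones([8, 768])         # batch of pooled embeddings
params = model.init(rng, dummy)    # initialize parameters

def loss_fn(params, rng, x, labels):
    logits = model.apply(params, rng, x)
    log_probs = jax.nn.log_softmax(logits)
    return -jnp.mean(log_probs[jnp.arange(x.shape[0]), labels])

grads = jax.grad(loss_fn)(params, rng, dummy, jnp.zeros(8, dtype=jnp.int32))
```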
Finetune: Scikit-learn Style Model Finetuning for NLP
Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.
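A hedged usage sketch of the scikit-learn-style fit/predict interface the library advertises; the texts and labels below are made-up placeholder data.

```python
# Hedged sketch of Finetune's scikit-learn-style interface;
# the texts and labels here are made-up placeholder data.
from finetune import Classifier

train_texts = ["great movie", "terrible plot"]
train_labels = ["positive", "negative"]

model = Classifier()                  # wraps a pretrained transformer
model.fit(train_texts, train_labels)  # fine-tune on the downstream task
predictions = model.predict(["what a fantastic film"])
```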
Retrieve: Automate wget-ting pre-trained models
No-frills library to download pre-trained models, cache them, and return the local path.
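The pattern such a library implements is simple enough to sketch with the standard library alone; the function below is illustrative of the download-and-cache idea, not Retrieve's actual API.

```python
# Illustrative download-and-cache pattern (not Retrieve's actual API):
# fetch a file once, store it under a local cache dir, return the path.
import hashlib
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "pretrained-models"

def fetch(url: str) -> Path:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    # Hash the URL so distinct models get distinct cache entries.
    name = hashlib.sha256(url.encode()).hexdigest() + "-" + Path(url).name
    local_path = CACHE_DIR / name
    if not local_path.exists():  # download only on a cache miss
        urllib.request.urlretrieve(url, local_path)
    return local_path
```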
Scaling Clinical Information Extraction with Multitask Learning
BiT: Exploring Large-Scale Pre-training for Computer Vision
We are excited to share the best BiT models pre-trained on public datasets, along with code in TF2, Jax, and PyTorch.
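As a rough illustration of reusing such a checkpoint in TF2, one can wrap a hub module as a frozen feature extractor and add a new task head; MODEL_URL below is a placeholder, not a confirmed BiT handle, and the head size is an assumption.

```python
# Rough TF2 sketch of reusing a pretrained backbone from TF Hub;
# MODEL_URL is a placeholder, not a confirmed BiT handle.
import tensorflow as tf
import tensorflow_hub as hub

MODEL_URL = "https://tfhub.dev/..."  # substitute a real BiT handle

backbone = hub.KerasLayer(MODEL_URL, trainable=False)
model = tf.keras.Sequential([
    backbone,                                         # pretrained features
    tf.keras.layers.Dense(10, activation="softmax"),  # new task head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```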
All The Ways You Can Compress BERT
In this post I’ll list and briefly taxonomize all the papers I’ve seen compressing BERT.
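To make one family from that taxonomy concrete, here is a hedged sketch of post-training dynamic quantization in PyTorch, one of several compression approaches such surveys cover; the generic transformer below is a stand-in for a pretrained BERT model.

```python
# Hedged sketch of one compression technique (dynamic quantization)
# applied to a generic PyTorch transformer; the model is illustrative.
import torch
import torch.nn as nn

# A stand-in encoder; in practice this would be a pretrained BERT model.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12), num_layers=12
)

# Quantize Linear weights to int8 for a smaller model and faster CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```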
The State of Transfer Learning in NLP
This post expands on the NAACL 2019 tutorial on Transfer Learning in NLP. It highlights key insights and takeaways and provides updates based on recent ...
Natural Language Processing: Pretraining - d2l
An interactive deep learning book with code, math, and discussions, based on the NumPy interface.
Data Augmentation | How to use Deep Learning With Limited Data
This article is a comprehensive review of data augmentation techniques for deep learning, with a focus on images.
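A minimal NumPy sketch of two classic techniques such reviews cover, horizontal flip and random crop; the array shapes are illustrative assumptions.

```python
# Minimal NumPy sketch of two classic image augmentations:
# horizontal flip and random crop (shapes are illustrative).
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))  # stand-in H x W x C image

def horizontal_flip(img: np.ndarray) -> np.ndarray:
    return img[:, ::-1, :]  # reverse the width axis

def random_crop(img: np.ndarray, size: int = 200) -> np.ndarray:
    h, w, _ = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size, :]

augmented = horizontal_flip(random_crop(image))
```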