The Future of (Transfer Learning in) Natural Language Processing
Transfer Learning in Natural Language Processing (NLP): Open questions, current trends, limits, and future directions.
Tags: natural-language-processing, transfer-learning, tutorial, video

A walk through interesting papers and research directions in late 2019/early 2020 on:

- model size and computational efficiency
- out-of-domain generalization and model evaluation
- fine-tuning and sample efficiency
- common sense and inductive biases

Author: @thomwolf, Science Lead at Hugging Face Inc.