Multi-task Training with Hugging Face Transformers and NLP
A recipe for multi-task training with Transformers' Trainer and NLP datasets.

Hugging Face has been building a lot of exciting new NLP functionality lately. The newly released NLP library provides wide coverage of task datasets and metrics, as well as a simple interface for processing and caching inputs extremely efficiently. They have also recently introduced a Trainer class to the Transformers library that handles all of the training and validation logic.

However, one feature not yet supported in Hugging Face's offerings is multi-task training. While there has been some discussion about the best way to support it (1, 2), the community has not yet settled on a convention. Multi-task training has been shown to improve task performance (1, 2) and is a common experimental setting for NLP researchers.

In this Colab notebook, we will show how to use both the new NLP library and the Trainer for a multi-task training scheme.
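To make the idea concrete before diving into the notebook, here is a minimal, library-free sketch of one common multi-task scheme: a single shared encoder with one head per task, where each training step samples a task in proportion to its dataset size. All names below (`shared_encoder`, `make_head`, `multitask_batches`, the toy task data) are illustrative stand-ins, not the notebook's actual API.

```python
import random

def shared_encoder(text):
    # Stand-in for a transformer encoder shared across all tasks;
    # here it just returns a crude "feature" (the token count).
    return len(text.split())

def make_head(task_name):
    # Stand-in for a task-specific output head on top of the encoder.
    def head(features):
        return f"{task_name}:{features}"
    return head

# Toy "datasets" for two tasks (illustrative only).
tasks = {
    "rte":  ["a sentence pair", "another pair of sentences"],
    "stsb": ["one example", "two examples here", "three short examples now"],
}
heads = {name: make_head(name) for name in tasks}

def multitask_batches(tasks, steps, seed=0):
    """Yield (task_name, example) pairs, sampling the task for each step
    in proportion to its dataset size - one common multi-task scheme."""
    rng = random.Random(seed)
    names = list(tasks)
    weights = [len(tasks[n]) for n in names]
    for _ in range(steps):
        task = rng.choices(names, weights=weights)[0]
        example = rng.choice(tasks[task])
        yield task, example

outputs = []
for task, example in multitask_batches(tasks, steps=4):
    features = shared_encoder(example)      # shared forward pass
    outputs.append(heads[task](features))   # route to the task's head
```

In a real setup, the encoder and heads would be `nn.Module`s and the sampling loop would feed batches to the Trainer; the proportional sampling above is just one choice, and other schedules (e.g. temperature-scaled sampling) are common.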

Author: @zphang