NLP for Developers: Shrinking Transformers | Rasa
In this video, Rasa Senior Developer Advocate Rachael talks about different approaches to making transformer models smaller.
model-compression distillation pruning transformers quantization natural-language-processing tutorial video
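One of the approaches the video covers is quantization: storing weights in a lower-precision format such as int8 instead of float32. As a rough illustration (a minimal pure-Python sketch, not how a real toolkit like PyTorch's quantization module implements it), symmetric int8 quantization maps each weight to an integer in [-127, 127] via a per-tensor scale, and dequantization recovers an approximation:

```python
# Minimal sketch of symmetric post-training int8 quantization.
# Real frameworks do this per layer/channel and use int8 kernels;
# here we just show the scale/round/clamp round trip and its error.

def quantize_int8(weights):
    """Map floats to int8 codes plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127  # symmetric range
    codes = [max(-127, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.81, -0.52, 0.03, -1.27, 0.64]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# The reconstruction error is bounded by half the quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, approx))
```

Each weight now needs 1 byte instead of 4, at the cost of an error of at most `scale / 2` per weight; distillation and pruning trade accuracy for size in different ways.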


Author
Developer Advocate helping folks build conversational interfaces @ Rasa
Similar projects
Compressing BERT for Faster Prediction
In this blog post, we discuss ways to make huge models like BERT smaller and faster.
A PyTorch-based model distillation toolkit for natural language processing.
The Lottery Ticket Hypothesis: A Survey
Dive deeper into the lottery ticket hypothesis and review the literature after the original ICLR best paper award by Frankle & Carbin (2019).
How to Train Your Neural Net
Deep learning for various tasks in the domains of Computer Vision, Natural Language Processing, Time Series Forecasting using PyTorch 1.0+.