Projects

NLP for Developers: Shrinking Transformers | Rasa
In this video, Rasa Senior Developer Advocate Rachael talks about different approaches to making transformer models smaller; a rough pruning sketch follows below.
model-compression distillation pruning transformers
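Since the tags above mention pruning, here is a rough, generic illustration (not code from the video) of magnitude pruning applied to the linear layers of a BERT model, using PyTorch's built-in pruning utilities. The checkpoint name and the 30% sparsity level are arbitrary assumptions for the sketch.

    import torch
    import torch.nn.utils.prune as prune
    from transformers import AutoModel  # assumes Hugging Face transformers is installed

    model = AutoModel.from_pretrained("bert-base-uncased")  # illustrative checkpoint

    # Zero out the 30% of weights with the smallest L1 magnitude in each linear layer.
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # bake the zeros into the weight tensor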
TextBrewer
A PyTorch-based model distillation toolkit for natural language processing.
model-distillation natural-language-processing model-compression distillation
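To get a feel for what a distillation toolkit like TextBrewer automates, here is a minimal sketch of the classic soft-target distillation loss (Hinton et al.). This is generic PyTorch, not TextBrewer's actual API, and the temperature and weighting defaults are illustrative assumptions.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soft-target term: KL divergence between temperature-softened
        # student and teacher distributions, rescaled by T^2 so its
        # gradient magnitude matches the hard-label term.
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        # Hard-target term: ordinary cross-entropy against the true labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

In practice the teacher's logits are computed under torch.no_grad() so that only the student is updated.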
TinyBERT
TinyBERT is 7.5x smaller and 9.4x faster at inference than BERT-base, and achieves competitive performance on natural language understanding tasks.
bert tinybert distillation transformers
Compressing Bert for Faster Prediction
In this blog post, we discuss ways to make huge models like BERT smaller and faster.
bert model-compression compression inference
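One widely used way to make BERT faster at inference, in the spirit of this post, is post-training dynamic quantization. The sketch below uses PyTorch's built-in API; the checkpoint name is an illustrative assumption, not taken from the blog post.

    import torch
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased"  # illustrative checkpoint
    )
    model.eval()

    # Store linear-layer weights as int8 and quantize activations on the fly;
    # this typically shrinks the model and speeds up CPU inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )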