Projects

How to Train Your Neural Net
Deep learning for various tasks in Computer Vision, Natural Language Processing, and Time Series Forecasting using PyTorch 1.0+.
pytorch python deep-learning computer-vision
KD Lib
A PyTorch library to easily facilitate knowledge distillation for custom deep learning models.
knowledge-distillation model-compression pytorch code
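For readers new to the technique, here is a minimal sketch of vanilla knowledge distillation (Hinton et al., 2015) in plain PyTorch. It is illustrative only and does not show KD Lib's API; the temperature T and weighting alpha are hypothetical choices.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Soft targets from the teacher plus cross-entropy on the hard labels."""
        # KL divergence between temperature-softened distributions,
        # scaled by T^2 as in Hinton et al. (2015).
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    def train_step(teacher, student, x, y, optimizer):
        # The teacher is frozen; only the student learns.
        teacher.eval()
        with torch.no_grad():
            t_logits = teacher(x)
        s_logits = student(x)
        loss = distillation_loss(s_logits, t_logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()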
NLP for Developers: Shrinking Transformers | Rasa
In this video, Rasa Senior Developer Advocate Rachael talks about different approaches to making transformer models smaller.
model-compression distillation pruning transformers
The Lottery Ticket Hypothesis: A Survey
Dive deeper into the lottery ticket hypothesis and review the literature that followed the original ICLR best-paper award to Frankle & Carbin (2019).
lottery-ticket-hypothesis survey deep-learning article
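As a concrete reference point, the hypothesis is usually tested with iterative magnitude pruning plus weight rewinding; below is a minimal one-round sketch using PyTorch's built-in pruning utilities. The toy model and the 20% sparsity level are illustrative assumptions, not recommendations from the survey.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

    # Keep a copy of the initial weights: the lottery ticket procedure
    # rewinds surviving weights to their original initialization.
    init_state = {k: v.clone() for k, v in model.state_dict().items()}

    # One round of magnitude pruning: zero out the 20% of weights with
    # the smallest absolute value in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.2)

    # Rewind the remaining weights to their initial values while keeping
    # the pruning masks in place; this is a "winning ticket" candidate.
    with torch.no_grad():
        for name, module in model.named_modules():
            if isinstance(module, nn.Linear):
                module.weight_orig.copy_(init_state[f"{name}.weight"])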
A Survey of Methods for Model Compression in NLP
A look at model compression techniques applied to base-model pre-training to reduce the computational cost of prediction.
model-compression pruning knowledge-distillation precision-reduction
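Of the techniques the survey above covers, precision reduction is often the quickest to try; here is a sketch of post-training dynamic quantization with PyTorch's built-in support. The toy model and layer choices are assumptions for illustration.

    import torch
    import torch.nn as nn

    # A toy float32 model standing in for a pre-trained network.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    model.eval()

    # Post-training dynamic quantization: weights of the listed module
    # types are stored as int8; activations are quantized on the fly.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # same interface, smaller and often faster on CPU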
FasterAI
How to make your network smaller and faster with the fastai library.
model-compression fastai knowledge-distillation sparsifying
AquVitae: The Easiest Knowledge Distillation Library
AquVitae is a Python library that makes knowledge distillation easy through a very simple API. It supports both TensorFlow and PyTorch. ...
tensorflow pytorch light-weight deep-learning
Distilling Inductive Biases
The power of knowledge distillation for transferring the effect of inductive biases from one model to another.
inductive-bias knowledge-distillation model-compression research
Knowledge Transfer in Self Supervised Learning
A general framework to transfer knowledge from deep self-supervised models to shallow task-specific models.
self-supervised-learning knowledge-distillation model-compression article