Projects


Top Down Introduction to BERT with HuggingFace and PyTorch
I provide some intuition into how BERT works, taking a top-down approach (from applications down to the algorithm).
bert top-down huggingface pytorch
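To set the stage for this entry, here is a minimal sketch (my own illustration, not code from the post) of loading a pretrained BERT model and tokenizer with HuggingFace Transformers in PyTorch and extracting contextual embeddings:

```python
# Minimal sketch: pretrained BERT via HuggingFace Transformers + PyTorch.
# Illustrative only; the post's own code and model choices may differ.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT turns text into contextual embeddings.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level contextual embeddings: (batch, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```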
Tips for Successfully Training Transformers on Small Datasets
It turns out that you can train transformers on small datasets fairly easily when you use the right tricks (and have the patience to train for a very long time).
transformers small-datasets training ptb
How to Steal Modern NLP Systems with Gibberish?
It’s possible to steal BERT-based models without any real training data, even using gibberish word sequences.
bert adversarial-attacks computer-security adversarial-learning
T5 fine-tuning
A Colab notebook showcasing how to fine-tune the T5 model on various NLP tasks (especially non-text-2-text tasks, using a text-2-text approach).
natural-language-processing transformers text-2-text t5
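To illustrate the text-2-text framing this notebook refers to, here is a minimal sketch (my own example, not taken from the notebook) of one training step where a hypothetical sentiment task is cast as generating a label string:

```python
# Sketch of the text-2-text framing: input and label are both plain strings.
# The task prefix "sentiment:" and label "positive" are arbitrary choices here.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("sentiment: this movie was great", return_tensors="pt").input_ids
labels = tokenizer("positive", return_tensors="pt").input_ids

# One training step: the model learns to generate the label text.
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
```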
The Transformer Family
This post presents how the vanilla Transformer can be improved for longer attention spans, lower memory and computation consumption, RL task solving, ...
attention transformers reinforcement-learning natural-language-processing
Transformers - Hugging Face
🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.
transformers huggingface attention bert
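For reference, the library's quick-start style of usage via the pipeline API (a minimal example; the default model that gets downloaded may vary by version):

```python
# Minimal example of the 🤗 Transformers pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transformers make state-of-the-art NLP easy to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```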
Finetuning Transformers with JAX + Haiku
Walking through a port of the RoBERTa pre-trained model to JAX + Haiku, then fine-tuning the model to solve a downstream task.
jax haiku roberta transformers
Finetune: Scikit-learn Style Model Finetuning for NLP
Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.
natural-language-processing finetuning pretraining transformers
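A minimal sketch of what the scikit-learn-style interface looks like, assuming Finetune's documented Classifier fit/predict pattern (names and details may differ across versions):

```python
# Sketch of a scikit-learn-style finetuning workflow, assuming Finetune's
# Classifier exposes fit/predict as in its documentation; illustrative only.
from finetune import Classifier

train_texts = ["great service", "terrible experience"]   # hypothetical data
train_labels = ["positive", "negative"]

model = Classifier()   # wraps a pretrained transformer under the hood
model.fit(train_texts, train_labels)
predictions = model.predict(["would recommend"])
print(predictions)
```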
IntelliCode Compose: Code Generation Using Transformer
A code completion tool capable of predicting sequences of code tokens of arbitrary types, generating up to entire lines of syntactically correct ...
code-generation transformers natural-language-processing tutorial
Synthesizer: Rethinking Self-Attention in Transformer Models
Dot-product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required?
synthesizers transformers attention natural-language-processing