Using Different Decoding Methods for LM with Transformers
A look at different decoding methods for generating subsequent tokens in language modeling.
language-modeling decoder transformers huggingface greedy-search beam-search sampling natural-language-processing article code survey tutorial
Objectives & Highlights

This blog post gives a brief overview of different decoding strategies and, more importantly, shows how you can implement them with very little effort using the popular transformers library!
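For a quick sense of what the post covers, here is a minimal sketch (not taken from the post itself) of how the three strategies named in the tags, greedy search, beam search, and sampling, map onto the library's generate method. The gpt2 checkpoint and all generation parameters below are illustrative assumptions, not the post's exact settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the post may use a different checkpoint.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("I enjoy walking with my cute dog", return_tensors="pt").input_ids

# Greedy search: pick the single highest-probability token at every step.
greedy = model.generate(
    input_ids, max_new_tokens=40, pad_token_id=tokenizer.eos_token_id
)

# Beam search: keep the num_beams most likely partial sequences at each step.
beam = model.generate(
    input_ids, max_new_tokens=40, num_beams=5, early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
)

# Sampling: draw from the next-token distribution, truncated by top-k / top-p.
sampled = model.generate(
    input_ids, max_new_tokens=40, do_sample=True, top_k=50, top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

for name, out in [("greedy", greedy), ("beam", beam), ("sampling", sampled)]:
    print(f"--- {name} ---")
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

All three calls go through the same generate API; only the keyword arguments change, which is the "very little effort" the post refers to.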


Authors
Patrick von Platen (@patrickvonplaten)
Similar projects
The Illustrated BERT, ELMo, and co.
How NLP cracked transfer learning.
AllenNLP Interpret
A Framework for Explaining Predictions of NLP Models
Lecture 10 | Recurrent Neural Networks
Discusses the use of recurrent neural networks for modeling sequence data.
Lazynlp
Library to scrape and clean web pages to create massive datasets.