ELECTRA: Pre-training Text Encoders as Discriminators
PyTorch implementation of the ELECTRA model from the paper "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators".
language-modeling discriminators generators pytorch natural-language-processing
Objectives & Highlights

This pre-training method, which trains the model to detect input tokens that have been replaced by samples from a small generator network, proves to be significantly more efficient than masked language modeling.
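The replaced-token-detection objective can be sketched in a few lines of PyTorch. This is a hypothetical toy illustration, not the repository's actual code: the "generator" and "discriminator" here are just embedding tables with linear heads standing in for full transformer encoders, and all sizes are made up.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, hidden, seq_len, batch = 100, 32, 10, 2

# Toy "generator": a stand-in for the small masked language model.
gen_embed = nn.Embedding(vocab_size, hidden)
gen_head = nn.Linear(hidden, vocab_size)

# Toy "discriminator": scores every token as original (0) or replaced (1).
disc_embed = nn.Embedding(vocab_size, hidden)
disc_head = nn.Linear(hidden, 1)

tokens = torch.randint(0, vocab_size, (batch, seq_len))
mask = torch.rand(batch, seq_len) < 0.15  # mask out ~15% of positions

# The generator samples plausible replacements at the masked positions.
with torch.no_grad():
    logits = gen_head(gen_embed(tokens))                    # (batch, seq, vocab)
    sampled = torch.distributions.Categorical(logits=logits).sample()
corrupted = torch.where(mask, sampled, tokens)

# A position is labeled "replaced" only if the sampled token actually
# differs from the original (a lucky correct sample counts as original).
labels = (corrupted != tokens).float()

# Per-token binary classification loss over the whole sequence.
scores = disc_head(disc_embed(corrupted)).squeeze(-1)       # (batch, seq)
loss = nn.functional.binary_cross_entropy_with_logits(scores, labels)
print(loss.item())
```

Because the discriminator gets a learning signal from every input position rather than only the 15% that were masked, this objective is more sample-efficient than standard masked language modeling.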

Don't forget to add the tag @lonePatient in your comments.

Anything is possible, never say never. weibo: https://weibo.com/277974397
Similar projects
Using Different Decoding Methods for LM with Transformers
A look at different decoding methods for generating subsequent tokens in language modeling.
Custom Classifier on Top of Bert-like Language Model
Take a pre-trained language model and build a custom classifier on top of it.
Finetune: Scikit-learn Style Model Finetuning for NLP
Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.