Finetune: Scikit-learn Style Model Finetuning for NLP
Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.
Tags: natural-language-processing, finetuning, pretraining, transformers, language-modeling
Objectives & Highlights

Finetune currently supports TensorFlow implementations of the following models:
• BERT, from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
• RoBERTa, from "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
• GPT, from "Improving Language Understanding by Generative Pre-Training"
• GPT2, from "Language Models are Unsupervised Multitask Learners"
• TextCNN, from "Convolutional Neural Networks for Sentence Classification"
• Temporal Convolution Network (TCN), from "An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling"
• DistilBERT, from "Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT"
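This page does not show the API itself, so the snippet below is a minimal sketch of what the advertised scikit-learn-style workflow would look like. The `Classifier` class, the `finetune.base_models` import path, and the `base_model` keyword are assumptions drawn from the library's documented style and may differ from the installed version.

```python
# Minimal sketch of a scikit-learn-style finetuning workflow.
# Assumptions (not confirmed by this page): the `Classifier` class,
# the `finetune.base_models` module, and the `base_model` keyword.
from finetune import Classifier
from finetune.base_models import BERT

train_texts = [
    "finetune exposes a fit/predict interface",
    "the weather was dreadful all weekend",
]
train_labels = ["nlp", "other"]

# Any of the supported base models (RoBERTa, GPT2, TextCNN, TCN, ...)
# would plug into the same keyword in this sketch.
model = Classifier(base_model=BERT)

# Finetune the pretrained base model on the labeled examples,
# then predict labels for unseen text.
model.fit(train_texts, train_labels)
predictions = model.predict(["transformers make transfer learning easy"])
print(predictions)
```

The fit/predict pairing is what the "scikit-learn style" in the title refers to: finetuning a pretrained model for a downstream task is reduced to the familiar estimator interface.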

Shared by @madisonmay, Machine Learning Architect at @IndicoDataSolutions.
Similar projects
practicalAI
A practical approach to machine learning.
A Survey of Long-Term Context in Transformers
Over the past two years, the NLP community has developed a veritable zoo of methods to mitigate the expense of multi-head self-attention.
Using Different Decoding Methods for LM with Transformers
A look at different decoding methods for generating subsequent tokens in language modeling.
Custom Classifier on Top of Bert-like Language Model
Take a pre-trained language model and build a custom classifier on top of it.