Finetune: Scikit-learn Style Model Finetuning for NLP
Finetune is a Python library that wraps state-of-the-art pretrained NLP models in a scikit-learn-style fit/predict interface, making them easy to apply to a wide variety of downstream tasks.
natural-language-processing finetuning pretraining transformers language-modeling toolkit
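A minimal sketch of that scikit-learn-style workflow, following the Classifier estimator shown in the project's README (the training data here is made up):

```python
from finetune import Classifier

# Placeholder data; any list of strings with matching labels works.
trainX = ["This movie was great!", "Terrible plot, worse acting."]
trainY = ["positive", "negative"]

model = Classifier()        # wraps a pretrained base language model
model.fit(trainX, trainY)   # finetunes the base model on the labeled text
predictions = model.predict(["An instant classic."])
```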
Objectives & Highlights

Finetune currently supports TensorFlow implementations of the following models, any of which can serve as the pretrained base (see the sketch after this list):

• BERT, from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
• RoBERTa, from "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
• GPT, from "Improving Language Understanding by Generative Pre-Training"
• GPT2, from "Language Models are Unsupervised Multitask Learners"
• TextCNN, from "Convolutional Neural Networks for Sentence Classification"
• Temporal Convolution Network, from "An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling"
• DistilBERT, from "Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT"
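A sketch of how a base model is selected, assuming the finetune.base_models module and base_model keyword documented by the project (placeholder data again):

```python
from finetune import Classifier
from finetune.base_models import BERT, GPT2  # other bases (RoBERTa, TextCNN, ...) live here too

trainX = ["This movie was great!", "Terrible plot, worse acting."]
trainY = ["positive", "negative"]

# Swapping the pretrained backbone leaves the fit/predict interface unchanged.
model = Classifier(base_model=BERT)
model.fit(trainX, trainY)
```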


Authors
@madisonmay, Machine Learning Architect at @IndicoDataSolutions
Similar projects
Self Supervised Representation Learning in NLP
An overview of self-supervised pretext tasks in Natural Language Processing
Machine Learning Basics
A practical set of notebooks on machine learning basics, implemented in both TF2.0 + Keras and PyTorch.
The Big Bad NLP Database
A collection of 400+ NLP datasets with papers included.
BLINK: Better entity LINKing
An entity linking Python library that uses Wikipedia as the target knowledge base.