
gpt-2-finetuning

This is a package for finetuning GPT-2 models.

It is based on the GPT-2 code released by OpenAI (see Further reading below).

Usage

Install the package

pip install gpt_2_finetuning

Download GPT-2 models from the terminal

# Available model sizes: 124M, 355M, 774M
# Example command to download model
download_gpt2_model 124M

Example usage:

Generating samples from a GPT-2 model
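
A minimal sketch of unconditional sampling. It assumes this package re-exports the upstream OpenAI modules (encoder, model, sample) and that download_gpt2_model saves checkpoints under models/; both are unverified assumptions, so treat this as an illustration rather than the package's confirmed API:

import json
import os

import tensorflow as tf

from gpt_2_finetuning import encoder, model, sample  # assumed module layout

model_name = "124M"
models_dir = "models"  # assumed default download location

# Load the byte-pair encoder and the model hyperparameters
enc = encoder.get_encoder(model_name, models_dir)
hparams = model.default_hparams()
with open(os.path.join(models_dir, model_name, "hparams.json")) as f:
    hparams.override_from_dict(json.load(f))

with tf.Session(graph=tf.Graph()) as sess:
    # Build the sampling graph: one unconditional sequence of 50 tokens
    output = sample.sample_sequence(
        hparams=hparams,
        length=50,
        start_token=enc.encoder["<|endoftext|>"],
        batch_size=1,
        temperature=1.0,
        top_k=40,
    )

    # Restore the downloaded checkpoint, then sample and decode
    saver = tf.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint(os.path.join(models_dir, model_name)))
    print(enc.decode(sess.run(output)[0]))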

Finetuning a GPT-2 model on Shakespeare's works
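
A sketch of the finetuning step. The train.main() entry point and its arguments below are hypothetical, modeled on the upstream train.py; check the package's bundled example scripts for the real interface. shakespeare.txt is a placeholder path to any plain-text corpus:

from gpt_2_finetuning import train  # hypothetical import path

train.main(
    dataset="shakespeare.txt",  # placeholder: plain-text training corpus
    model_name="124M",          # base checkpoint from download_gpt2_model
    batch_size=1,               # sequences per optimization step
    sample_every=100,           # hypothetical: print a sample every N steps
    save_every=1000,            # hypothetical: write a checkpoint every N steps
)

In the upstream code, finetuned checkpoints are written to a separate run directory rather than back into models/124M; sampling from the finetuned model then restores from that directory instead.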

Further reading

Code from the paper "Language Models are Unsupervised Multitask Learners".

See more details in OpenAI's blog post.

GPT-2 samples

WARNING: Samples are unfiltered and may contain offensive content.

License

MIT
