
I want to perform a text-generation task in a Flask app hosted on a web server. However, when downloading the GPT models, the Elastic Beanstalk managed EC2 instance crashes because the download takes too much time and memory:

from transformers.tokenization_openai import OpenAIGPTTokenizer
from transformers.modeling_tf_openai import TFOpenAIGPTLMHeadModel
model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")

These are the lines causing the issue. The GPT model is approximately 445 MB, and I am using the transformers library. Instead of downloading the model at this line, I was wondering if I could pickle the model and bundle it as part of the repository. Is that possible with this library? Otherwise, how can I preload this model to avoid the issues I am having?

Josh Zwiebel

2 Answers


Approach 1:

Search for the model here: https://huggingface.co/models

Download the model from these links (a scripted version is sketched after the list):

pytorch-model: https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-pytorch_model.bin

tensorflow-model: https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-tf_model.h5

The config file: https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-config.json

Source: https://huggingface.co/transformers/_modules/transformers/configuration_openai.html#OpenAIGPTConfig
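If you prefer to script the manual download, here is a minimal sketch using requests. It assumes the S3 URLs above are still reachable, and that the local file names tf_model.h5 and config.json are what from_pretrained expects to find in the folder:

import os
import requests

FILES = {
    "tf_model.h5": "https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-tf_model.h5",
    "config.json": "https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-config.json",
}

os.makedirs("model", exist_ok=True)
for filename, url in FILES.items():
    # Stream the download so the ~445 MB file is never held in memory at once
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        with open(os.path.join("model", filename), "wb") as f:
            for chunk in response.iter_content(chunk_size=1 << 20):
                f.write(chunk)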

You can manually download the model (in your case, the TensorFlow model .h5 and the config.json file) and put it in a folder (let's say model) in the repository. If needed, you can compress the model before committing it and decompress it once it is on the EC2 instance; a sketch follows.
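A rough sketch of the compress-then-decompress idea with the standard library's tarfile module (the model folder name follows the example above):

import tarfile

# Locally: compress the model folder before committing it to the repository
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model")

# On the EC2 instance (e.g. at app startup): decompress before loading
with tarfile.open("model.tar.gz", "r:gz") as tar:
    tar.extractall()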

Then, you can load the model in your web server directly from that path instead of downloading it (the model folder contains the .h5 and config.json):

model = TFOpenAIGPTLMHeadModel.from_pretrained("model")  # the model folder contains the .h5 and config.json
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")  # this is a light download
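To connect this to the Flask app from the question, here is a minimal sketch (the /generate route and payload shape are illustrative, not from the original post) that loads the model once at startup instead of once per request:

from flask import Flask, jsonify, request
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTLMHeadModel

app = Flask(__name__)

# Loaded once when the process starts, from the bundled local folder
model = TFOpenAIGPTLMHeadModel.from_pretrained("model")
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.json["prompt"]
    input_ids = tokenizer.encode(prompt, return_tensors="tf")
    output_ids = model.generate(input_ids, max_length=50)
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return jsonify({"generated_text": text})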

Approach 2:

Instead of using the links, you can download the model on your local machine using the conventional method:

from transformers.tokenization_openai import OpenAIGPTTokenizer
from transformers.modeling_tf_openai import TFOpenAIGPTLMHeadModel
model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")

This downloads the model. Now you can save the weights to a folder using the save_pretrained function:

model.save_pretrained('/content/') # saving inside content folder

Now, the content folder should contain a .h5 file and a config.json.

Just upload them to the repository and load from that folder.
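Tokenizers expose the same save_pretrained/from_pretrained pair, so you can bundle the tokenizer files alongside the model and avoid any download at startup. A sketch, reusing the /content/ folder from above:

# Save both the model and the tokenizer locally
model.save_pretrained('/content/')
tokenizer.save_pretrained('/content/')

# On the server, load both from the folder bundled in the repository
model = TFOpenAIGPTLMHeadModel.from_pretrained('/content/')
tokenizer = OpenAIGPTTokenizer.from_pretrained('/content/')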

Zabir Al Nazi
  • For some reason, approach 2 (didn't try #1) doesn't work with the tokenizer, at least with the Helsinki translation model. It throws this error `module transformers.models.mbart50 has no attribute MarianTokenizerFast` which is odd. Pre-installing the model this way works great! – Kevin Danikowski Aug 25 '21 at 18:05

Open https://huggingface.co/models and search for the model you want. Click on the model name and finally click on "List all files in model". You will get a list of the files you can download.
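If you'd rather do this from code, the separate huggingface_hub package (pip install huggingface_hub) can list and fetch the same files; a sketch:

from huggingface_hub import list_repo_files, snapshot_download

# List every file in the model repository
print(list_repo_files("openai-gpt"))

# Download the whole repository to a local folder and get its path
local_dir = snapshot_download("openai-gpt")
print(local_dir)  # this path can be passed to from_pretrained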

Manuel Alves