
I am using Transformers and DistilBERT for text classification. My dataset has 700,000 rows, so it is fairly large. I am running my code on Google Colab. I used this code before building my model.

from sklearn.model_selection import train_test_split
from transformers import DistilBertTokenizer

X = dfreadtrain['review_text'].values
y = dfreadtrain['rating'].values
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42, shuffle=True)
tokenizer = DistilBertTokenizer.from_pretrained(MODEL_NAME)
train_encodings = tokenizer(list(x_train), truncation=True, padding=True)
test_encodings = tokenizer(list(x_test), truncation=True, padding=True)
print(type(train_encodings))
print(type(train_encodings))

It took many hours to run this part, but as you know, Google Colab stops the session and I lose the results. Is it possible to save train_encodings and test_encodings to a file? They are <class 'transformers.tokenization_utils_base.BatchEncoding'> objects.
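For what it's worth, one approach I have been considering (a sketch, not something I have fully verified): since no return_tensors argument is passed, the BatchEncoding wraps plain Python lists, so it should be picklable with the standard library. The save_encodings/load_encodings helper names here are my own, and the file path is just an example:

```python
import pickle

def save_encodings(encodings, path):
    # Serialize the (assumed picklable) encodings object to disk.
    with open(path, "wb") as f:
        pickle.dump(encodings, f)

def load_encodings(path):
    # Restore the encodings object in a later Colab session.
    with open(path, "rb") as f:
        return pickle.load(f)

# Example usage (path is hypothetical; on Colab you would likely
# point this at a mounted Google Drive folder so it survives restarts):
# save_encodings(train_encodings, "/content/drive/MyDrive/train_encodings.pkl")
# train_encodings = load_encodings("/content/drive/MyDrive/train_encodings.pkl")
```

Would this round-trip the BatchEncoding intact, or is there a recommended serialization method on the object itself?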

Many thanks in advance.

GSandro_Strongs
