
I used to run this script on GPUs on GCP, but I am now trying to run it on TPUs. As far as I can tell, TPUs should now work fine with the transformers pipeline.

However, setting the device parameter throws RuntimeError: Cannot set version_counter for inference tensor

from transformers import pipeline
import torch
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # acquire the TPU device

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
    return_all_scores=True,
    device=device,
)

def detect_emotions(emotion_input):
    """Model inference section."""
    prediction = classifier(emotion_input)
    output = {}
    for emotion in prediction[0]:
        output[emotion["label"]] = emotion["score"]
    return output


detect_emotions("'Rest in Power: The Trayvon Martin Story' takes an emotional look back at the shooting that divided a nation")

How would this be rectified? What does this error even mean?

DarknessPlusPlus
    From pytorch.org/docs/stable/notes/autograd.html: "Every tensor keeps a version counter, that is incremented every time it is marked dirty in any operation. When a Function saves any tensors for backward, a version counter of their containing Tensor is saved as well." Inference tensors do not have version counters (pytorch.org/cppdocs/notes/inference_mode.html), so perhaps somewhere in the code a backward (autograd) operation is being applied to an inference tensor. – gkroiz Feb 14 '23 at 16:37
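    The restriction described above can be reproduced on CPU without any TPU involved. This is a minimal sketch (plain PyTorch, not the questioner's pipeline): a tensor created under torch.inference_mode() carries no version counter, so any operation that would need to record autograd state on it, such as an in-place update outside inference mode, raises a RuntimeError:

```python
import torch

# Tensors created inside inference_mode are "inference tensors":
# PyTorch skips allocating their version counter entirely.
with torch.inference_mode():
    t = torch.ones(3)

print(t.is_inference())  # True

# Mutating an inference tensor outside inference mode needs to bump
# a version counter that does not exist, so PyTorch raises a RuntimeError.
try:
    t.add_(1)
except RuntimeError as e:
    print("RuntimeError:", e)
```

    This suggests the pipeline (or something it calls) is producing inference tensors and then touching them in a context that expects full autograd bookkeeping.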

0 Answers