Basically, I am trying to have GPT-2 respond to the prompt stored in the variable text, and I am running into this error:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

here is my code thus far:

import gradio as gr
from transformers import pipeline, GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')  # swap in 'gpt2-xl' for a much larger model
model = GPT2LMHeadModel.from_pretrained('gpt2', pad_token_id=tokenizer.eos_token_id)

text = "what is natural language processing?"
encoded_input = tokenizer.encode(text, return_tensors='pt')

#print(tokenizer.decode((encoded_input[0][0]))) # works well to here

def generate_text(inp):
    input_ids = tokenizer.encode(inp, return_tensors='tf')
    beam_output = model.generate(input_ids, max_length=100, num_beams=5, no_repeat_ngram_size=2, early_stopping=True)
    output = tokenizer.decode(beam_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    return ".".join(output.split(".")[:-1]) + "."

output_text = gr.outputs.Textbox() # works well to here
text1 = generate_text(text) # BREAKS HERE

Could anyone help me figure out what I'm doing wrong? Thanks.

1 Answer

The problem is that you are using return_tensors='tf' instead of return_tensors='pt'. GPT2LMHeadModel is the PyTorch version of the model, so model.generate expects a PyTorch tensor; feeding it the TensorFlow tensor produced by 'tf' is what triggers the ambiguous-truth-value ValueError.

As per the documentation (link):

return_tensors (str, optional, defaults to None) – Can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.
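
You can see the difference directly. A minimal check (this assumes both PyTorch and TensorFlow are installed, since the 'tf' option needs TensorFlow):

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
pt_ids = tokenizer.encode("hello world", return_tensors='pt')  # PyTorch torch.Tensor
tf_ids = tokenizer.encode("hello world", return_tensors='tf')  # TensorFlow EagerTensor
print(type(pt_ids), type(tf_ids))
# The PyTorch model's generate() can only consume the first kind.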

The following code works for me:

import gradio as gr
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')  # swap in 'gpt2-xl' for a much larger model
model = GPT2LMHeadModel.from_pretrained('gpt2', pad_token_id=tokenizer.eos_token_id)

text = "what is natural language processing?"
encoded_input = tokenizer.encode(text, return_tensors='pt')


def generate_text(inp):
    input_ids = tokenizer.encode(inp, return_tensors='pt')  # 'pt' gives a PyTorch tensor, which this PyTorch model expects
    beam_output = model.generate(input_ids, max_length=100, num_beams=5, no_repeat_ngram_size=2, early_stopping=True)
    output = tokenizer.decode(beam_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    return ".".join(output.split(".")[:-1]) + "."  # drop any trailing partial sentence

output_text = gr.outputs.Textbox() # works well to here
text1 = generate_text(text) # NOW IT WORKS!!!

Generated text from the model:

what is natural language processing?

This is a question that has been debated for a long time, and I think it's important to understand what it is that we're talking about here. It's not something that's going to happen overnight, but it will happen in a very, very short period of time. We've got to be very careful about what we say and how we talk about it, because if we don't, it can be misinterpreted as a sign of weakness.
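
As a follow-up: since you already create output_text = gr.outputs.Textbox(), you can wire generate_text into an interface along these lines (a sketch against the same old-style gr.inputs/gr.outputs API your snippet uses; newer Gradio versions renamed these components):

input_text = gr.inputs.Textbox(lines=2, label="Prompt")
gr.Interface(fn=generate_text, inputs=input_text, outputs=output_text).launch()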

Colab demo

Nomiluks