
Given the following code, why does model.generate() return a summary? Where is it told to perform summarization rather than some other task, and where can I find the documentation for this?

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = 'google/flan-t5-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset_name = 'knkarthick/dialogsum'
dataset = load_dataset(dataset_name)

for i in example_indices:  # example_indices is defined earlier in my notebook
    dialogue = dataset['test'][i]['dialogue']
    inputs = tokenizer(dialogue, return_tensors='pt')

    ground_truth = dataset['test'][i]['summary']

    model_summary = model.generate(inputs['input_ids'], max_new_tokens=50)
    summary = tokenizer.decode(model_summary[0], skip_special_tokens=True)
    print(summary)
Zero
user552231

2 Answers


Well, it's all in the dataset:

dataset_name = 'knkarthick/dialogsum'

DialogSum: A Real-life Scenario Dialogue Summarization Dataset
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues with corresponding manually labeled summaries and topics.
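
You can inspect one of those dialogue/summary pairs directly; here is a minimal sketch, using the same dataset and field names as your own loop:

from datasets import load_dataset

dataset = load_dataset("knkarthick/dialogsum")
example = dataset["test"][0]

print(example["dialogue"])  # the raw multi-turn conversation
print(example["summary"])   # the manually written reference summary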

Transformer-based models like T5, which you are using, are not explicitly told what to do at inference time. They simply learn to map an input sequence to an output sequence. During training, the model was repeatedly exposed to a particular pattern (input: dialogue, output: summary), so when you give it a similar input at inference time, it is likely to produce a similar output.

So, to summarize (no pun intended): this isn't any default behaviour of model.generate; it is simply a consequence of the data the model was trained on.
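
As a quick sanity check of that point, here is a minimal sketch (the toy dialogue is made up and the outputs will vary): the exact same model.generate() call behaves differently when the instruction in the input text changes, which shows that no task is hard-wired into generate itself.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Made-up toy dialogue, purely for illustration.
dialogue = ("#Person1#: I'm starving, let's grab lunch. "
            "#Person2#: Sure, the new cafe around the corner looks good.")

# Same generate() call, two different instructions -> two different behaviours.
for prompt in [f"Summarize the following conversation:\n{dialogue}",
               f"Translate the following to German:\n{dialogue}"]:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output = model.generate(input_ids, max_new_tokens=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))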

NullDev

FLAN-T5's ability to generate summaries for the dialogues in the dialogsum dataset is a result of its multitask fine-tuning process. During multitask fine-tuning, FLAN-T5 has been trained on a diverse range of tasks, including summarization, review rating, code translation, and entity recognition, among others. This training involves providing the model with examples and instructions for each task, guiding it on how to respond appropriately.

In the case of the dialogsum dataset, the fine-tuning process has taught FLAN-T5 to recognize and respond to prompts or instructions that specifically ask for a summary of a given conversation. The fine-tuning dataset likely contains numerous examples where the model has learned to generate summaries based on prompts like "Summarize the conversation," "Briefly summarize the dialogue," or other similar phrasings.

These instructions are repeated across the training data, allowing the model to associate such prompts with the task of generating summaries. As a result, when presented with a conversation from the dialogsum dataset and a prompt that explicitly asks for a summary, FLAN-T5's fine-tuned knowledge directs it to perform the summarization task rather than any other task it might have learned during the multitask fine-tuning process.

In essence, FLAN-T5's ability to generate summaries on the dialogsum dataset is a product of its training history and the consistent reinforcement of summarization prompts during the fine-tuning process. This targeted training ensures that FLAN-T5 is capable of responding appropriately to instructions related to generating summaries, even when presented with conversations it hasn't encountered before.
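
Concretely, such an instruction-style prompt might look like the sketch below, which reuses the model and dataset from your code; the exact wording ("Summarize the following conversation.") is just one possible phrasing, not something required by the library:

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
dataset = load_dataset("knkarthick/dialogsum")

dialogue = dataset["test"][0]["dialogue"]

# Wrap the dialogue in an explicit instruction, as described above.
prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=50)

print(tokenizer.decode(output[0], skip_special_tokens=True))  # model output
print(dataset["test"][0]["summary"])                          # reference summary, for comparison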

Sheykhmousa