
I am fine-tuning the GPT-2 model to answer questions from a given faq.txt. There is some issue with the answer generated by the code below. I am assuming I have not done the encoding/decoding of questions and answers correctly.

Code -

import torch
from torch.utils.data import Dataset, DataLoader
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config

class FAQDataset(Dataset):
    def __init__(self, data_file, tokenizer):
        self.tokenizer = tokenizer
        self.inputs = []
        self.targets = []

        with open(data_file, 'r') as file:
            lines = file.readlines()

            # Pair consecutive lines: question line, then answer line
            for i in range(0, len(lines)-1, 2):
                question = lines[i].strip()
                answer = lines[i+1].strip()
                self.inputs.append(question)
                self.targets.append(answer)

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, index):
        inputs = self.tokenizer.encode(self.inputs[index], add_special_tokens=True)
        targets = self.tokenizer.encode(self.targets[index], add_special_tokens=True)
        return torch.tensor(inputs), torch.tensor(targets)
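Since GPT-2's tokenizer has no pad token by default and the questions/answers tokenize to different lengths, I believe the DataLoader would also need a collate function to pad each batch. A minimal sketch of what I have in mind (the `pad_collate` name and the pad id of 0 are my own assumptions, not from my working code):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch, pad_id=0):
    # batch is a list of (input_ids, target_ids) tensor pairs;
    # pad each side to the longest sequence in the batch
    inputs, targets = zip(*batch)
    inputs = pad_sequence(inputs, batch_first=True, padding_value=pad_id)
    targets = pad_sequence(targets, batch_first=True, padding_value=pad_id)
    return inputs, targets
```

This would be passed as `collate_fn=pad_collate` to the DataLoader.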
# Load the GPT-2 tokenizer and model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Load the training dataset
dataset = FAQDataset('faq.txt', tokenizer)

# Define the training parameters
batch_size = 4
num_epochs = 3
learning_rate = 1e-5

# Create the data loader
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

# Set the model in training mode
model.train()

# Define the optimizer and the loss function
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
criterion = torch.nn.CrossEntropyLoss(ignore_index=tokenizer.pad_token_id)

# Fine-tune the model
for epoch in range(num_epochs):
    total_loss = 0

    for inputs, targets in data_loader:
        optimizer.zero_grad()

        # Forward pass
        outputs = model(inputs, labels=targets)
        loss = criterion(outputs.logits.view(-1, tokenizer.vocab_size), targets.view(-1))

        # Backward pass and optimization
        loss.backward()
        optimizer.step()

        total_loss += loss.item()

    avg_loss = total_loss / len(data_loader)
    print(f"Epoch {epoch+1}/{num_epochs}, Loss: {avg_loss}")

# Save the fine-tuned model
model.save_pretrained('fine-tuned-gpt2')
tokenizer.save_pretrained('fine-tuned-gpt2')

Inference script that takes the user's question and generates an answer -

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned model and tokenizer
model = GPT2LMHeadModel.from_pretrained('fine-tuned-gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('fine-tuned-gpt2')

# Set the model to evaluation mode
model.eval()

# User input
user_question = "Where is Paris?"

# Generate the answer using the fine-tuned model
input_ids = tokenizer.encode(f"Q: {user_question}\nA:", return_tensors='pt')
output = model.generate(input_ids, max_length=100, num_return_sequences=1)
generated_answer = tokenizer.decode(output[:, input_ids.shape[-1]:][0], skip_special_tokens=True)

print(generated_answer)

The answer generated is "!!! !!!".

Any help please?

faq.txt looks like this:

Q: 'Where is Paris?'
A: 'Paris is in France.'
Q: 'Where is Athens'
A: 'Greece'
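From what I have read, causal LMs like GPT-2 are usually fine-tuned on a single concatenated string per Q/A pair rather than separate input/target encodings, which may be where my setup goes wrong. A sketch of the string I think each example should become (the `build_example` helper is hypothetical, not part of my code above):

```python
def build_example(question, answer, eos="<|endoftext|>"):
    # Concatenate question and answer into one training string,
    # terminated with GPT-2's end-of-text token so the model
    # learns where an answer ends.
    return f"{question}\n{answer}\n{eos}"

print(build_example("Q: 'Where is Paris?'", "A: 'Paris is in France.'"))
```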