I am trying to use the Hugging Face `generate()` function for a sequence generation task. My model uses an encoder-decoder architecture, so I can't really do prompting in the usual decoder-only sense. What I can do instead is force the model to start generating tokens right after a given prompt, so that the output contains the completed text. Basically, I want to provide a context (to the encoder) and a prompt (to the decoder). Does anyone know how I can achieve this? A rough sketch of what I have in mind is below.
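
For illustration, here is roughly what I have been attempting — a minimal sketch assuming a BART-style seq2seq checkpoint (the model name, context, and prompt strings are just placeholders, not my real setup), where I pass the prompt tokens to `generate()` as `decoder_input_ids` so decoding continues right after the prompt. I'm not sure this is the intended way to do it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint; my real model is a different encoder-decoder.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

context = "Some long context that goes into the encoder."   # placeholder
prompt = "The answer is"                                     # generation should continue after this

# Encoder side: the context.
encoder_inputs = tokenizer(context, return_tensors="pt")

# Decoder side: start token followed by the prompt tokens (no EOS),
# so the decoder is forced to begin with the prompt and keep going.
prompt_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids
start = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
decoder_input_ids = torch.cat([start, prompt_ids], dim=-1)

output_ids = model.generate(
    input_ids=encoder_inputs.input_ids,
    attention_mask=encoder_inputs.attention_mask,
    decoder_input_ids=decoder_input_ids,
    max_new_tokens=50,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```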
