I am trying to replicate the results of this demo, whose author primes GPT-3 with just the following text:
gpt.add_example(Example('apple', 'slice, eat, mash, cook, bake, juice'))
gpt.add_example(Example('book', 'read, open, close, write on'))
gpt.add_example(Example('spoon', 'lift, grasp, scoop, slice'))
gpt.add_example(Example('apple', 'pound, grasp, lift'))
I only have access to GPT-2, via the Hugging Face Transformers library. How can I prime GPT-2 large on Hugging Face to replicate the above examples? The issue is that the text-generation API takes a single prompt string, so one doesn't get to supply the input and corresponding output as separate example pairs (as the author of the GPT-3 demo did above with Example objects).
Similarly, this tutorial describes text generation with Hugging Face Transformers, but it has no example that clearly shows how to prime the model with paired input/output examples.
Does anyone know how to do this?
Desired output: for the input "potato", GPT-2 should return something like "peel, slice, cook, mash, bake" (as in the GPT-3 demo: https://www.buildgpt3.com/post/41/). Obviously the exact list of output verbs won't be the same, since GPT-2 and GPT-3 are not identical models.
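My current guess is to pack the input/output pairs into one text prompt and let GPT-2 continue after the final "output:". Here's a sketch of what I mean; the `build_prompt` helper and the `input:`/`output:` labels are my own assumptions about how the GPT-3 `Example` objects get serialized, not anything from the demo or the Transformers docs:

```python
# Few-shot pairs copied from the demo.
examples = [
    ("apple", "slice, eat, mash, cook, bake, juice"),
    ("book", "read, open, close, write on"),
    ("spoon", "lift, grasp, scoop, slice"),
]

def build_prompt(pairs, query):
    """Serialize (input, output) pairs into one prompt string,
    ending with an open 'output:' line for the model to complete.
    (Hypothetical format -- my guess at what Example() produces.)"""
    lines = []
    for inp, out in pairs:
        lines.append(f"input: {inp}")
        lines.append(f"output: {out}")
    lines.append(f"input: {query}")
    lines.append("output:")  # the model should continue from here
    return "\n".join(lines)

prompt = build_prompt(examples, "potato")
print(prompt)

# Then feed the prompt to GPT-2 large (untested sketch; the model
# download is large, so this part is commented out):
# from transformers import GPT2LMHeadModel, GPT2Tokenizer
# tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
# model = GPT2LMHeadModel.from_pretrained("gpt2-large")
# ids = tokenizer(prompt, return_tensors="pt").input_ids
# out = model.generate(ids, max_new_tokens=20, do_sample=True, top_p=0.9)
# print(tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```

Is concatenating examples into a single prompt like this the right way to "prime" GPT-2, or is there a dedicated API for it?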