It is hard to give a good summary of everything that happens inside GPT-3, but I will try.
First the model encodes the word "Quack" into tokens, and each token gets an embedding representation. These embeddings are then passed through the decoder components of the model, going through several neural network layers. Once the first decoder transformer block processes the tokens, it sends its resulting vectors up the stack to be processed by the next block. The process is identical in each block, but each block has its own weights in both its self-attention and neural network sublayers. At the end you get an array of output token probabilities, and you use that array (or parts of it) to select what the model considers the most likely combination of tokens for the output. These tokens are decoded back into normal text, and you get your rant against cell therapy back.
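To make that flow concrete, here is a tiny, heavily simplified sketch of the same pipeline in plain NumPy. All the sizes, weights, and token ids are made up, and real GPT-3 adds things this omits (positional embeddings, causal masking, layer norm, multi-head attention), but the shape of the computation — embed, run a stack of identical blocks each with its own weights, project back to vocabulary probabilities, pick a token — is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE, D_MODEL, N_BLOCKS = 50257, 64, 4   # toy sizes; GPT-3 uses far larger values

# Token embedding table (learned in the real model, random here)
embedding = rng.standard_normal((VOCAB_SIZE, D_MODEL)) * 0.02

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decoder_block(x, w):
    """One block: simplified self-attention followed by a feed-forward sublayer.
    Every block runs the same computation, but with its own weights."""
    q, k, v = x @ w["wq"], x @ w["wk"], x @ w["wv"]
    scores = softmax(q @ k.T / np.sqrt(D_MODEL))               # attention weights over the tokens
    x = x + scores @ v                                         # residual connection
    x = x + np.maximum(0, x @ w["w1"]) @ w["w2"]               # feed-forward sublayer, residual again
    return x

blocks = [
    {"wq": rng.standard_normal((D_MODEL, D_MODEL)) * 0.02,
     "wk": rng.standard_normal((D_MODEL, D_MODEL)) * 0.02,
     "wv": rng.standard_normal((D_MODEL, D_MODEL)) * 0.02,
     "w1": rng.standard_normal((D_MODEL, 4 * D_MODEL)) * 0.02,
     "w2": rng.standard_normal((4 * D_MODEL, D_MODEL)) * 0.02}
    for _ in range(N_BLOCKS)
]

token_ids = [23, 1041, 387]        # pretend these are the token ids for "Quack"
x = embedding[token_ids]           # look up an embedding vector for each token

for w in blocks:                   # each block hands its output up to the next block
    x = decoder_block(x, w)

logits = x[-1] @ embedding.T       # project the last position back onto the vocabulary
probs = softmax(logits)            # one probability per token in the vocabulary
next_token = int(probs.argmax())   # greedy pick; real decoding often samples instead
print(next_token)
```

Generating a whole reply just repeats this loop: append the chosen token to the input and run the stack again until the model emits a stop token.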
The result varies depending on the engine, the temperature, and the logit biases that are fed in with the request.
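As a rough illustration of what those last two knobs do (this is not the actual API, just a sketch of the sampling step): temperature rescales the logits before the softmax, so low values make the model nearly deterministic and high values flatten the distribution, while a logit bias nudges specific token ids up or down before sampling.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, logit_bias=None, rng=np.random.default_rng()):
    """Illustrative only: how temperature and logit biases reshape the output distribution."""
    logits = np.asarray(logits, dtype=float).copy()
    if logit_bias:                                  # e.g. {token_id: bias}; positive favours, negative suppresses
        for token_id, bias in logit_bias.items():
            logits[token_id] += bias
    logits = logits / max(temperature, 1e-6)        # low temperature sharpens, high temperature flattens
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy distribution over a 5-token vocabulary
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
print(sample_next_token(logits, temperature=0.2))                       # almost always picks token 0
print(sample_next_token(logits, temperature=1.5, logit_bias={4: 6.0}))  # a big bias can force an unlikely token
```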
I recommend reading the following two links to get more insight into what happens internally, both written by the brilliant Jay Alammar.
https://jalammar.github.io/how-gpt3-works-visualizations-animations/
https://jalammar.github.io/illustrated-gpt2/