The natural language processing model I'm working with generates the first 10 to 20 characters sensibly, but it keeps returning the same output for the remaining ~180 characters:
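For reference, tokenizer and model_lm are loaded roughly as below. The checkpoint name here is a stand-in, not necessarily the one I use; I picked a CodeGen model because truncate_before_pattern (used in the decode call) is a CodeGen tokenizer feature:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint (assumption): any CodeGen-style model whose
# tokenizer supports truncate_before_pattern would fit this snippet.
checkpoint = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model_lm = AutoModelForCausalLM.from_pretrained(checkpoint).to(0)  # GPU 0, matching the .to(0) below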
text_prompt = 'I’ve used these features of Git for years and I have no idea why they are not used more often.'
input_tokens = tokenizer(text_prompt, return_tensors="pt").to(0)
result_sample = model_lm.generate(**input_tokens, max_length=200, top_k=0, temperature=0.2)
tokenizer.decode(result_sample[0], truncate_before_pattern=[r"\n\n^#","^''", '\n\n\n'])
output:
I’ve used these features of Git for years and I have no idea why they are not used more often. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for developers and a great tool for the user. I think they are a great tool for
How can I break this repetition loop? Or, failing that, can I make the repeated sentence appear only once in the returned output?
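For what it's worth, my current guess is that because do_sample isn't set, generate() decodes greedily and silently ignores top_k and temperature, which would explain the deterministic loop. A sketch of what I'd try, though I'm not sure it's the right fix:

result_sample = model_lm.generate(
    **input_tokens,
    max_length=200,
    do_sample=True,          # enable sampling so temperature/top_k actually take effect
    top_k=50,
    temperature=0.7,
    no_repeat_ngram_size=3,  # forbid any 3-gram from being generated twice
)

Is do_sample / no_repeat_ngram_size the right knob here, or is there a better approach?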