Currently, when using an LLMChain in LangChain, I can get the prompt template that was used and the response from the model, but is it possible to get the exact text that was sent to the model as the query, without having to fill in the prompt template manually myself?
An example:
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
llm = OpenAI(model_name="gpt-3.5-turbo-0613")
prompt = PromptTemplate(input_variables=["a", "b"], template="Hello {a} and {b}")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain({"a": "some text", "b": "some other text"})
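For reference, filling the template manually would look like the following (prompt.format is the standard PromptTemplate method); this is the step I want to avoid doing by hand:

filled = prompt.format(a="some text", b="some other text")
# 'Hello some text and some other text' -- but I want the string the chain itself sent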
I cannot find anything like this in the chain or result objects. I tried options such as return_final_only=True and include_run_info=True, but they don't include what I am looking for.
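One workaround that seems possible is capturing the prompt with a callback handler. Here is a rough sketch (the handler name is mine, and I am assuming on_llm_start receives the fully formatted prompt strings, as it does in the 0.0.x releases):

from langchain.callbacks.base import BaseCallbackHandler

class PromptCaptureHandler(BaseCallbackHandler):
    """Stores the exact prompt strings passed to the LLM."""
    def __init__(self):
        self.prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # `prompts` contains the fully formatted text sent to the model
        self.prompts.extend(prompts)

handler = PromptCaptureHandler()
result = chain({"a": "some text", "b": "some other text"}, callbacks=[handler])
print(handler.prompts)  # e.g. ['Hello some text and some other text']

This works, but I would prefer a built-in attribute on the chain or a field in the result, if one exists.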