
Currently, when using an LLMChain in LangChain, I can get the template prompt used and the response from the model, but is it possible to get the exact text sent as the query to the model, without having to fill in the prompt template manually?

An example:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(model_name="gpt-3.5-turbo-0613")
prompt = PromptTemplate(input_variables=["a", "b"], template="Hello {a} and {b}")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain({"a": "some text", "b": "some other text"})

I cannot find anything like this in the chain or result objects. I tried options such as return_final_only=True and include_run_info=True, but they don't include what I am looking for.

cserpell

2 Answers


Pass verbose=True to the LLMChain constructor:

chain = LLMChain(prompt=..., llm=..., verbose=True)

The problem is that it just prints the prompt to stdout.

I'm also looking for a way to get the exact prompt string that was used.
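
If printing isn't enough, one way to capture the prompt programmatically is a custom callback handler. A minimal sketch, assuming the BaseCallbackHandler interface from langchain.callbacks.base, whose on_llm_start hook receives the fully formatted prompt strings (PromptCaptureHandler is just an illustrative name):

from langchain.callbacks.base import BaseCallbackHandler

class PromptCaptureHandler(BaseCallbackHandler):
    """Stores every prompt string passed to the LLM."""

    def __init__(self):
        self.prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # prompts is the list of formatted strings handed to the model
        self.prompts.extend(prompts)

handler = PromptCaptureHandler()
# chain as defined in the question
result = chain({"a": "some text", "b": "some other text"}, callbacks=[handler])
print(handler.prompts)  # e.g. ['Hello some text and some other text']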

adoji

Here is the way to see that:

LLMChain(prompt=prompt, llm=llm).prompt.format_prompt(your_prompt_variables_here).to_string()
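
For example, with the template and variables from the question, this should produce the formatted string (a sketch that re-renders the prompt locally, not a capture of the actual API payload):

LLMChain(prompt=prompt, llm=llm).prompt.format_prompt(a="some text", b="some other text").to_string()
# -> 'Hello some text and some other text'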
  • That gets a string that should be similar to the one sent to the model, but how can I be sure that _that_ was the text sent in the API call? – cserpell Aug 23 '23 at 01:24