I'm working with LlamaIndex, and I need to extract the context_str that was used in a query before it was sent to the LLM (Large Language Model). Here's the relevant code:
# imports for the snippet (llama_index < 0.10, where ServiceContext is still available)
from llama_index import VectorStoreIndex
from IPython.display import Markdown, display

index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)
query_engine = index.as_query_engine()
response = query_engine.query("was July 2023 recorded as the hottest day on Earth?")
display(Markdown(f"<b>{response}</b>"))
The output of this code is as follows:
July 4, 2023 was recorded as the hottest day on Earth since at least 1979, according to data from the U.S. National Centers for Environmental Prediction. Some scientists believe it may have been one of the hottest days on Earth in about 125,000 years.
I understand that LlamaIndex sent a prompt similar to the following to the LLM:
prompt = (
    "We have provided context information below.\n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Do not give me an answer if it is not mentioned in the context as a fact.\n"
    "Given this information, please provide me with an answer to the following:\n"
    "{query_str}\n"
)
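For what it's worth, I can print the templates attached to the query engine with get_prompts() (available in the LlamaIndex versions I've tried), but that only shows the raw template with the {context_str} placeholder, not the filled-in value:

# Print the prompt templates used by the query engine.
# Note: this shows {context_str} unfilled, which is not what I'm after.
for name, tmpl in query_engine.get_prompts().items():
    print(name)
    print(tmpl.get_template())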
How can I extract the context_str from the prompt that was actually sent to the LLM to generate the response?
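My current guess is to rebuild it from the retrieved chunks, since the response object exposes them via response.source_nodes. The sketch below is an assumption on my part; I'm not certain the join separator matches exactly what LlamaIndex inserted into the prompt:

# Attempt to reconstruct context_str from the retrieved nodes.
# response.source_nodes holds NodeWithScore objects; .node.get_text()
# returns the chunk text that (I assume) was stuffed into the prompt.
context_str = "\n\n".join(
    node_with_score.node.get_text() for node_with_score in response.source_nodes
)
print(context_str)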