Hope you're having a great day. I would really appreciate it if anyone here has the time to help me understand memory in LangChain.
At a high level, what I want is to save the state of an entire conversation, including the prompts from a ChatPromptTemplate, to a JSON file on my own machine.
When I use conversation.memory.chat_memory.messages and messages_to_dict(extracted_messages), I'm only getting a subset of the conversation.
This is what I have so far, using some Nickelodeon prompting text as an example:
import json

from langchain import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
from langchain.schema import (
    messages_from_dict,
    messages_to_dict,
)
OPENAI_API_KEY = "sk-****"
story_context = "Who is the main character in The Fairly OddParents?"
prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are an expert on Nickelodeon cartoons."
        ),
        # This placeholder is where ConversationBufferMemory injects prior turns.
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template(
            'Below you will find a question about Nickelodeon cartoons delimited by triple quotes ("""). Answer the question in a consistent style, tone and voice.\n\n"""What is the name of the main character in the cartoon Spongebob Squarepants?"""'
        ),
        AIMessagePromptTemplate.from_template(
            "Spongebob Squarepants."
        ),
        HumanMessagePromptTemplate.from_template(
            'Now do the same for this snippet, following a consistent style, tone and voice.\n\n"""{text}"""'
        ),
    ]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    model="gpt-3.5-turbo",
    temperature=0,
    max_retries=5,
)
# Create the LLMChain.
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)
conversation.predict(text=story_context)
# Extract the messages stored in memory and convert them to a list of dicts.
extracted_messages = conversation.memory.chat_memory.messages
memory_dict = messages_to_dict(extracted_messages)
# Pretty-print the result as JSON.
print(json.dumps(memory_dict, indent=2))
The above code prints the following to my terminal:
[
  {
    "type": "human",
    "data": {
      "content": "Who is the main character in The Fairly OddParents?",
      "additional_kwargs": {},
      "example": false
    }
  },
  {
    "type": "ai",
    "data": {
      "content": "The main character in The Fairly OddParents is Timmy Turner.",
      "additional_kwargs": {},
      "example": false
    }
  }
]
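For what it's worth, round-tripping the subset I do get seems to work fine with messages_from_dict (which is why it's imported above):

restored_messages = messages_from_dict(memory_dict)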
But this isn't the entire contents of the conversation. I also want the system and human prompts that came from the ChatPromptTemplate ("You are an expert on Nickelodeon cartoons.", "Below you will find a question about Nickelodeon cartoons …").
I don't believe extracted_messages = conversation.memory.chat_memory.messages will get me to where I need to go, but I don't know any other way to go about this.
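The only idea I've come up with is a sketch like the one below (untested; I'm assuming prompt.format_messages will substitute the stored memory into the MessagesPlaceholder and fill in {text}, so I get the full message list back to serialize):

full_messages = prompt.format_messages(
    chat_history=conversation.memory.chat_memory.messages,
    text=story_context,
)
# Serialize everything, including the messages that came from the template.
with open("conversation.json", "w") as f:
    json.dump(messages_to_dict(full_messages), f, indent=2)

Is something like that the right direction, or is there a built-in way to persist the whole prompt-plus-memory state?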
Like I said, I would really appreciate any and all help on this. I feel like I'm going crazy trying to figure it out!