After defining a prompt template, building a chain with LangChain, setting up the Hugging Face feedback function from trulens_eval to score the response for toxicity, and finally passing a prompt through TruChain, the response I get back is incomplete: it is cut off mid-sentence.
Here is the code I tried:
from langchain import PromptTemplate
from langchain.chains import LLMChain
from langchain.prompts.chat import (ChatPromptTemplate, HumanMessagePromptTemplate)
from langchain import HuggingFaceHub
from langchain.chat_models import ChatOpenAI
from trulens_eval import TruChain
full_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="Please provide detailed helpful response with relevant background information for the following: {prompt}. Provide a complete paragraph of the response",
        input_variables=["prompt"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([full_prompt])
model = HuggingFaceHub(repo_id='tiiuae/falcon-7b-instruct', model_kwargs={"temperature": 0.5})
chain = LLMChain(llm=model, prompt=chat_prompt_template)
from trulens_eval import Feedback, Huggingface, Query
hugs = Huggingface()
f_toxicity = Feedback(hugs.not_toxic).on(text=Query.RecordOutput)
truchain = TruChain(chain, app_id="testapp_validation", feedbacks=[f_toxicity])
llm_response3 = truchain("What is Machine Learning and Artificial Intelligence")
display(llm_response3)
The output is as follows:
{'prompt': 'What is Machine Learning and Artificial Intelligence',
 'text': 'Machine learning is the process of learning to do things by analyzing data and incorporating it into'}
As you can see, the response stops mid-sentence instead of giving the complete paragraph the prompt asks for.
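One thing I suspect (this is an assumption, not something the output above confirms) is that HuggingFaceHub only generates a small default number of tokens, so the model stops early regardless of the prompt wording. The usual way to raise that limit is to pass a generation parameter such as max_new_tokens through model_kwargs; a minimal sketch of the change, with 500 as an arbitrary illustrative value:

```python
# Hypothetical fix sketch: raise the generation length limit via model_kwargs.
# "max_new_tokens" is a standard Hugging Face text-generation parameter;
# the value 500 is arbitrary and chosen only for illustration.
model_kwargs = {
    "temperature": 0.5,
    "max_new_tokens": 500,  # without this, the endpoint may fall back to a short default
}

# These kwargs would then be passed exactly as in the question, e.g.:
# model = HuggingFaceHub(repo_id='tiiuae/falcon-7b-instruct', model_kwargs=model_kwargs)
```

Is this the right knob to turn here, or does the truncation come from somewhere else in the LangChain/TruChain pipeline?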