
The problem I am facing: after defining the prompt template, creating a chain using LangChain, and defining the Huggingface evaluation module from trulens_eval to check the toxicity of the response, when I finally pass the prompt through the TruChain, the response I get back is incomplete, i.e. truncated.

Here is the code I tried:

```
from langchain import PromptTemplate
from langchain.chains import LLMChain
from langchain.prompts.chat import (ChatPromptTemplate, HumanMessagePromptTemplate)
from langchain import HuggingFaceHub
from langchain.chat_models import ChatOpenAI
from trulens_eval import TruChain

full_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="Please provide detailed helpful response with relevant background information for the following: {prompt}. Provide a complete paragraph of the response",
        input_variables=["prompt"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([full_prompt])
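# NOTE: only temperature is set below; the number of tokens to generate
# is left at the API's default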
model = HuggingFaceHub(repo_id='tiiuae/falcon-7b-instruct', model_kwargs={"temperature": 0.5})
chain = LLMChain(llm=model, prompt=chat_prompt_template)

from trulens_eval import Feedback, Huggingface, Query

hugs = Huggingface()
f_toxicity = Feedback(hugs.not_toxic).on(text=Query.RecordOutput)

truchain = TruChain(chain, app_id="testapp_validation", feedbacks=[f_toxicity])
llm_response3 = truchain("What is Machine Learning and Artificial Intelligence")
display(llm_response3)
```

Output is as follows:

```
{'prompt': 'What is Machine Learning and Artificial Intelligence',
 'text': 'Machine learning is the process of learning to do things by analyzing data and incorporating it into'}
```

As you can see, the response is truncated mid-sentence.

Comments:

  • How many tokens of response are you asking for? You generally don't want to rely on the defaults. – Charles Duffy Jul 23 '23 at 15:07
  • And use correct code formatting. It takes **three** backticks, on a line of their own, to start or end a multi-line code segment. – Charles Duffy Jul 23 '23 at 15:07
  • Might be unrelated but you'd want to check the default stop sequence. – doneforaiur Jul 23 '23 at 15:17
  • I am using the Huggingface API key token; I have defined my Huggingface API tokens before the start of the code. – RAUNAK GHOSH Jul 23 '23 at 15:31
  • If I need to mention the tokens, where should I mention them? – RAUNAK GHOSH Jul 23 '23 at 15:33
  • Any help on that? @Charles Duffy – RAUNAK GHOSH Jul 23 '23 at 22:20
  • I don't use LangChain, so I don't know the local equivalents; but generally how many tokens of new content to generate (after the part of the context window used by the input) is configurable. (This has nothing to do with API keys; it's "tokens" in the sense of tokenization.) Read the docs for the APIs you're using; it'll be in there somewhere. – Charles Duffy Jul 23 '23 at 23:22 (a sketch of this is shown after these comments)
  • Can you tell me an alternative to this? Apart from LangChain, what are the other ways we can do this? – RAUNAK GHOSH Jul 24 '23 at 06:57
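Following Charles Duffy's comments above, a minimal sketch of raising the generation length, assuming HuggingFaceHub forwards model_kwargs as text-generation parameters to the Hugging Face Inference API (max_new_tokens is that API's parameter for how many new tokens to generate; the value 250 is an arbitrary example):

```
from langchain import HuggingFaceHub

# Sketch only: the truncation is likely the Inference API's default cap on
# generated tokens. Raising max_new_tokens (assumed to be passed through via
# model_kwargs) asks for longer completions; 250 is an arbitrary example.
model = HuggingFaceHub(
    repo_id='tiiuae/falcon-7b-instruct',
    model_kwargs={
        "temperature": 0.5,
        "max_new_tokens": 250,  # raise the default generation-length cap
    },
)
# Per doneforaiur's comment, also check whether a default stop sequence is
# ending generation early; see the model/API docs for the exact parameter.
```

If this is the cause, the rest of the chain and the TruChain wrapper can be reused unchanged.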

0 Answers