
Please help me with this issue. I've been trying to solve it for the last few days without any success. I am calling the LLM via LangChain:

Calling OpenAI via LangChain:

response = llm.call_as_llm(f"{qdocs} Question: Please list all your shirts with sun protection in a table in markdown and summarize each one.")
display(Markdown(response))
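For context, llm and qdocs come from the earlier cells of the notebook; my rough reconstruction of that setup is below (based on the lesson, so the names and the query string may differ slightly from my exact copy):

# Rough reconstruction of the earlier notebook cells (lesson names assumed).
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import CSVLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

# Build the in-memory vector store over the product catalog.
loader = CSVLoader(file_path="OutdoorClothingCatalog_1000.csv")
db = DocArrayInMemorySearch.from_documents(loader.load(), OpenAIEmbeddings())

# Retrieve the documents whose contents are concatenated into qdocs.
docs = db.similarity_search("Please suggest a shirt with sunblocking")
qdocs = "".join([docs[i].page_content for i in range(len(docs))])

llm = ChatOpenAI(temperature=0.0)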

The llm.call_as_llm(...) call takes about 5 minutes to run and, as you can see, no result gets displayed as Markdown. However, when I send the same request through OpenAI directly, everything works fine, as shown below.

Calling OpenAI directly:

chat_completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"{qdocs} Question: Please list all your shirts with sun protection in a table in markdown and summarize each one."}],
)
display(Markdown(chat_completion.choices[0].message.content))
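To try to narrow down where the time goes, below is a minimal sketch of how I plan to instrument the LangChain call. The langchain.debug flag and the request_timeout / max_retries parameters are my assumptions from the LangChain documentation, not code from the course, so adjust them if your version differs.

import langchain
from langchain.chat_models import ChatOpenAI

langchain.debug = True  # log each LLM call with its inputs, outputs and elapsed time

llm = ChatOpenAI(
    temperature=0.0,
    request_timeout=60,  # fail after 60 s instead of hanging for minutes
    max_retries=1,       # surface rate-limit or network errors quickly
)

response = llm.call_as_llm(
    f"{qdocs} Question: Please list all your shirts with sun protection "
    "in a table in markdown and summarize each one."
)
print(response)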

What's the issue here? Why can't I use LangChain and its chains for this task? I am not getting any errors; the call just takes a long time and eventually fails.
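By "its chains" I mean the RetrievalQA approach from the same lesson, which as I understand it looks roughly like this (a sketch, not the exact course code; db is the vector store built earlier in the notebook):

from IPython.display import Markdown, display
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0.0)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",           # stuff all retrieved documents into one prompt
    retriever=db.as_retriever(),  # assumes `db` from the earlier cells
    verbose=True,
)

response = qa.run(
    "Please list all your shirts with sun protection in a table "
    "in markdown and summarize each one."
)
display(Markdown(response))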

I encountered this issue in the LangChain for LLM Application Development course, in the Question and Answer lesson here: https://learn.deeplearning.ai/langchain/lesson/5/question-and-answer
