Probably the LLM stopped following instructions and printed output that LangChain couldn't parse

At a glance

The post discusses an issue with the LangChain library: the LLM (large language model) stopped following the agent's instructions and produced output that LangChain couldn't parse. Community members note that this is a common error with LangChain and that the parsing code for the specific agent is brittle. The main options suggested are to make a pull request that hardens the parsing or to improve the tool instructions in the agent prompt.

In the comments, the community members identify the LLM being used as ChatOpenAI with the gpt-3.5-turbo model. They express frustration with its recent performance, suggesting that it has been downgraded. Some community members propose using the text-davinci-003 model (at roughly 10x the cost) or waiting for open-source alternatives such as LLaMA and RedPajama. They also discuss the Camel model from Hugging Face as an option that is both open source and commercially usable, though it requires significant GPU resources to run.

Useful resources
Probably the LLM stopped following instructions and printed some output that langchain couldn't parse

Pretty common error with langchain tbh. The parsing code for that specific agent is here: https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py

Langchain at some point probably needs less-brittle parsing. Not much to do about it besides making a PR or maybe improving the tool instructions:
https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/prompt.py
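
If patching things locally is an option, one possible workaround (a minimal sketch, assuming the langchain version linked above; class and exception names may differ between releases, and this is not built-in behaviour) is to subclass the chat agent's output parser and fall back to returning the raw text as a final answer instead of raising:

```python
# Hypothetical workaround sketch: wrap the stock ChatOutputParser and
# swallow parse failures instead of crashing the agent loop.
from langchain.agents.chat.output_parser import ChatOutputParser
from langchain.schema import AgentFinish


class LenientChatOutputParser(ChatOutputParser):
    def parse(self, text: str):
        try:
            # The stock parser expects an "action"/"action_input" JSON block.
            return super().parse(text)
        except Exception:
            # Older releases raise ValueError, newer ones OutputParserException.
            # Return the raw LLM text as the final answer instead of failing.
            return AgentFinish({"output": text.strip()}, text)
```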
13 comments
yeah, it sucks... I asked 5 questions and all of them failed with this exception. I'm really struggling to ask cross-index questions.

When a question requires only one index, it works fine, but when it requires multiple indices, it always fails :/
which LLM are you using?
with gpt-3.5 I'm assuming? Yea, that model is... not great lately haha. I honestly think they downgraded it.
(conspiracy theory to make everyone using text-davinci-003 or gpt-4 LOL)
@Logan M hahaha. You mean by using OpenAI(model_name="text-davinci-003"), I can overcome the issue?
maybe! But it will cost a bit more 😦 ... 10x more actually πŸ’Έ
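
For reference, a minimal sketch of the two options being compared here (using the langchain OpenAI wrappers; the temperature value is just an illustrative choice):

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Chat model the thread is currently using (cheaper, but hits the parsing issue above).
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Completion model suggested as a workaround (~10x the per-token cost).
davinci_llm = OpenAI(model_name="text-davinci-003", temperature=0)
```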
Yeah with gpt-3.5-turbo. Btw, nothing turbo about it lol. It is slow as hell πŸ™‚
@Logan M really waiting for Llama model to become open-source or this model to be published. So we could get rid of openAI addiction πŸ˜–

https://www.nextbigfuture.com/2023/04/red-pajama-is-a-1-2-trillion-token-large-language-model.html
so many people making open-source models, but yea most of them are non-commercial 😦

I did actually have some good experience with this one (actually open source/commercial!), assuming you have enough GPU to run it:
https://huggingface.co/Writer/camel-5b-hf
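
If anyone wants to try it, a rough sketch of wiring that checkpoint into langchain (assumes transformers + accelerate are installed and there is enough VRAM; the generation settings are illustrative, not tuned):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = "Writer/camel-5b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the local model as a text-generation pipeline langchain can call.
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
)
local_llm = HuggingFacePipeline(pipeline=generator)
```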
sheeesh! Gonna try that out definitely! I was quite impressed by Vicuna tho, but as you wrote, it is non-commercial πŸ‘
@Logan M wew! Camel goes brrr πŸ’―
(Attachment: image.png)