The post discusses an issue with the LangChain library where the LLM (large language model) stopped following instructions and produced output that LangChain couldn't parse. Community members note that this is a common LangChain error and that the parsing code for the specific agent could be made less brittle. They suggest the only real options are to submit a pull request or to improve the tool instructions.
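One way to keep the agent from crashing on unparseable output is LangChain's `handle_parsing_errors` option. The sketch below is not from the post; it assumes a LangChain version (pre-0.1 package layout) that supports that option on `initialize_agent`, and the `echo` tool is a hypothetical placeholder.

```python
from langchain.agents import Tool, initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Placeholder tool so the agent has something to call; swap in real tools.
tools = [
    Tool(
        name="echo",
        func=lambda text: text,
        description="Returns its input unchanged.",
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    # Instead of raising OutputParserException when the model strays from the
    # expected Thought/Action format, feed the error back and let it retry.
    handle_parsing_errors=True,
)
```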
In the comments, community members identify the LLM in use as ChatOpenAI with GPT-3.5 and express frustration with its performance, suspecting the model has been quietly downgraded. Suggested alternatives include switching to text-davinci-003 or waiting for the open-source LLaMA model to become usable. They also discuss the Camel model on Hugging Face as a promising option that is both open source and commercially usable, though it requires significant GPU resources to run.
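For context, swapping models in LangChain amounts to instantiating a different wrapper; the snippet below is an illustrative assumption based on the pre-0.1 `langchain` import paths that were current when both models were available.

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# The chat model the thread complains about.
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# The completion model some commenters suggest falling back to.
davinci_llm = OpenAI(model_name="text-davinci-003", temperature=0)
```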
so many people making open-source models, but yeah most of them are non-commercial
I did have some good experience with this one (actually open source/commercial!), assuming you have enough GPU to run it: https://huggingface.co/Writer/camel-5b-hf
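As a reference point, here is a minimal sketch of trying that model locally with the Hugging Face transformers library. The prompt and generation settings are placeholders, and it assumes a GPU with enough memory for a ~5B-parameter model plus the `accelerate` package for `device_map="auto"`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Writer/camel-5b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Explain what an output parser does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```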