
Updated 10 months ago

I use OpenAILike as my LLM, so I'm thinking…

At a glance

The community member is using OpenAILike as their LLM, with vLLM as the inference server. The model was repeating the context and going back and forth with itself. Updating vLLM fixed a separate problem with special characters, and the comments suggest setting is_chat_model=True in the LLM: with that flag set, vLLM handles the prompt formatting, which resolved the issue.

I use OpenAILike as the LLM, so I'm thinking that's why, but I'm not sure what other LLM to use since I use vLLM as my inference server
6 comments
Did you set is_chat_model=True?
nvm you fixed it haha
Yeah, I fixed the special characters by updating vLLM, but I still have an issue with it repeating the context and going back and forth with itself.
I read somewhere that you mentioned it could be the prompt; I haven't been successful in changing that with my chat engine, if that helps.
From my understanding, if you set is_chat_model=True in the LLM, vLLM will handle the prompt formatting for you
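For reference, a minimal sketch of that setup (the endpoint URL and model name below are assumptions, not from the thread):

```python
# Minimal sketch: OpenAILike pointed at a vLLM OpenAI-compatible server.
# Assumptions: vLLM is serving at http://localhost:8000/v1, and the model
# name is a placeholder -- use whatever name vLLM was launched with.
from llama_index.llms.openai_like import OpenAILike
from llama_index.core.llms import ChatMessage

llm = OpenAILike(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model name
    api_base="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="fake",  # vLLM does not check the key by default
    is_chat_model=True,  # send chat messages; the server applies the chat template
)

# With is_chat_model=True, chat() hits /chat/completions, so prompt
# formatting (roles, special tokens) is handled server-side by vLLM.
response = llm.chat([ChatMessage(role="user", content="Hello!")])
print(response)
```

Without is_chat_model=True, OpenAILike sends a plain prompt string to the completions endpoint, so the model never sees its expected chat template, which is consistent with the echoed-context, talking-to-itself behavior described above.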
Oh wow, it seems that did it ❀️ thank you so much once again πŸ˜„