Hi team, I am working on a RAG pipeline to chat with my documents. I loaded "Mistral-7B-Instruct-v0.1-GGUF" through LlamaCPP under llama_index.llms. I am using llama_index's VectorStoreIndex to store vectors and GTE (thenlper/gte-large) for text embeddings.

For every query I get a response like "########.....". I built the query engine by calling as_query_engine on the returned VectorStoreIndex.

FYI, two days ago I was still getting proper output. Any idea?
2 comments
Have you made any changes to the instructions, like adding an ending or starting sequence?
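On the start/end sequence point: Mistral-7B-Instruct expects prompts wrapped in its [INST] ... [/INST] template, and if the template is missing or malformed the model can emit garbage. A minimal sketch of what a correct prompt should look like (the helper name is hypothetical; the [INST]/[/INST] markers are Mistral-Instruct's documented format):

```python
def mistral_instruct_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message in the Mistral-Instruct chat template.

    Hypothetical helper for illustration; in llama_index you would pass
    something like this via the LLM's messages_to_prompt hook.
    """
    body = f"{system_prompt}\n\n{user_message}".strip()
    # <s> is the BOS token; [INST]/[/INST] delimit the instruction.
    return f"<s>[INST] {body} [/INST]"

print(mistral_instruct_prompt("Summarize my documents."))
# <s>[INST] Summarize my documents. [/INST]
```

If the prompts reaching llama.cpp don't look like this, that mismatch alone can explain degenerate output like repeated "#" characters.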
I've seen some people had to downgrade their llama-cpp-python version; it might be buggy in the latest releases.