vinayak.pevekar
Joined September 25, 2024
Hi team, I am working on a RAG pipeline to chat with my documents. I loaded "Mistral-7B-Instruct-v0.1-GGUF" through LlamaCPP under llama_index.llms. I am using llama_index's VectorStoreIndex to store the vectors and GTE (thenlper/gte-large) for text embeddings.

For every query I get a response consisting only of "########....." characters. I am querying through the engine returned by calling as_query_engine on the VectorStoreIndex.

FYI, until 2 days ago this same setup produced normal output. Any idea?
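For reference, here is a minimal sketch of the setup described above. The model path, data directory, and generation settings are placeholders, and the import paths assume the pre-0.10 llama_index layout implied by "LlamaCPP under llama_index.llms":

```python
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.llms import LlamaCPP
from llama_index.embeddings import HuggingFaceEmbedding

# Local GGUF model file -- path is a placeholder for wherever it was downloaded.
llm = LlamaCPP(
    model_path="./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
)

# GTE embedding model from Hugging Face, as mentioned in the post.
embed_model = HuggingFaceEmbedding(model_name="thenlper/gte-large")

# Wire the local LLM and embedding model into the index.
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Query through the engine returned by as_query_engine.
query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))
```

This sketch only reproduces the described pipeline; it requires the GGUF file and documents to exist locally and is not runnable as-is.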
2 comments