Find answers from the community

Shayan
How can I make a BasicChatEngine from the retriever of the auto-retriever example mentioned here: https://docs.llamaindex.ai/en/stable/examples/vector_stores/pinecone_auto_retriever/
1 comment
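One way to approach this (a sketch, not the thread's accepted answer): LlamaIndex's SimpleChatEngine takes no retriever, so "chat over the auto-retriever" usually means a context-style chat engine. Assuming the index and metadata schema from the linked notebook, something like the following should work; the toy documents, metadata fields, and memory settings below are placeholders.

```python
# Minimal sketch: plug a VectorIndexAutoRetriever into a ContextChatEngine.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine import ContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores import MetadataInfo, VectorStoreInfo

# Stand-in for the index built in the linked Pinecone example.
index = VectorStoreIndex.from_documents(
    [
        Document(text="Michael Jordan is a retired basketball player.",
                 metadata={"category": "Sports", "country": "United States"}),
        Document(text="Rihanna is a singer born in Barbados.",
                 metadata={"category": "Entertainment", "country": "Barbados"}),
    ]
)

# The auto-retriever infers metadata filters from the query using this schema.
vector_store_info = VectorStoreInfo(
    content_info="brief biographies of celebrities",
    metadata_info=[
        MetadataInfo(name="category", type="str", description="Category of the celebrity"),
        MetadataInfo(name="country", type="str", description="Country of the celebrity"),
    ],
)
auto_retriever = VectorIndexAutoRetriever(index, vector_store_info=vector_store_info)

# Any BaseRetriever can back a context chat engine, so the auto-retriever drops in directly.
chat_engine = ContextChatEngine.from_defaults(
    retriever=auto_retriever,
    memory=ChatMemoryBuffer.from_defaults(token_limit=3000),
)
print(chat_engine.chat("Tell me about a celebrity from the United States."))
```

CondensePlusContextChatEngine.from_defaults accepts a retriever the same way if query condensation is wanted.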
Where would I need to check or make modifications in the BasicChatEngine Python backend generated by the create-llama starter (GitHub - run-llama/create-llama: The easiest way to get started wit...) in order to incorporate an embedding's metadata field stored in a vector store?
2 comments
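A hedged sketch of the two usual touch points, not taken from the create-llama template itself (the "category" key and the toy document are hypothetical): the place where the backend builds its chat engine from the loaded index, and the place where it shapes the response.

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine import ContextChatEngine
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# Stand-in for the index the generated backend loads from its vector store.
index = VectorStoreIndex.from_documents(
    [Document(text="Q3 revenue grew 12%.", metadata={"category": "finance"})]
)

# 1. Use the metadata field at retrieval time: build the retriever explicitly
#    (instead of calling index.as_chat_engine()) and attach a metadata filter.
retriever = index.as_retriever(
    similarity_top_k=3,
    filters=MetadataFilters(filters=[ExactMatchFilter(key="category", value="finance")]),
)
chat_engine = ContextChatEngine.from_defaults(retriever=retriever)

# 2. Use the metadata field at response time: it travels with the source nodes.
response = chat_engine.chat("What happened to revenue?")
for source in response.source_nodes:
    print(source.node.metadata.get("category"))
```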
I'm using MongoDB as a vector store, and the LlamaIndex backend can find information when I have only a few embedded documents. But when the number of embeddings increases, the backend can no longer find anything, not even the same information it was able to retrieve before the vector store grew. Any advice would be much appreciated.
19 comments
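General debugging advice rather than the thread's resolution: with MongoDB Atlas, symptoms like this often trace back to the Atlas vector search index being missing, misnamed, or still building, or to the default similarity_top_k being too small once the collection grows. A useful first step is to look at raw retrieval scores directly, which separates a retrieval problem from an LLM problem. The sketch below assumes a MongoDBAtlasVectorSearch instance already configured as vector_store; its constructor arguments are omitted because their names vary across package versions.

```python
from llama_index.core import VectorStoreIndex

# Rebuild an index handle on top of the existing collection (no re-embedding).
index = VectorStoreIndex.from_vector_store(vector_store)

# Ask for more candidates than the default and inspect what actually comes back.
retriever = index.as_retriever(similarity_top_k=10)
for result in retriever.retrieve("a query that used to return results"):
    print(result.score, result.node.get_content()[:80])
```

If this prints nothing even though the documents are in the collection, the issue is on the vector-search side (index definition, embedding dimension, field names) rather than in the chat layer.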
I have a private dataset in a vector search database which I can query fine using a chat engine in "context" mode, but I get hallucinations when the chat engine is switched to "best" or "ReAct" mode. Any advice on how to address this?
2 comments
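A general note rather than the thread's answer: in "context" mode every turn is grounded in retrieved text, while "best" and "react" wrap the index in a tool for an agent, which may decide to answer from the LLM's own knowledge instead of calling the tool. One commonly suggested mitigation is to make the tool description explicit and instruct the agent to rely on it. The sketch below assumes a llama-index version that still ships ReActAgent.from_tools, and the tool name, description, and instructions are hypothetical.

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool

# `index` is assumed to be the existing VectorStoreIndex over the private dataset.
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(similarity_top_k=5),
    name="private_docs",
    description=(
        "Answers questions about the private dataset. "
        "Use this tool for every factual question."
    ),
)

agent = ReActAgent.from_tools(
    [query_tool],
    context=(
        "Answer only from the private_docs tool. "
        "If the tool returns nothing relevant, say you don't know."
    ),
    verbose=True,
)
print(agent.chat("Ask something that lives in the private dataset."))
```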