Where would I need to check or modify in the BasicChatEngine Python backend generated by the create-llama starter (GitHub: run-llama/create-llama) in order to incorporate an embedding's metadata field stored in a vector store?
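For reference, here is a minimal sketch of where such a metadata filter could plug into the generated backend, assuming llama-index >= 0.10 and a create-llama layout where the chat engine is built in a single factory function; the `get_chat_engine` name and the `author` metadata key are hypothetical placeholders:

```python
from llama_index.core.chat_engine import ContextChatEngine
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

def get_chat_engine(index, llm):
    # Restrict retrieval to nodes whose stored metadata matches the filter;
    # "author" stands in for whatever metadata field you embedded.
    filters = MetadataFilters(
        filters=[ExactMatchFilter(key="author", value="jane-doe")]
    )
    retriever = index.as_retriever(filters=filters, similarity_top_k=3)
    return ContextChatEngine.from_defaults(retriever=retriever, llm=llm)
```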
I'm using MongoDB as the vector store, and the LlamaIndex backend can find information when I have only a few embedded documents. But as the number of embeddings grows, the backend can no longer find anything, not even the same information it was able to retrieve before the vector store grew. Any advice would be much appreciated.
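To help narrow this down, here is a sketch of how the raw retrieval step can be inspected directly, bypassing the chat layer, assuming the standard LlamaIndex retriever API; the query string is just an example:

```python
# Check what the vector store actually returns for a query that used to
# work; empty or low-scoring results point at the store or its search
# index rather than at the LLM layer.
retriever = index.as_retriever(similarity_top_k=10)
for hit in retriever.retrieve("a question that worked with fewer documents"):
    print(hit.score, hit.node.metadata, hit.node.get_content()[:80])
```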
I have a private dataset in a vector search database that I can query fine using a chat engine in "context" mode, but I get hallucinations when the chat engine is switched to "best" or "ReAct" mode. Any advice on how to address this?
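For context, a minimal sketch of the two setups being compared, assuming the standard `as_chat_engine` factory. In "context" mode every message retrieves from the index before answering, while the agent modes wrap the index in a query tool that the LLM may or may not decide to call, which is one place ungrounded answers can slip in:

```python
# Always retrieves context from the index before responding.
context_engine = index.as_chat_engine(chat_mode="context")

# Agent-style mode: the index becomes a tool the LLM chooses to invoke.
react_engine = index.as_chat_engine(chat_mode="react", verbose=True)
```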