Hi, I am using VectorStoreIndex to build indexes, then using Azure OpenAI for Q&A. When I ask a question like "What is auditor reports?", different source nodes are returned, but when I change just one word, like "What is auditors reports?", I get the correct answer. Changing a single word gives me different nodes. How can I solve this issue?
@theOldPhilosopher the pre-processing can be done before sending the query to the index. This works well for a keyword index but may not work as well for a vector index. Hence, a hybrid approach works much better here.
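To make the hybrid idea concrete, here is a small, dependency-free sketch that blends a keyword-overlap score with a stand-in for vector similarity. Everything here (the scoring functions, the weighting, the sample docs) is illustrative, not llama_index's or Qdrant's actual implementation; real systems would use BM25 and embedding cosine similarity.

```python
# Toy hybrid retrieval: blend a keyword-overlap score with a
# stand-in "vector" similarity score. The trigram score is tolerant
# of small wording changes (e.g. "auditor" vs "auditors"), which is
# exactly where exact keyword matching breaks down.

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query words that appear verbatim in the document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def vector_score(query: str, doc: str) -> float:
    """Stand-in for embedding cosine similarity: character-trigram
    Jaccard overlap, which degrades gracefully under small edits."""
    def trigrams(s: str) -> set:
        s = s.lower()
        return {s[i:i + 3] for i in range(len(s) - 2)}
    q, d = trigrams(query), trigrams(doc)
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    """Weighted blend of the two signals; alpha is a tuning knob."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * vector_score(query, doc)

docs = [
    "The auditor's report summarizes the audit findings.",
    "The weather report predicts rain tomorrow.",
]

# Both phrasings rank the audit document first under the blended score.
for query in ["What is auditor reports?", "What is auditors reports?"]:
    best = max(docs, key=lambda d: hybrid_score(query, d))
    print(query, "->", best)
```

The point of the blend is robustness: either signal alone can be brittle (keywords to morphology, vectors to rare exact terms), but a weighted combination tends to rank the right document first under both phrasings.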
Hi @Logan M, I want to properly understand how llama_index works. What's the difference between langchain and llama_index? Why should I use llama_index? Actually, I am not clear on why I should use llama_index instead of langchain. Semantic search is in Qdrant too. So, if possible, can you help me clear up this doubt? It would be a great help. Thanks
Qdrant (and also langchain) don't allow for more complex query structures. With llama_index, you can use query engines on top of your index, like the sub question query engine and the router query engine.
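As a rough sketch of what a router query engine does, here is a toy, dependency-free version. All names and the keyword-based selector are illustrative only; llama_index's real RouterQueryEngine uses an LLM (or embedding) selector to choose among query engine tools based on their descriptions.

```python
# Toy "router query engine": pick the right sub-engine for a query
# based on its content. This keyword-overlap chooser is only a sketch
# of the control flow, not llama_index's actual selection logic.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QueryEngineTool:
    name: str
    description: str                 # the selector matches against this
    engine: Callable[[str], str]     # the sub-engine to run if chosen

def summary_engine(query: str) -> str:
    return f"[summary engine] high-level answer to: {query}"

def detail_engine(query: str) -> str:
    return f"[detail engine] fact lookup for: {query}"

class ToyRouterQueryEngine:
    def __init__(self, tools: List[QueryEngineTool]):
        self.tools = tools

    def _select(self, query: str) -> QueryEngineTool:
        # Crude selector: pick the tool whose description shares the
        # most words with the query; an LLM selector would do this better.
        words = set(query.lower().split())
        return max(self.tools,
                   key=lambda t: len(words & set(t.description.lower().split())))

    def query(self, query: str) -> str:
        return self._select(query).engine(query)

router = ToyRouterQueryEngine([
    QueryEngineTool("summary", "summarize give an overview of the document",
                    summary_engine),
    QueryEngineTool("detail", "look up a specific fact detail or number",
                    detail_engine),
])

print(router.query("summarize the document for me"))
print(router.query("look up the revenue number"))
```

A sub question query engine follows the same pattern, except that instead of picking one tool, it decomposes the query into sub-questions, sends each to the relevant tool, and synthesizes the partial answers into one response.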
Furthermore, these query engines can all be used in agents (which we have some more news on later today).
Compared to langchain, I'd say llamaindex is more customizable. Retrievers, node postprocessors, and response synthesizers all come together to form a query engine, and you can customize each piece.
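That composition can be sketched in plain Python with toy stand-ins for each piece. The class and function names below mirror the concepts (retriever, node postprocessor, response synthesizer) but are not llama_index's actual APIs; in particular, the synthesizer just concatenates context instead of calling an LLM.

```python
# Toy query engine assembled from three swappable components,
# mirroring the retriever -> node postprocessor -> response
# synthesizer pipeline. All components are illustrative stand-ins.

from typing import Callable, Dict, List

Node = Dict  # a retrieved chunk: {"text": ..., "score": ...}

def simple_retriever(corpus: List[str]) -> Callable[[str], List[Node]]:
    """Score each document by word overlap with the query."""
    def retrieve(query: str) -> List[Node]:
        q = set(query.lower().split())
        nodes = [{"text": d, "score": len(q & set(d.lower().split()))}
                 for d in corpus]
        return sorted(nodes, key=lambda n: n["score"], reverse=True)
    return retrieve

def similarity_cutoff_postprocessor(min_score: float):
    """Drop weakly matching nodes, like a similarity-cutoff postprocessor."""
    return lambda nodes: [n for n in nodes if n["score"] >= min_score]

def concat_synthesizer(query: str, nodes: List[Node]) -> str:
    """Stand-in for an LLM call: stitch the kept context together."""
    context = " | ".join(n["text"] for n in nodes)
    return f"Q: {query} / context: {context}"

class ToyQueryEngine:
    def __init__(self, retriever, postprocessor, synthesizer):
        self.retriever = retriever
        self.postprocessor = postprocessor
        self.synthesizer = synthesizer

    def query(self, q: str) -> str:
        nodes = self.postprocessor(self.retriever(q))
        return self.synthesizer(q, nodes)

engine = ToyQueryEngine(
    simple_retriever(["auditor reports explain audit results",
                      "rainfall data for 2023"]),
    similarity_cutoff_postprocessor(min_score=1),
    concat_synthesizer,
)
print(engine.query("what are auditor reports"))
```

Because each stage is just a callable, you can swap in a different retriever, tighten the cutoff, or change how the answer is synthesized without touching the rest of the pipeline, which is the customizability being described above.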
Okay, so in essence llama_index is more customizable than langchain. And especially when querying, we can do a lot more with llama_index compared to others.