Hi
I have exactly the same problem as @intvijay
I created a LlamaIndex tool that I provided to LangChain through create_llama_chat_agent / initialize_agent (similar to
https://github.com/jerryjliu/llama_index/blob/main/examples/chatbot/Chatbot_SEC.ipynb); it works well for retrieving information through the tool.
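For context, here is a minimal sketch of that setup (the index type, the `./data` path, and the tool name/description are my assumptions, not the notebook's exact code; the index constructor may differ depending on your llama_index version):

```python
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Build an index over local documents ("./data" is a placeholder path).
documents = SimpleDirectoryReader("./data").load_data()
index = GPTSimpleVectorIndex.from_documents(documents)

# Expose the index to LangChain as a tool, as in the Chatbot_SEC notebook.
tools = [
    Tool(
        name="Docs Index",
        func=lambda q: str(index.query(q)),
        description="Useful for answering questions about the indexed documents.",
    ),
]

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent="chat-conversational-react-description",
    memory=memory,
    verbose=True,
)
```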
The problem is that when the agent doesn't find the information in the tool/index, it just falls back on its internal knowledge and answers anyway, instead of saying that it doesn't know.
When using only LlamaIndex, asking a question for which no information is found in the indices gets an answer along the lines of

> Based on the context information provided, there is no information available.

while with LangChain it still responds with something (and possibly hallucinates).
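To illustrate the difference, using the sketch above (the question is a made-up example, and the exact wording of the responses varies by model):

```python
# Direct LlamaIndex query: the response stays grounded in the retrieved context.
print(index.query("Who was the CEO in 1950?"))
# -> "Based on the context information provided, there is no information available."

# Same question through the agent: the tool returns nothing useful, but the
# LLM falls back on its training data and still produces a confident answer.
print(agent.run("Who was the CEO in 1950?"))
```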
I'm trying to get reliable information out of the indices, but with LangChain on top of the LlamaIndex indices it is currently possible to get a wrong answer (one that may look quite plausible if you're not an expert on the subject).
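One workaround I've been considering (an assumption on my part, not something confirmed in this thread) is setting `return_direct=True` on the tool, so the agent hands the index's answer back verbatim, including its "no information available" response, instead of re-synthesizing one:

```python
from langchain.agents import Tool

tools = [
    Tool(
        name="Docs Index",
        func=lambda q: str(index.query(q)),
        description="Useful for answering questions about the indexed documents.",
        return_direct=True,  # return the index's answer as-is, no re-synthesis
    ),
]
```

This only helps when the agent actually routes the question to the tool, though; if it decides to answer without calling the tool, it can still hallucinate.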