Hello! Like many others, I'm trying to build a "chatbot" to query a local data source, but I'm a little lost as to which approach is best with LlamaIndex. I've tried the retriever and citation query engines with text-davinci-003 and they work quite well, except that I don't get the chat experience since the engine doesn't remember the conversation history. So I tried a chat engine (index.as_chat_engine), but then I don't get any sources. Does anyone have any suggestions?
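For context, roughly what I have working so far. This is just a minimal sketch, assuming the pre-v0.10 `llama_index` import layout and a local `data/` folder of documents:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Build an index over local documents (paths/names here are assumptions).
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about X?")

print(response.response)               # synthesized answer
for node in response.source_nodes:     # sources are available here...
    print(node.node.get_text()[:100])
# ...but each query() call is stateless, so there's no conversation history.
```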
@Senna Yes, actually with chat_mode='condense_question' I do get content in response.source_nodes. When I use chat_mode='react', response.source_nodes is just an empty list. However, the synthesized answer is better in react mode.
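For anyone following along, a rough sketch of the comparison (assuming `index` is a VectorStoreIndex built as in the earlier snippet):

```python
# condense_question mode: sources come through on the response.
chat_engine = index.as_chat_engine(chat_mode="condense_question")
response = chat_engine.chat("What does the document say about X?")
print(response.response)
print(len(response.source_nodes))        # populated in this mode

# react mode: better synthesized answers in my testing, but...
react_engine = index.as_chat_engine(chat_mode="react")
react_response = react_engine.chat("What does the document say about X?")
print(len(react_response.source_nodes))  # ...this comes back as an empty list
```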
You could try not passing the chat history when creating the default engine instance. And if the conversations belong to different users and you don't want one user's history to affect another's, you'll have to create a new engine instance per user; see the sketch below.
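Something like this. Note that `engines` and `get_or_create_engine` are hypothetical helpers I'm making up for illustration, not part of the llama_index API, and `index` is again assumed from the earlier snippet:

```python
# One chat engine per user so conversation histories don't mix.
engines = {}

def get_or_create_engine(user_id: str):
    if user_id not in engines:
        # A fresh engine means a fresh chat history for this user.
        engines[user_id] = index.as_chat_engine(chat_mode="condense_question")
    return engines[user_id]

response = get_or_create_engine("alice").chat("Hi, what's in the docs?")
print(response.response)
```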
I may be misunderstanding your question, lol. If so, please tell me again!
LlamaIndex uses LangChain under the hood for react mode, last I checked the code. Maybe that's why you aren't getting anything there. I haven't looked closely at react mode myself; I'll have to look into it.
@WhiteFang_Jr I think you may be right that the issue lies in the LangChain agent integration. Maybe the source nodes get lost somewhere along the way.