Hello everyone, I am trying to integrate LlamaIndex with LangChain. What is the best way to do that? I have seen an implementation in which LlamaIndex is used as a tool in a LangChain agent:

```python
from langchain.agents import Tool

tools = [
    Tool(
        name="GPT Index",
        func=lambda q: str(index.query(q)),
        description="useful for when you want to answer questions about the author. The input to this tool should be a complete english sentence.",
        return_direct=True,
    ),
]
```
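For reference, the `index` above would be built something like this (just a sketch, assuming an older llama_index release where the constructor takes documents directly and the index exposes `.query()`, matching the snippet above; "data" is a placeholder directory):

```python
# Sketch only: builds the `index` queried by the tool above.
# Assumes an older llama_index API where the index exposes .query()
# directly; "data" is a placeholder directory of documents.
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents)
```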
Can I use that even if my index is a composable graph?
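e.g., wrapping the graph the same way (just a sketch; `graph` stands for an already-built ComposableGraph, and the tool name/description are made up):

```python
from langchain.agents import Tool

# Sketch: the same pattern, but querying a ComposableGraph.
# `graph` is assumed to be an already-built ComposableGraph; in the
# older API it exposes .query() just like a plain index.
tools = [
    Tool(
        name="Graph Index",  # hypothetical name
        func=lambda q: str(graph.query(q)),
        description="useful for questions that span several documents.",
        return_direct=True,
    ),
]
```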
Oh, I see. But I mean token size, meaning the maximum number of tokens the LLM allows is limited. In normal LlamaIndex, we only give it the node and the query at a time, I assume. But when integrating with LangChain, we also give the model the history. Does that not exceed the token limit?
I have implemented ConversationBufferMemory:

```python
from langchain.agents import initialize_agent

agent_chain = initialize_agent(
    tools, llm, agent="conversational-react-description", memory=memory
)
```

When I have a long history, I get no response and no error. I suspect this is because there are too many tokens. But why am I not getting any answer?
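For what it's worth, here is a rough way to check how big the buffered history has grown (a sketch using tiktoken; assumes the default ConversationBufferMemory, whose `.buffer` is a plain string, and cl100k_base as the encoding):

```python
import tiktoken

# Rough token count of the conversation history held by the memory.
# Assumes memory.buffer is a plain string (the default for
# ConversationBufferMemory) and cl100k_base as the tokenizer.
enc = tiktoken.get_encoding("cl100k_base")
history_tokens = len(enc.encode(memory.buffer))
print(f"history tokens: {history_tokens}")
# If this plus the prompt and tool text approaches the model's context
# window (e.g. 4096 for gpt-3.5-turbo), the request can fail silently.
```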
I'm not sure, but you're right, it's probably related to the conversation length. Maybe try a different memory class, e.g. ConversationSummaryBufferMemory. I know they have ones that summarize the conversation as it goes, to ensure the history does not grow too large.
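Something like this, maybe (a sketch; ConversationSummaryBufferMemory keeps recent turns verbatim and summarizes older ones once max_token_limit is hit; the 1000 limit is an arbitrary choice, and memory_key="chat_history" matches what the conversational agent expects):

```python
from langchain.agents import initialize_agent
from langchain.memory import ConversationSummaryBufferMemory

# Sketch: swap the plain buffer memory for one that summarizes older
# turns so the history stays under a token budget.
# max_token_limit=1000 is an arbitrary cap.
memory = ConversationSummaryBufferMemory(
    llm=llm, max_token_limit=1000, memory_key="chat_history"
)
agent_chain = initialize_agent(
    tools, llm, agent="conversational-react-description", memory=memory
)
```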