
LangChain integration

Hello everyone, I am trying to integrate LlamaIndex with LangChain. What is the best way to do that? I have seen an implementation in which LlamaIndex is used as a tool in a LangChain agent:
from langchain.agents import Tool

tools = [
    Tool(
        name="GPT Index",
        func=lambda q: str(index.query(q)),
        description="useful for when you want to answer questions about the author. The input to this tool should be a complete English sentence.",
        return_direct=True,
    ),
]


Can I use that even if my index is a composable graph?
Yup! That's the easiest way to do it. A graph should still work too πŸ‘
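Something like this should work, as a rough sketch. I'm assuming your graph was built as a ComposableGraph and saved to disk with the usual save/load API; the file name and tool description are placeholders:

from langchain.agents import Tool
from llama_index import ComposableGraph

# load the previously saved graph (path is a placeholder)
graph = ComposableGraph.load_from_disk("graph.json")

tools = [
    Tool(
        name="Graph Index",
        # querying a composable graph looks the same as querying a single index
        func=lambda q: str(graph.query(q)),
        description="useful for when you want to answer questions about the author",
        return_direct=True,
    ),
]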
Great, thanks. Should I worry about memory though?
I think memory should be fine? Unless your saved graph index.json is very very large
Oh, I see. But I mean token size, meaning the maximum number of tokens the LLM will allow is limited. In normal LlamaIndex, we only give it the node and the query at a time, I assume. But when integrating with LangChain, we also give the model the history. Does that not exceed the token limit?
LangChain has different types of memory classes to help manage the token limit 👍
Yes, understandable. Thanks.
I have implemented the ConversationBufferMemory
agent_chain = initialize_agent(tools, llm, agent="conversational-react-description", memory=memory)
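For context, my full setup looks roughly like this. The imports, the plain OpenAI LLM wrapper, and the chat_history memory key are my assumptions based on the LangChain docs:

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)

# the conversational agent expects its history under the "chat_history" key
memory = ConversationBufferMemory(memory_key="chat_history")

agent_chain = initialize_agent(
    tools,
    llm,
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)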
When I have a long history, I get no response and no error. I suspect this is because there are too many tokens. But why am I not getting any answer?
I'm not sure, but you are right, it's probably related to the conversation length. Maybe try using a different memory class. I know they have others that summarize the conversation as they go, to keep the history from getting too large.
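For example, ConversationSummaryBufferMemory keeps a running LLM-written summary of older turns and only the most recent messages verbatim. This is an untested sketch, with the class and parameter names taken from the LangChain memory docs:

from langchain.memory import ConversationSummaryBufferMemory

# older turns get summarized by the LLM; recent turns stay verbatim,
# keeping the stored history under roughly max_token_limit tokens
memory = ConversationSummaryBufferMemory(
    llm=llm,
    memory_key="chat_history",
    max_token_limit=1000,
)

agent_chain = initialize_agent(
    tools, llm, agent="conversational-react-description", memory=memory
)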
Yes, they have summarizer memories, but I was wondering why there was no error from the LangChain side.
I'm not sure about that πŸ€” but I agree, I would expect an error.

I'm not a complete expert with LangChain sadly lol
I know. Thank you though.