Hello guys I have a big question again

Hello guys, I have a big question again xD I load the base model, and after that I always generate a new index from the documents when the user asks for something. Does it work the way I think, i.e. the model and index are reset on every request? It seems to get more confused the longer the conversation goes on, so I'm not sure xD I save the chat logs inside the document folder so the bot knows what this user has already asked 😄
What's the full use case here? How are you using llama index?
It's just a chat implementation with the option to upload files. On every request I regenerate the index like this:

```python
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)
```

In the documents folder there is a file with the chat log, and the context tells the model that this was the conversation the two had previously. PS: you are a great and helpful person 😄
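For reference, here is that per-request rebuild spelled out end to end. This is a minimal sketch, not the poster's exact code: it assumes the uploads and the chat-log file sit in a folder named documents, that the folder is read with SimpleDirectoryReader, and that a default ServiceContext is acceptable. It also illustrates the answer to the original question: the index is built from scratch on every request, and only the chat-log file carries state between requests.

```python
from llama_index import (
    GPTSimpleVectorIndex,
    ServiceContext,
    SimpleDirectoryReader,
)

# Assumption: "documents" holds the uploaded files plus the chat-log file.
documents = SimpleDirectoryReader("documents").load_data()
service_context = ServiceContext.from_defaults()

# Rebuilt on every request, so the index always starts fresh;
# nothing from the previous request survives except what is on disk.
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)

response = index.query("What has this user already asked?")
print(str(response))
```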
Ahh I see. Have you looked into using langchain at all? If you need to remember previous parts of the conversation, it handles that a little better. Within langchain you can then use llama index as a custom "tool" that the agent can use, so if you have a bunch of documents you want the chatbot to have access to, llama index can help provide that.
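A rough sketch of that pattern, assuming the same GPTSimpleVectorIndex from above and the classic LangChain agent API of that era; the tool name, description, and sample question are made up for illustration:

```python
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# Wrap the LlamaIndex query call as a tool the agent can choose to invoke.
docs_tool = Tool(
    name="document_index",
    func=lambda q: str(index.query(q)),
    description="Answers questions about the user's uploaded documents.",
)

# The agent's memory replaces the chat-log-file-in-the-index workaround.
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools=[docs_tool],
    llm=OpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
)

agent.run("What do my uploaded documents say about billing?")
```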
Ok, I see, I will try to implement this then. Thanks, now I understand it a little better again 😄 I already use Langchain, but not that way xD
haha sounds good!
The problem will be that I use a C# .NET Core frontend and only call Python to get answers, so an in-memory cache of the chat log is not that usable for me, as it has to hold multiple conversations with different ids xD I will first read everything, and then I'll check what I can do with the additional information from you 😄 Thank you 😄
I think the memory modules from langchain are all serializable!
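For instance, a ConversationBufferMemory can be round-tripped through JSON, one file per conversation, so a stateless Python backend can reload the right history for whichever conversation id the C# frontend sends. A sketch, assuming hypothetical chatlogs/<id>.json files; the helper names are invented here:

```python
import json

from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

def load_memory(conversation_id: str) -> ConversationBufferMemory:
    """Rebuild this conversation's memory from disk (empty if new)."""
    memory = ConversationBufferMemory(memory_key="chat_history")
    try:
        with open(f"chatlogs/{conversation_id}.json") as f:
            memory.chat_memory.messages = messages_from_dict(json.load(f))
    except FileNotFoundError:
        pass  # first message of a new conversation
    return memory

def save_memory(memory: ConversationBufferMemory, conversation_id: str) -> None:
    """Persist the messages so the next request can pick up where this one left off."""
    with open(f"chatlogs/{conversation_id}.json", "w") as f:
        json.dump(messages_to_dict(memory.chat_memory.messages), f)
```

Each request would then call load_memory(id), run the agent, and call save_memory(memory, id) before returning the answer.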