Updated 3 months ago

Vector index

It won't, because the number of documents returned by the search is limited (and so is the prompt context). By design, the task the index solves is finding an answer within a single document, in the ideal scenario.
6 comments
A vector index should work tbh

All your documents get broken into small chunks (1024 tokens by default)

The vector index can retrieve as many nodes as you want across all chunks (default is 2)

If you retrieve more text than fits into a single LLM call, the answer is refined across multiple LLM calls
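The retrieval step described above can be sketched in a few lines. This is a toy illustration, not the actual library internals: it uses a bag-of-words "embedding" and cosine similarity so it runs with no dependencies, and the chunk size, document texts, and function names are all made up for the example.

```python
# Toy sketch of a vector index: split documents into chunks,
# "embed" each chunk, and retrieve the top-k most similar chunks
# for a query. Real indexes use ~1024-token chunks and learned
# embeddings; this uses word counts for illustration only.
from collections import Counter
from math import sqrt

CHUNK_SIZE = 20  # words per chunk (stand-in for the 1024-token default)

def chunk(text, size=CHUNK_SIZE):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # bag-of-words "embedding" stand-in
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(chunks, query, top_k=2):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:top_k]  # only the k most relevant chunks reach the LLM

docs = [
    "The billing service handles invoices and payment retries.",
    "The auth service issues tokens and validates sessions for every app.",
]
all_chunks = [c for d in docs for c in chunk(d)]
print(retrieve(all_chunks, "invoices and payment retries", top_k=1))
```

The key point is the last line of `retrieve`: no matter how many documents you index, only `top_k` chunks are ever passed to the LLM, which is why the context stays small.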
It would be a lot more text in this case, which would take quite a long time to execute across multiple calls. I was thinking more in terms of knowledge graphs and metadata added on top.
tbh I think our thoughts are not aligned haha

Have you tried with a vector index yet? I would try it out first and see if it meets your needs πŸ™‚
I'll try tomorrow. I dismissed it initially because I thought it'd have to return all the documents and use the context from all of them, which would result in a huge context at the end.
Nope, it's pretty optimized πŸ™‚ It only uses the most relevant context πŸ’ͺ
But what if all the context is relevant? I.e., in my example, every document talks about an application, so I'd basically be asking it to count the number of documents that mention a unique application.
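The counting question above is an aggregation query, which top-k retrieval can't answer directly, since it only ever sees k chunks. A sketch of the alternative being hinted at (metadata or keyword scans over all documents): the documents, app names, and `count_mentions` helper here are all hypothetical, invented for illustration.

```python
# Hypothetical sketch: for "how many documents mention application X",
# scan every document (or its extracted metadata) rather than asking
# an LLM over a handful of retrieved chunks.
docs = [
    {"id": 1, "text": "Release notes for the Falcon app, version 2.1."},
    {"id": 2, "text": "Incident report: Falcon outage in eu-west."},
    {"id": 3, "text": "Onboarding guide for the Heron dashboard."},
]

def count_mentions(docs, application):
    # naive substring match; a real pipeline might match against
    # an extracted metadata field or a keyword index instead
    return sum(1 for d in docs if application.lower() in d["text"].lower())

print(count_mentions(docs, "Falcon"))  # 2
```

This is exact and cheap because it never involves the LLM context window at all, which is the mismatch being discussed: retrieval answers "find the relevant passage", not "count across everything".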