High context length LLM vs Low context length

If I use a model with a higher context length (128k) versus a lower context length (16k), will the higher context length help when I do RAG?

Does a larger context length model call for a larger chunk size or node size?

Can someone please explain, or link me to an article that covers this? Thank you.
An explanation is still needed, thank you.
Did some research: I can configure the similarity top-k to retrieve more chunks.
Say I use the Claude 3 models, which have a much higher context length. Is a high similarity top-k always better than a low one, or are there scenarios where I'd want a lower similarity top-k?
If you retrieve too much data, it becomes harder for the LLM to synthesize an accurate response.
Latency and cost will also be higher.
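The tradeoff above can be sketched with a toy retriever. This is a minimal illustration, not LlamaIndex's actual implementation: the chunk store, the embeddings, and the `retrieve` helper are all hypothetical. Raising `top_k` pulls more chunks into the prompt (filling a long context window like Claude 3's), at the cost of a longer, noisier, more expensive prompt.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical in-memory chunk store: (text, embedding) pairs.
# Real systems would embed these with an embedding model.
CHUNKS = [
    ("Claude 3 supports long contexts.", [0.9, 0.1, 0.0]),
    ("RAG retrieves chunks before generation.", [0.7, 0.6, 0.1]),
    ("Top-k controls how many chunks are retrieved.", [0.2, 0.9, 0.3]),
    ("Unrelated cooking recipe.", [0.0, 0.1, 0.9]),
]

def retrieve(query_emb, top_k):
    """Return the top_k chunk texts most similar to the query embedding."""
    scored = sorted(CHUNKS, key=lambda c: cosine(query_emb, c[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

query = [0.8, 0.5, 0.1]
# Low top_k: tighter, cheaper prompt; relies on retrieval being precise.
print(retrieve(query, top_k=2))
# High top_k: more context for the LLM, but more tokens, latency, and noise.
print(retrieve(query, top_k=4))
```

Note that with `top_k=4` the irrelevant "cooking recipe" chunk gets stuffed into the prompt too, which is exactly the "harder to synthesize an accurate response" problem mentioned above: a bigger context window lets you retrieve more, but it doesn't make the extra chunks relevant.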