Hi back @Logan M, I've just tried to implement the CondensePlusContextChatEngine, but when using the summary index retriever with a large number of documents, it gives me a token limit error
Oh yea... it's not ideal to use with a summary index 😅 Since it will put all retrieved nodes into a single LLM call (and a summary index retrieves ALL nodes 😬)
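To put rough numbers on that point, here is a plain-Python sketch (no LlamaIndex; the chunk count, chunk size, and top-k value are all invented for illustration) comparing how many tokens a retrieve-everything summary index pushes into one LLM call versus a top-k vector retriever:

```python
# Illustrative sketch only (no LlamaIndex): compare prompt sizes for a
# summary-index retriever vs. a top-k vector retriever.
# All sizes below are made up for the example.
docs = [f"chunk {i} " * 200 for i in range(500)]  # 500 chunks, 400 words each

def rough_tokens(text: str) -> int:
    # crude whitespace "tokenizer"; real tokenizers count differently
    return len(text.split())

# summary-index behaviour: ALL retrieved nodes go into the prompt
all_tokens = sum(rough_tokens(d) for d in docs)

# vector-retriever behaviour: only the k most similar chunks go in
top_k = 3
top_k_tokens = sum(rough_tokens(d) for d in docs[:top_k])

print(all_tokens)    # 200000 -- far past a typical context window
print(top_k_tokens)  # 1200 -- easily fits
```

This is why swapping the summary index retriever for a vector retriever with a small `similarity_top_k` is the usual way to stay under the limit.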
Oh yeah, I was thinking it did a summary,
then another on top of that, etc.
or something like that
but it's using all nodes (and async for some, when enabled)
is it possible to make it work like that? If not, going back to my previous request, what would be the best approach?
To change the question just by remaking it without taking the chat history into account, maybe with only the documents in context but not the chat, or at least without the history influencing the query too much
like CondenseQuestion sometimes rewrites the queries really drastically; maybe if I could change the prompt it would be better
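For what it's worth, the condense step is driven by a prompt template, so in principle a gentler template can bias the rewrite toward minimal changes. Below is a plain-Python sketch of such a template; the wording is invented, and only the string formatting is shown (LlamaIndex would wrap something like this in its own prompt type, with parameter names that vary by version):

```python
# Hypothetical "gentle" condense prompt asking the LLM to change the
# question as little as possible. Plain string template, no LlamaIndex.
GENTLE_CONDENSE_PROMPT = (
    "Given the conversation below and a follow-up question, rewrite the "
    "follow-up as a standalone question. Change the original wording as "
    "little as possible; only resolve pronouns and missing references.\n"
    "Chat history:\n{chat_history}\n"
    "Follow-up question: {question}\n"
    "Standalone question:"
)

filled = GENTLE_CONDENSE_PROMPT.format(
    chat_history="user: tell me about the summary index",
    question="does it retrieve all nodes?",
)
print(filled)
```

In a real setup you would pass a template like this as the custom condense prompt when constructing the chat engine; check your LlamaIndex version's docs for the exact argument name.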
also, what's the best practice for building a chat engine that we ask to choose between multiple query engines, with a chat memory, but that doesn't rewrite the wording of the queries?
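One pattern for that last question, sketched in plain Python (all names here are invented, and the keyword match stands in for a real selector such as an LLM-based router): pick an engine per message, keep the history in a memory list, but forward the user's question verbatim instead of rewriting it.

```python
# Toy router chat engine: chooses between "query engines" by keyword,
# keeps chat memory, and never rewrites the user's question.
# Everything here is invented for illustration.
class EchoEngine:
    def __init__(self, name):
        self.name = name

    def query(self, question):
        return f"[{self.name}] answer to: {question}"

class RouterChat:
    def __init__(self, engines):
        self.engines = engines          # {keyword: engine}
        self.default = next(iter(engines.values()))
        self.memory = []                # plain list as chat memory

    def chat(self, question):
        engine = self.default
        for keyword, candidate in self.engines.items():
            if keyword in question.lower():
                engine = candidate
                break
        answer = engine.query(question)  # question passed through untouched
        self.memory.append((question, answer))
        return answer

chat = RouterChat({"billing": EchoEngine("billing"), "docs": EchoEngine("docs")})
print(chat.chat("Where are the docs for retrievers?"))
print(len(chat.memory))  # 1
```

A production version would replace the keyword match with an LLM selector and the list with a proper memory buffer, but the key point is that `question` reaches the chosen engine unchanged.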