Hi, I got stuck in 2 areas.
  1. Getting a ValueError, "Requested tokens (4323) exceed context window of 3900", when I use SummaryIndex as the query engine. (A minimal sketch of this setup follows the list.)
  2. Getting the correct output, but with unwanted or irrelevant text mixed into the response. What can I do so that the response contains only content relevant to the query?
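For reference, a minimal sketch of the kind of setup that can trigger the first error (the directory path and query are assumptions, not from the thread). The "Requested tokens (...) exceed context window of 3900" wording matches llama-cpp-python, and 3900 is llama_index's default context window, so a long document pushed through a SummaryIndex can overflow it:

```python
from llama_index.core import SimpleDirectoryReader, SummaryIndex

# Load some documents (the path is assumed for illustration).
documents = SimpleDirectoryReader("./data").load_data()

# SummaryIndex sends every node to the LLM at query time, so with the
# default 3900-token context window a long document can raise:
#   ValueError: Requested tokens (4323) exceed context window of 3900
index = SummaryIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the document.")
```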
L
a
A
6 comments
What LLM are you using? It seems like you probably changed a setting that caused that first error.

Second issue is just prompt engineering 🤷‍♂️
What type of prompt would help here?
For RAG, if the context contains similar-looking content in multiple places, how can the LLM distinguish between the occurrences?
Can you be more specific about how I can structure the prompt to get answers grounded only in the context?
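One common llama_index pattern for this (a sketch, not a reply from the thread; the exact instruction wording is an assumption) is to override the QA prompt so the model must answer strictly from the retrieved context:

```python
from llama_index.core import PromptTemplate

# Stricter QA prompt: answer only from the retrieved context.
qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, answer the query.\n"
    "If the answer is not in the context, reply exactly: Not found in context.\n"
    "Do not add any commentary beyond the answer itself.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Works for a vector or summary index; reusing `index` from the sketch above.
query_engine = index.as_query_engine(text_qa_template=qa_prompt)
```

For near-duplicate chunks, attaching distinguishing metadata (source, section, page number) to each node can also help, since node metadata is included in the context the LLM sees by default.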
Hi, I'm using Mistral as the LLM. The token-related issue only comes up when I use the summary index functionality; with the vector index I don't face any problem.
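For the token error specifically, here is a sketch of the two usual knobs, assuming Mistral is running through llama-cpp-python (the 3900 default and that exact error wording come from that stack; the model path is an assumption):

```python
from llama_index.core import Settings
from llama_index.llms.llama_cpp import LlamaCPP

# Raise the window to what the model actually supports: Mistral-7B
# handles 8k tokens, while the default here is only 3900.
Settings.llm = LlamaCPP(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # assumed path
    context_window=8000,
    max_new_tokens=512,
)

# And/or summarize hierarchically so no single LLM call has to hold the
# whole document: chunk summaries are merged in a second pass.
# `index` is the SummaryIndex from the earlier sketch.
query_engine = index.as_query_engine(response_mode="tree_summarize")
```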
@Logan M Also, I'm looking into PDF table extraction for a robust RAG implementation.
Please share your thoughts.
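Not something the thread settles, but one common starting point is pdfplumber: extract tables page by page and serialize each one as text into llama_index Documents for indexing. The file name is an assumption:

```python
import pdfplumber
from llama_index.core import Document

table_docs = []
with pdfplumber.open("report.pdf") as pdf:  # assumed file name
    for page_no, page in enumerate(pdf.pages, start=1):
        for table in page.extract_tables():
            # Each table is a list of rows of cells (cells may be None);
            # render it pipe-separated so row/column structure survives.
            text = "\n".join(
                " | ".join(cell or "" for cell in row) for row in table
            )
            table_docs.append(Document(text=text, metadata={"page": page_no}))
```

Camelot or LlamaParse are often suggested alternatives for trickier layouts, such as tables with merged cells or no ruling lines.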