
Updated 2 months ago

Indexing

Am I the only one who thinks that storing custom information in a vector database (like Pinecone) and then using it to retrieve some context doesn't achieve the normal level of conversation smoothness? I am basically unable to get the LLM to answer me and give me the info I am looking for (although it's in the documents).
1 comment
There are a lot of knobs to tweak to make retrieval better:

Top k, adjusting chunk size, writing a custom retriever
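To make those knobs concrete, here is a minimal, self-contained sketch of a retrieval pipeline with a `chunk_size` and `top_k` parameter. It uses a toy bag-of-words similarity instead of real embeddings, and all function names are hypothetical, not from Pinecone or any framework; a real setup would embed chunks with a model and query the vector database instead.

```python
import math
from collections import Counter

def chunk(text, chunk_size=1):
    # Group sentences into chunks of `chunk_size` sentences each.
    # Chunk size controls how much context each retrieved hit carries.
    sents = [s.strip() for s in text.split(".") if s.strip()]
    return [". ".join(sents[i:i + chunk_size]) for i in range(0, len(sents), chunk_size)]

def embed(text):
    # Toy bag-of-words "embedding"; a real system calls an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    # Return the `top_k` most similar chunks; raising top_k trades
    # precision for recall when answers are spread across chunks.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

docs = ("Pinecone stores vectors. Retrieval returns the nearest chunks. "
        "Chunk size changes how much context fits")
hits = retrieve("what does retrieval return about nearest chunks",
                chunk(docs, chunk_size=1), top_k=1)
print(hits)
```

A custom retriever is the same idea with the scoring function swapped out, e.g. mixing keyword overlap with vector similarity, or re-ranking the top hits before they reach the LLM.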