Am I the only one who thinks that storing custom information in a vector database (like Pinecone) and then using it to retrieve context doesn't achieve the normal level of conversational smoothness? I basically can't get the LLM to answer my questions with the info I'm looking for, even though it's in the documents.
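For reference, the pattern I mean is roughly this retrieval-augmented setup. This is only a toy sketch: a bag-of-words "embedding" and an in-memory list stand in for a real embedding model and the Pinecone index, and the documents and query are made up for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-count vector.
    # A real setup would call an embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical documents; the list below stands in for a vector-DB index.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is open Monday to Friday, 9am to 5pm.",
    "Support tickets are answered within 24 hours.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    # Rank stored documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# The retrieved chunk is then stuffed into the LLM prompt as context.
question = "What is the refund policy?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

My experience is that whether the answer comes back smoothly depends heavily on this retrieval step: if the top-ranked chunk isn't the one that actually contains the answer, the LLM has nothing to work with, no matter how good the model is.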