Hey!
Today a client told me that the custom GPTs they've "built" are giving worse results than a few months ago.
Thinking about it: they built the GPT back when files were embedded with text-embedding-ada-002, and now they upload new files to query against that same GPT. Could it be that files embedded with text-embedding-3-large don't talk properly to a vector DB that was already populated with the old embeddings?
I've told them to delete the files and upload them again, guessing that re-embedding everything with the current model will improve things.
Of course, I've also told them that with a solution like LlamaIndex, where you pick the embedding model yourself and keep using the same one, this wouldn't happen.
Any thoughts on this?
2 comments
Yeah, the GPTs have some glaring issues when it comes to production use cases. You basically don't have any control over the context window or the RAG setup.

If you build it in LlamaIndex this wouldn't happen, since you can manually set the embedding model, including its parameters.
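
A minimal sketch of what I mean (assumes the post-v0.10 llama-index package layout with llama-index-embeddings-openai installed and an OpenAI key in the environment; the ./data folder and the query are made up):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding

# Pin one embedding model globally; LlamaIndex then uses it for both
# indexing and querying, so stored vectors and query vectors always
# come from the same model.
Settings.embed_model = OpenAIEmbedding(
    model="text-embedding-3-large",
    # dimensions=1024,  # the v3 models also let you fix the output size
)

documents = SimpleDirectoryReader("./data").load_data()  # hypothetical folder
index = VectorStoreIndex.from_documents(documents)

response = index.as_query_engine().query("What changed last quarter?")
print(response)
```

If you ever do switch models, you rebuild the index deliberately instead of it changing under you.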

Also, if there were an actual mismatch between the embedding model and the previously created vectors, it should throw an error (the default output dimensions don't even match) rather than silently give worse results.
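
Quick way to see why (a minimal sketch with the official openai Python client; assumes OPENAI_API_KEY is set):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

old = client.embeddings.create(model="text-embedding-ada-002", input="hello")
new = client.embeddings.create(model="text-embedding-3-large", input="hello")

print(len(old.data[0].embedding))  # 1536
print(len(new.data[0].embedding))  # 3072 by default
```

A store built on 1536-dim ada-002 vectors should reject 3072-dim queries outright. The sneaky case would be text-embedding-3-large truncated to 1536 via its dimensions parameter: same size, different vector space, so nothing errors but retrieval quality quietly drops, which would match what your client is seeing.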