
privateGPT/private_gpt/components/embedd...

Hi everyone

I'm trying to use PrivateGPT, which is mostly based on llama_index. For now I have an issue when running embedding inference on AWS SageMaker. I'm using the BAAI/bge-large-en-v1.5 embedding model which, I think, is quite common.

In PGPT, SageMaker embedding is done through a custom embedding in llama_index; the current implementation is available here

For now:
  • My model is deployed
  • PGPT is able to request the model
  • The model sends back a vector
  • (I think) this vector does not fit my vector database, possibly because of a wrong dimensionality (see the sketch after this list)
Currently I have the following issue (see attachment).

Do you guys have any idea where to look?

For now my guess is to look at parameters like max_length (hoping it would reduce the dimension of my vector), but I don't know if that's a good idea or where to look... Any help appreciated 🙂
[Attachment: image.png]
1 comment
It seems like I found my issue: it was related to the fact that my vector database had been created with the wrong dimension 🙂

So just recreating my database with the dimension of the model from AWS made my issue disappear 🙂