richard
Hi folks, I'm running into an issue using LlamaIndex to query my Postgres vector store.

I used LlamaIndex to ingest a large amount of data (~18M rows) into a self-hosted Postgres DB with pgvector enabled, using OpenAI embeddings.
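
For context, the ingestion looks roughly like this (a minimal sketch, not my exact code; the connection parameters, table name, and document loading are placeholders):

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.postgres import PGVectorStore

# Placeholder connection details; embed_dim matches the OpenAI embedding size.
vector_store = PGVectorStore.from_params(
    database="vectordb",
    host="localhost",
    port=5432,
    user="postgres",
    password="password",
    table_name="my_documents",
    embed_dim=1536,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# In reality this is ~18M records loaded in batches.
documents = [Document(text="example text")]
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```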

When I run queries against this DB, I get search hits maybe 1% of the time; the other 99% of the time I get an empty response.
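
I query the existing table roughly like this (again a sketch with the same placeholder names as above):

```python
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.postgres import PGVectorStore

# Reconnect to the existing pgvector table (placeholder connection params).
vector_store = PGVectorStore.from_params(
    database="vectordb",
    host="localhost",
    port=5432,
    user="postgres",
    password="password",
    table_name="my_documents",
    embed_dim=1536,
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("example question about my data")
print(response)  # usually empty against the 18M-row table
```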

As a test, I then created a separate pgvector container with a small subset of my data (~100 rows), and against that one I get the expected results almost every time.

Are these results expected? I am not too familiar with how vector searches work under the hood, and these observations puzzle me. I would love some guidance from someone more knowledgeable than me. Thanks in advance!