discorthur (joined September 25, 2024)
For vector stores on Supabase created through VectorStoreIndex, how can we specify the LLM and embedding models to point to AzureOpenAI instead of OpenAI, given that the index was fetched via VectorStoreIndex.from_vector_store(vector_store=vector_store) on a separate server and used as a query engine tool?
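In case it helps future readers, one way this is commonly wired up is to pass Azure-backed models explicitly when reconstructing the index. A minimal sketch using the llama_index Azure integrations (the deployment names, endpoint, key, and API version below are placeholders, and module paths vary between llama_index versions):

```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

# Placeholders -- substitute your own Azure deployment details.
llm = AzureOpenAI(
    model="gpt-35-turbo",
    deployment_name="my-gpt-deployment",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="...",
    api_version="2023-07-01-preview",
)
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-embedding-deployment",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="...",
    api_version="2023-07-01-preview",
)

# vector_store: the Supabase-backed store from the question.
# The embed_model can be passed when reconstructing the index,
# and the llm when creating the query engine.
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store, embed_model=embed_model
)
query_engine = index.as_query_engine(llm=llm, similarity_top_k=12)
```

Alternatively, `Settings.llm` and `Settings.embed_model` from `llama_index.core` can be set once globally on the server that loads the index, so every index and query engine created there defaults to the Azure models.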

Besides, the vecs table right now has around 500 nodes, each node with 500 characters, and top_k is set to 12. What we observed is that there were 12 calls to the OpenAI embedding model spanning over 6 seconds. Is that normal? And why was the embedding model called 12 times for the same query text when it comes to similarity calculation?
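On the second question, it may be worth confirming whether the embedding model is really invoked 12 times for a single query, or whether the 12 calls came from 12 separate queries. A library-agnostic counter wrapper (the helper name here is made up) can be dropped around any embedding function to check:

```python
from functools import wraps

def count_calls(fn):
    """Wrap a function so each invocation increments a .calls counter."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

# Example: wrap a stand-in embedding function and call it three times.
fake_embed = count_calls(lambda text: [0.0] * 8)
for _ in range(3):
    fake_embed("same query text")
print(fake_embed.calls)  # -> 3: one count per invocation
```

In a real setup the same wrapper could be applied to the embedding model's `get_query_embedding` method; a single similarity search should only need one query embedding, regardless of `top_k`, so 12 calls for one query would point at something upstream (e.g. the tool being invoked repeatedly) rather than the similarity calculation itself.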
23 comments
@Logan M Under PGVectorStore, can we use the Select method to query a subset of the table? The challenge is that the DB team wants to keep just one table for different knowledge bases and use Views to differentiate them, so my app needs to make a further query after loading the PGVectorStore object.

https://gpt-index.readthedocs.io/en/stable/api_reference/storage/vector_store.html#llama_index.vector_stores.PGVectorStore.Select
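Regardless of whether `Select` works for this, a common pattern for serving several knowledge bases from one table is to tag nodes with metadata at ingestion time and filter at query time. A sketch assuming a hypothetical `kb_name` metadata key that the nodes were ingested with (module paths are for recent llama_index versions):

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# vector_store: the shared PGVectorStore instance backed by the single table.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

# Restrict retrieval to one knowledge base via a metadata filter;
# "kb_name" / "hr-policies" are assumed names for illustration.
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="kb_name", value="hr-policies")]
)
query_engine = index.as_query_engine(filters=filters)
```

This keeps the filtering inside the vector store query itself (it is translated into a WHERE clause on the metadata column), so no second query against the Views is needed, but it only works if each node carries the distinguishing metadata.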
1 comment