When an index is created on a separate server via VectorStoreIndex.from_vector_store(vector_store=vector_store) and used as a query engine tool, how can we specify the LLM and embedding models so they point to Azure OpenAI instead of OpenAI?
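A minimal sketch of one way to do this with LlamaIndex's Azure OpenAI integrations: pass an AzureOpenAI LLM and an AzureOpenAIEmbedding either globally via Settings or directly to from_vector_store / as_query_engine. The deployment names, endpoint, and API version below are placeholders, not real values.

```python
from llama_index.core import Settings, VectorStoreIndex
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

# Hypothetical deployment names and endpoint -- replace with your own.
llm = AzureOpenAI(
    engine="my-gpt4-deployment",          # Azure deployment name
    model="gpt-4o",
    api_key="<azure-api-key>",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2024-02-15-preview",
)
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-embedding-deployment",
    api_key="<azure-api-key>",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2024-02-15-preview",
)

# Option 1: set them globally so every index/query engine picks them up.
Settings.llm = llm
Settings.embed_model = embed_model

# Option 2: pass them explicitly where the index is rebuilt from the store.
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store,  # assumed to exist on this server
    embed_model=embed_model,
)
query_engine = index.as_query_engine(llm=llm)
```

The embed_model must match the model that originally populated the vector store, otherwise query embeddings won't live in the same space as the stored ones.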
Also, is there a way to query only a subset of the table? The challenge is that the DB team wants to create just one table for all the different knowledge bases and use Views to differentiate them, so my app needs to apply a further filter after loading the PGVectorStore object.
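One way to scope queries to a subset, assuming each row's metadata records which knowledge base it belongs to, is LlamaIndex's metadata filters, which PGVectorStore translates into a WHERE clause on the metadata column. The "knowledge_base" key and "kb_a" value below are hypothetical:

```python
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter

# Hypothetical metadata key/value identifying one knowledge base.
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="knowledge_base", value="kb_a")]
)

# `index` is the VectorStoreIndex built from the shared PGVectorStore;
# the filters are pushed down to the vector store at retrieval time.
query_engine = index.as_query_engine(filters=filters)
response = query_engine.query("What does kb_a say about X?")
```

Alternatively, since PGVectorStore only issues reads at query time, it may be possible to point a second PGVectorStore's table_name at one of the DB team's Views instead of the base table; that depends on your PGVectorStore version and is worth testing before relying on it.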