Find answers from the community

mat
Joined September 25, 2024
Hello, I have a quick question: on llama-index 0.4.14, does GPTSimpleVectorIndex call the embedding asynchronously by default?
8 comments
Hi folks, a vector store question: do the vector stores in GPT Index (e.g. GPTSimpleVectorIndex) support separate embedding functions for documents and queries?

e.g. on https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models#text-search-embedding, there are different endpoints for embedding a query vs a document

Is it as simple as implementing the langchain.Embeddings (https://github.com/hwchase17/langchain/blob/9833fcfe32eab8b419a6624f02c2536ac4115ed3/langchain/embeddings/base.py) interface?
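A minimal sketch of what that could look like, assuming the langchain `Embeddings` base class is just the two methods `embed_documents` and `embed_query` (the `fake_*` functions below are hypothetical stand-ins for calls to the two Azure endpoints, not real API calls):

```python
# Sketch: separate embedding functions for documents vs. queries,
# shaped like langchain.embeddings.base.Embeddings.
from typing import List


def fake_doc_embed(text: str) -> List[float]:
    # Placeholder: a real implementation would call the *-doc endpoint.
    return [float(len(text)), 0.0]


def fake_query_embed(text: str) -> List[float]:
    # Placeholder: a real implementation would call the *-query endpoint.
    return [float(len(text)), 1.0]


class SplitEmbeddings:
    """Exposes the same two methods as langchain's Embeddings interface."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Used when indexing documents.
        return [fake_doc_embed(t) for t in texts]

    def embed_query(self, text: str) -> List[float]:
        # Used at query time.
        return fake_query_embed(text)
```

Since the interface already splits document embedding from query embedding into two methods, routing them to different endpoints is a matter of implementing each method against the corresponding endpoint.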
8 comments
Possibly a noob Python question: any tips on sharing a GPT Index object (not too concerned about the type of index) between Python processes?

For context, I'm running a Flask application, and I'd like to have multiple workers serving concurrent requests. I'd like them all to access the same instances of the GPT indices I end up generating, rather than rebuilding them in each worker.
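One common pattern is to build and persist the index once, then have each worker lazily load it into a module-level cache on first use. A minimal sketch, where `load_index` is a hypothetical stand-in for the real deserialization call (e.g. something like GPTSimpleVectorIndex's load-from-disk):

```python
# Sketch: per-worker lazy loading of a prebuilt index.
from functools import lru_cache


def load_index(path: str):
    # Placeholder: a real implementation would deserialize the
    # persisted index from `path` instead of returning a dict.
    return {"path": path, "loaded": True}


@lru_cache(maxsize=None)
def get_index(path: str = "index.json"):
    # Called from request handlers; loads at most once per worker
    # process, then returns the cached object.
    return load_index(path)
```

Note this gives one copy per worker process rather than truly shared memory; if the index is large, loading it before the server forks (e.g. gunicorn's preload) lets workers share the pages copy-on-write, and anything beyond that usually means moving the index behind a separate service.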
6 comments