Hi everyone! I can't seem to find a way to separate indexing and querying with LlamaIndex. The setup is the following: I have a standalone indexing process that uses HuggingFace embeddings (so no interaction with OpenAI at all: no OpenAI embeddings or LLM calls whatsoever) to fill the vector store. And then there's a QA service that uses the index and gets deployed completely separately.
But to index the documents properly (set the embedding model, set the node parser, etc.) it seems that I must provide a service_context, which in turn requires providing an LLM. And the indexing part doesn't use the LLM at all!
Am I missing something? (Background: I used to do this with LangChain and had no problem separating indexing and querying, but now I'd like to switch to LlamaIndex because it has some document loaders that work better for me than the LangChain ones.)
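For anyone hitting the same thing, the pattern I was after looks roughly like this. A minimal sketch, assuming a pre-0.10 llama_index that still has ServiceContext; the model name and docs path are placeholders. Passing llm=None explicitly is supposed to install a MockLLM, so indexing never touches OpenAI:

```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

# Explicitly disabling the LLM swaps in a MockLLM, so no OpenAI key is needed;
# "local:..." loads a HuggingFace sentence-transformers model for embeddings.
service_context = ServiceContext.from_defaults(
    llm=None,
    embed_model="local:BAAI/bge-small-en-v1.5",  # placeholder HF model
)

documents = SimpleDirectoryReader("./docs").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```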
OK, this seems to have kind of solved itself. I'm using Zilliz as the vector store, and it looks like Zilliz support was only added very recently, because it was absent from the LlamaIndex version I installed a couple of days ago. I installed the latest version and it works.
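In case it helps someone, here's a rough sketch of how the indexing/querying split looks with Zilliz. Assumptions: a llama_index version whose MilvusVectorStore accepts Zilliz Cloud uri/token (the constructor args have shifted between versions), and the endpoint, key, collection name, and embedding model are all placeholders:

```python
from llama_index import ServiceContext, StorageContext, VectorStoreIndex
from llama_index.vector_stores import MilvusVectorStore

vector_store = MilvusVectorStore(
    uri="https://your-endpoint.zillizcloud.com",  # placeholder Zilliz Cloud endpoint
    token="your-api-key",                         # placeholder credential
    collection_name="docs",
    dim=384,  # must match the embedding model's output dimension
)

# Indexing process: embed locally and write into Zilliz, no LLM involved.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# ... VectorStoreIndex.from_documents(documents, storage_context=storage_context,
#                                     service_context=llm_free_context)

# Query service (deployed separately): attach to the existing collection
# instead of re-indexing; this is where the real LLM config would live.
query_context = ServiceContext.from_defaults(
    embed_model="local:BAAI/bge-small-en-v1.5"  # same embeddings as indexing
)
index = VectorStoreIndex.from_vector_store(vector_store, service_context=query_context)
query_engine = index.as_query_engine()
```

The key point is that the two processes only share the Zilliz collection and the embedding model; neither needs the other's ServiceContext.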