Is there an option to recreate the MultiModalVectorStoreIndex once the collections are already in a Qdrant cluster? from_vector_store does not work, since Qdrant only takes one collection per vector store. Which approach is doable? The only solution I see is creating a local client, but I want to do it from a cluster.
Which options would you choose from LlamaIndex to generate synthetic exam questions? I want the model to first look at sample questions and then go to the knowledge base to generate similar ones.
I am not yet familiar with the new Workflows, so would that work better, or would it be a better option to use a RouterQueryEngine with agents?
@kapa.ai Could anyone point out which LlamaIndex library can extract text from a PDF as fast as possible? Let's say the PDF also contains images, but I just need the text, as quickly as possible. Any hints, or any better alternatives?