I have a question. Building the index works fine; the docs explain that part really well. Say I want to build a graph-based index over many types of source documents:
- SQL DBs
- Spreadsheets
- PDFs
- Webpages
In my case, I'm ingesting them all in one go using llama_index: loading documents -> creating an index -> building a query engine on top -> querying it. That part is easy (rough sketch below).
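For reference, here's roughly what my ingestion path looks like. A minimal sketch: the `./data` directory, the `my-index` index name, and the env var are placeholders, and import paths may differ across llama_index versions.

```python
import os

from pinecone import Pinecone
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Connect to Pinecone and wrap the index as a llama_index vector store.
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
pinecone_index = pc.Index("my-index")  # placeholder index name
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Load documents from disk (PDFs, spreadsheets, etc.), embed them,
# and push the resulting nodes into Pinecone in one go.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Query engine on top of the freshly built index.
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about X?"))
```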
However, at inference time, when I'm not building the index and just want to connect directly to the Pinecone vector DB and query it, there's no straightforward way that I can find. Everywhere the docs mention using `load_data` on documents, but what if I don't want nodes from local storage, but from a remote index? I think I'm missing a key conceptual piece of llama_index. How do I make this work?
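Concretely, what I'm after at inference time is something like the sketch below: attach straight to the already-populated Pinecone index and query it, with no local documents involved. I'm assuming `VectorStoreIndex.from_vector_store` is the intended entry point here, but I'm not sure; names are placeholders again.

```python
import os

from pinecone import Pinecone
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Attach to the remote index; no SimpleDirectoryReader, no load_data,
# no re-embedding of source documents.
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
vector_store = PineconeVectorStore(pinecone_index=pc.Index("my-index"))

# Rebuild an index object purely from the remote vector store.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

query_engine = index.as_query_engine()
print(query_engine.query("Same question, answered from the remote index"))
```

Is that the right pattern, or is there a different mechanism for querying a remote index directly?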