Struggling a lot with my basic pipeline. I have a Milvus DB that I already stored embeddings in; I checked with the Milvus client that it does in fact have data in there. For the life of me, however, I can't figure out how to do basic RAG with LlamaIndex. The concept sprawl and version fragility isn't as bad as LangChain's, but it's slowly getting there. I can't find one basic end-to-end example of this anywhere. Can anyone explain why this is returning an empty response?
Mini rant, but hopefully, beneath my slightly disgruntled language, the devs see something useful in my friction-filled experience:
Is LlamaIndex going the same way as LangChain now? I've spent multiple days trying to implement what I would think is a stupidly simple pipeline, but between the Discord, the documentation, GitHub, ChatGPT, and other online resources, I have not been able to figure it out.
I have endpoints, embedding and chat, served via Databricks. I have a pipeline with a small GUI that lets users input documents and add tags, which then get chunked, embedded, and loaded into Milvus. I would have thought it was then trivial to use LlamaIndex and point it at the pre-loaded Milvus DB (because why on earth would the workflow be to have the user generate the embeddings at runtime before you can use them?!). But between the API changes, the deprecations, and the lack of fully fleshed-out community integration examples, it has been a pain. I've been developing with these tools since GPT-3 had its first beta API release, before LangChain had more than a few hundred stars. So I may be dumb, but I'm not a noob.
I have a bad feeling that LlamaIndex is an all-or-nothing package, meaning I have to use all its custom objects (Documents, Nodes, etc.) and use it, and only it, to load the DB and then to query it, which I'm not a huge fan of. I should be able to use it where I find it strong without having to kitchen-sink my entire application.
Hi, sorry for the noob question, but I have a pipeline that already populates a Milvus database with embeddings and content.
I have searched for a few hours but can't figure out how to point LlamaIndex at that existing DB and those existing embeddings as part of a RAG workflow.
I am using llama-index 0.11.15. I'm creating a MilvusVectorStore, and whenever I try to create an index from the vector store and pass it my Milvus object, it complains about a missing OpenAI API key, which I don't quite understand, because all my embeddings are pre-generated. I may be lost in the sauce here. Any advice? Has anyone done this before? Much appreciated, TIA 🙂