
Updated 4 months ago

Is there a simple implementation of that

At a glance
Is there a simple implementation of that in isolation? This codebase is quite messy to get my head around as a starting point. I would like to understand the basic concept.
11 comments
What's your use case?
Right now the use case is just to replicate it.
So you want to make a vector store for local files?
Exactly, and see how to do it
and then pass it to the OpenAI API
maybe a simple HTTP request with a payload
I'd say don't rely on LlamaIndex. Do this:
1) Make a ChromaDB (preferred) or Qdrant vector collection locally.
2) Use any embedding model and a chunk size suited to it to create the vector records for the DB (see the sketch of these two steps after the loading snippet below).
3) Load the vector store in your app with something like this:
Python
from llama_index.core import StorageContext, VectorStoreIndex  # llama-index >= 0.10 import paths
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

def get_vector_store(client):
    vector_store = QdrantVectorStore(
        client=client, collection_name=""  # name of your existing collection
    )
    return vector_store

client = QdrantClient(path="./qdrant_data")  # or point at a running Qdrant server
vector_store = get_vector_store(client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)  # optional; from_vector_store wires this up itself
embed_model = ...  # the same embedding model used to create the vector db
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store, embed_model=embed_model
)
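For steps 1 and 2, here is a minimal sketch of building the collection. It uses Qdrant (to match the loading snippet above) rather than ChromaDB, qdrant-client in local on-disk mode, and a sentence-transformers embedding model; the collection name, directory path, and chunk size are placeholders you'd swap for your own.
Python
# Sketch of steps 1-2: chunk local files, embed the chunks, and write the
# vectors into a local Qdrant collection. Collection name, paths, and chunk
# size are placeholders.
from pathlib import Path

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

COLLECTION = "local_files"
CHUNK_SIZE = 500  # characters per chunk; tune to your embedding model

embed_model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
client = QdrantClient(path="./qdrant_data")  # on-disk local mode, no server needed

client.create_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

points, point_id = [], 0
for file in Path("./docs").glob("*.txt"):  # your local files
    text = file.read_text()
    chunks = [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]
    for chunk, vector in zip(chunks, embed_model.encode(chunks)):
        points.append(
            PointStruct(id=point_id, vector=vector.tolist(),
                        payload={"text": chunk, "source": file.name})
        )
        point_id += 1

client.upsert(collection_name=COLLECTION, points=points)
The same code should work against a running Qdrant server if you construct the client with a URL instead of a local path, and the loading snippet above can then reuse that client.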

You can further query the vector store with a chat_engine using OpenAI. Otherwise I'd recommend standalone code for retrieving documents and passing them to the OpenAI API.
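If you take the standalone route, here is a rough sketch of the retrieve-and-pass step. It assumes the collection built as above, the openai >= 1.0 Python client, and placeholder values for the collection name, chat model, and number of results.
Python
# Sketch of the standalone route: embed the question, pull the top matching
# chunks from the local Qdrant collection, and pass them to the OpenAI API.
from openai import OpenAI
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

COLLECTION = "local_files"
embed_model = SentenceTransformer("all-MiniLM-L6-v2")  # must match the model used to build the collection
qdrant = QdrantClient(path="./qdrant_data")
oai = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What do these files say about X?"
hits = qdrant.search(
    collection_name=COLLECTION,
    query_vector=embed_model.encode(question).tolist(),
    limit=5,  # top-k chunks to retrieve
)
context = "\n\n".join(hit.payload["text"] for hit in hits)

response = oai.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)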
dm me if you have questions