drButts
Hello, I have created an application very similar to the Uber 10-K tutorial and am moving to Qdrant as the vector store. I would like to do my data ingestion in a separate, one-time process and then run my app off of the embeddings and the llama_index index set and graph_index that I have created. I am looking for some direction on how to "reconstitute" these llama_index tools when I am not creating the embeddings within the same application as my chatbot. Most or all of the examples I have found create the embeddings in the same process as the chatbot/LangChain tools, etc.
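Once the embeddings already live in a Qdrant collection, the usual approach is to rebuild the index directly from the vector store rather than from the source documents, so the chatbot process never re-ingests anything. Below is a minimal sketch (not code from this thread), assuming the llama_index Qdrant integration and an Azure OpenAI embedding model; the collection name, Qdrant URL, and deployment details are placeholders.
Plain Text
# Minimal sketch: reconstitute a query-ready index from an existing
# Qdrant collection in a separate process (no re-ingestion).
import qdrant_client
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Placeholder connection details; point these at the real Qdrant instance.
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="my_collection")

# Must match the embedding model that was used at ingestion time.
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-embedding-deployment",
    api_key="...",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2024-02-01",
)

# The index is rebuilt from the vectors already stored in Qdrant;
# from here, build the query/chat engine exactly as in the original app.
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store, embed_model=embed_model
)
retriever = index.as_retriever(similarity_top_k=3)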
16 comments
QdrantReader pydantic error:
I am ingesting documents into Qdrant using this code:
Plain Text
qdrant_client = GetQdrantClient().get_client()

# Load the uploaded file from the per-user temp directory
logger.debug("creating reader")
reader = SimpleDirectoryReader(
    input_files=[f"./temp/{user_id}/{file_name}"]
)
documents = reader.load_data()

# Point the vector store at the existing Qdrant collection
qdrant_vector_store = QdrantVectorStore(
    client=qdrant_client, collection_name=COLLECTION_NAME
)

# Embed with the Azure embedding model and write directly into Qdrant
pipeline = IngestionPipeline(
    transformations=[AZURE_EMBEDDING], vector_store=qdrant_vector_store
)

# Tag every document so it can be filtered per user/file later
for document in documents:
    document.metadata["user_id"] = user_id
    document.metadata["file_name"] = file_name

nodes = pipeline.run(documents=documents, num_workers=4)
return
     
12 comments