Hey @Logan M
I created custom nodes and tried storing them into a Qdrant index, but I get the error below:
File "/home/tharakn/llama-index-qdrant/app/routers/parser.py", line 129, in json_parsing
index = VectorStoreIndex.from_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tharakn/llama-index-qdrant/llama_index_qdrant/lib/python3.12/site-packages/llama_index/core/indices/base.py", line 110, in from_documents
docstore.set_document_hash(doc.get_doc_id(), doc.hash)
^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'get_doc_id'
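If I'm reading the traceback right, from_documents loops over whatever it is given and calls .get_doc_id() on each item, so my guess is that the object I'm passing in contains lists instead of documents. A tiny sketch that I think would hit the same error (just my assumption about the cause, not my real data):
from llama_index.core import Document, VectorStoreIndex

# Nested input, similar in shape to what asyncio.gather() returns (one list per batch).
nested = [[Document(text="a")], [Document(text="b")]]

# from_documents() calls doc.get_doc_id() on each element, so the inner lists
# raise: AttributeError: 'list' object has no attribute 'get_doc_id'
index = VectorStoreIndex.from_documents(documents=nested)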
Here is the code that creates the custom nodes:
import multiprocessing

from llama_index.core.schema import TextNode

async def _generate_custom_nodes(json_data_list):
    # Fan the JSON records out to worker processes and collect the resulting nodes.
    with multiprocessing.Pool(processes=6) as pool:
        results = pool.map(_process_node, json_data_list)
    return results

def _process_node(json_data):
    # Build a node with empty text and metadata derived from the JSON record.
    custom_node = TextNode(
        text="",
        metadata=_get_node_dict(json_data),
    )
    return custom_node
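For reference, the output of _generate_custom_nodes is a flat list of TextNode objects. Simplified standalone version of that step (with a dummy stand-in for my _get_node_dict helper, the real one is larger):
from llama_index.core.schema import TextNode

def _get_node_dict(json_data):
    # Dummy stand-in for my real helper, which turns one JSON record into a metadata dict.
    return {"id": json_data.get("id"), "title": json_data.get("title")}

sample = [{"id": 1, "title": "foo"}, {"id": 2, "title": "bar"}]
nodes = [TextNode(text="", metadata=_get_node_dict(d)) for d in sample]
print(type(nodes), type(nodes[0]))  # <class 'list'> <class 'llama_index.core.schema.TextNode'>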
Code for running the ingestion pipeline and storing the generated custom documents in Qdrant:
import asyncio

from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

documents = await _generate_custom_nodes(json_data)

# Build one pipeline task per batch of 5 documents and run them concurrently.
pipeline_tasks = []
batch_size = 5
for i in range(0, len(documents), batch_size):
    batch_documents = documents[i:i + batch_size]
    task = generate_pipeline_tasks(batch_documents=batch_documents, llm=llm, qdrant_client=qdrant_client)
    pipeline_tasks.append(task)

results = await asyncio.gather(*pipeline_tasks)

qdrant_vector_store = QdrantVectorStore(
    client=qdrant_client,
    collection_name="llama_index_searchx",
)
storage_index = StorageContext.from_defaults(vector_store=qdrant_vector_store)

# This is the call that raises the AttributeError (parser.py line 129).
index = VectorStoreIndex.from_documents(
    documents=results,
    storage_context=storage_index,
)
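One thing I'm wondering (not sure if it's the right fix): since each pipeline task presumably returns a list of nodes, results from asyncio.gather would be a list of lists, so maybe I should flatten it and pass the nodes to the VectorStoreIndex constructor instead of from_documents, roughly like this:
# Assumes each element of results is a list of nodes returned by one pipeline task.
flat_nodes = [node for batch in results for node in batch]

index = VectorStoreIndex(
    nodes=flat_nodes,
    storage_context=storage_index,
)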
Can you help me with this issue? I tried using both the TextNode and Document schemas for the custom document generation before passing them to the ingestion pipeline, but I'm still facing the same error.