hi gurus! I'm looking to try out the create-llama CLI tool, but I'm not sure what it creates or how to integrate what it generates into an existing app. I already have a FastAPI backend with a vector DB and a graph DB set up, and now I want to build the LlamaIndex layer to manage chunking, inserting/retrieval, etc. I'm using Neo4j for the graph DB and have all the nodes and relationships specified, and I want to use an LLM with LlamaIndex to extract the nodes and properties. Can someone point me to a tutorial or blog post that presents the process I should adopt? Thanks kindly
Thank you for shedding light on this. What approach do you recommend? I have everything built to extract documents and hand them off to LlamaIndex. Right now I'm only interested in the chunking/embedding/upserting, since I'm finalizing my ETL workflow. Once the ingestion is solid, I'll turn to the frontend. I want to use a custom embedding model.
Yeah, I think so. Can you point me to a more involved demonstration or tutorial for implementing LlamaIndex broken down into backend services, or at least two services: one for vectors and one for the graph? This is a reporting app, so traffic/scaling is not an issue. The ETL will run once a day, and the reports are generated once a day or once a week.