
Updated 2 months ago

What does the create-llama CLI tool create and how to integrate it into an existing app?

hi gurus! I'm looking to try out the create-llama CLI tool, but I'm not sure what it creates or how to integrate what it generates into an existing app. I already have a FastAPI backend plus a vector db and graph db set up, and now I want to build the LlamaIndex layer to manage chunking, inserting, retrieval, etc. I'm using Neo4j for the graph db, with all the nodes and relationships specified, and I want to use an LLM with LlamaIndex to extract the nodes and properties. Can someone point me to a tutorial or blog post that presents the process I should adopt? Thanks kindly
6 comments
If you already have a backend set up, I'm not sure it's the right thing to try and use,
since the backend it creates is pretty tightly coupled to the frontend it creates.
You'd have to figure out how to hook your existing backend into the frontend it generates, imo.
Thank you for shedding light. What approach do you recommend? I have everything built to extract documents and hand them off to LlamaIndex. Right now I'm only interested in the chunking/embedding/upserting, as I'm finalizing my ETL workflow. Once I get the ingestion solid I'll turn to the frontend. I want to use a custom embedding model.
Seems like building that into a FastAPI backend is the right approach, right?
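For what it's worth, the chunk → embed → upsert loop being described can be sketched roughly as below. Everything here (`chunk_text`, `embed`, `VectorStore`) is a hypothetical stand-in written for illustration, not a LlamaIndex API; in practice LlamaIndex's ingestion pipeline with a custom embedding model would fill these roles.

```python
# Sketch of the chunk -> embed -> upsert ETL shape. All names here are
# illustrative stand-ins, NOT LlamaIndex APIs.

def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character windows."""
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def embed(chunk: str) -> list[float]:
    """Stand-in for a call to the custom embedding model."""
    return [float(len(chunk))]  # placeholder vector

class VectorStore:
    """Stand-in for the real vector db client."""
    def __init__(self) -> None:
        self.rows: dict[int, tuple[list[float], str]] = {}

    def upsert(self, chunk_id: int, vector: list[float], chunk: str) -> None:
        self.rows[chunk_id] = (vector, chunk)

def ingest(text: str, store: VectorStore) -> int:
    """One ETL pass: chunk, embed, upsert. Returns number of chunks stored."""
    for i, chunk in enumerate(chunk_text(text)):
        store.upsert(i, embed(chunk), chunk)
    return len(store.rows)
```

The real implementation would swap `embed` for the custom model and `VectorStore.upsert` for the actual vector db client, but the control flow of the FastAPI ingestion endpoint stays this shape.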

If you wanted to make this super scalable, you could set up a job queue and job workers, and scale that with Kubernetes.

Otherwise, you can do the processing directly in a single FastAPI server.
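The job-queue pattern mentioned above can be sketched with a stdlib queue and worker threads standing in for a real broker (e.g. Celery or Redis) and Kubernetes-scaled worker pods; `process_document` is a hypothetical placeholder for the chunk/embed/upsert work.

```python
# Sketch of the job-queue + workers pattern, stdlib only. In production
# the queue would be a broker (Celery/Redis/etc.) and each worker a pod.
import queue
import threading

def process_document(doc: str, results: list[str]) -> None:
    """Hypothetical stand-in for the real ingestion work on one document."""
    results.append(doc.upper())

def worker(jobs: "queue.Queue[str | None]", results: list[str]) -> None:
    while True:
        doc = jobs.get()
        if doc is None:  # sentinel: shut this worker down
            jobs.task_done()
            break
        process_document(doc, results)
        jobs.task_done()

def run_etl(docs: list[str], n_workers: int = 2) -> list[str]:
    jobs: "queue.Queue[str | None]" = queue.Queue()
    results: list[str] = []
    threads = [
        threading.Thread(target=worker, args=(jobs, results))
        for _ in range(n_workers)
    ]
    for t in threads:
        t.start()
    for doc in docs:
        jobs.put(doc)
    for _ in threads:
        jobs.put(None)  # one sentinel per worker
    jobs.join()
    for t in threads:
        t.join()
    return results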
Yeah, I think so. Can you point me to a more involved demonstration or tutorial for implementing LlamaIndex that breaks it down into backend services, or at least two services, one for vectors and one for graph? This is a reporting app, so traffic/scaling is not an issue: the ETL runs once a day and the reports are generated once a day or once a week.
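The two-service split asked about here could look roughly like the sketch below: one service owning the vector store, one owning the Neo4j graph, with a daily batch driver calling both. `VectorService`, `GraphService`, and their methods are hypothetical names for illustration, not LlamaIndex or Neo4j driver APIs.

```python
# Hedged sketch of splitting ingestion into two services: one for
# vectors, one for the graph. All class/method names are hypothetical.

class VectorService:
    """Would wrap the vector db client plus the custom embedding model."""
    def __init__(self) -> None:
        self.chunks: list[str] = []

    def ingest(self, doc: str) -> None:
        self.chunks.append(doc)  # real impl: chunk, embed, upsert

class GraphService:
    """Would wrap the Neo4j driver plus LLM-based node/property extraction."""
    def __init__(self) -> None:
        self.nodes: list[str] = []

    def extract(self, doc: str) -> None:
        self.nodes.append(doc)  # real impl: LLM extracts nodes/relationships

def daily_etl(docs: list[str], vectors: VectorService, graph: GraphService) -> None:
    """Once-a-day batch: each service processes every new document."""
    for doc in docs:
        vectors.ingest(doc)
        graph.extract(doc)
```

Since the ETL runs only once a day, both services could just be modules inside one FastAPI app rather than separate deployments; the split is about ownership of the two stores, not scaling.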