@kapa.ai Howdy. Let's say I've tested my index locally and it works. Now it's time to move to production and host it somewhere, what do people typically do? Any pointers to documentation or blog posts would be very much appreciated.
Please stand by @Rouzbeh as I search the knowledge base. We are experiencing especially high traffic right now and might take longer than usual. :wave:
----------------------
To deploy your LlamaIndex web app, you can use Docker and Docker Compose. If you're using Ubuntu or another common Linux distribution, you can follow DigitalOcean's tutorials on installing Docker and Docker Compose. If those don't work for you, try the official Docker documentation (https://docs.docker.com/engine/install/).

For a full-stack web app example with LlamaIndex, you can refer to the Fullstack Application Guide (https://gpt-index.readthedocs.io/en/latest/guides/tutorials/fullstack_app_guide.html) and the LlamaIndex Starter Pack (https://github.com/logan-markewich/llama_index_starter_pack) repository, which includes flask_demo.py, index_server.py, a requirements.txt file, and a sample Dockerfile to help with deployment.
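
For a rough idea of what that looks like, here is a minimal sketch of a Flask query server in the spirit of index_server.py. It assumes an index that was built locally and persisted to ./storage; the exact llama_index import paths, the endpoint name, and the port are assumptions and may differ between versions.

```python
# Minimal sketch of a Flask wrapper around a persisted LlamaIndex index.
# Assumes the index was built locally and saved with index.storage_context.persist("./storage");
# import paths vary between llama_index versions.
from flask import Flask, request, jsonify
from llama_index import StorageContext, load_index_from_storage

app = Flask(__name__)

# Load the index once at startup rather than on every request.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine()

@app.route("/query", methods=["GET"])
def query():
    text = request.args.get("text", "")
    if not text:
        return jsonify({"error": "missing 'text' parameter"}), 400
    response = query_engine.query(text)
    return jsonify({"response": str(response)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5601)
```

Loading the index once at startup keeps per-request latency down; the same pattern carries over to FastAPI.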

Remember that these resources are just examples, and you might need to adapt them to your specific use case and infrastructure.
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------
Yeah, right now LlamaIndex fits best as an API using FastAPI or Flask, usually deployed with Docker
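
For the Docker side (the starter pack above ships a sample Dockerfile), a container for an API like the Flask sketch earlier might look roughly like this; the file names app.py and requirements.txt, the base image, and the port are assumptions, not taken from the thread.

```dockerfile
# Hypothetical Dockerfile for a Flask/FastAPI LlamaIndex wrapper;
# adjust file names, Python version, and port to your project.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (assumed to live in app.py).
COPY . .

EXPOSE 5601
CMD ["python", "app.py"]
```

Something like `docker build -t llama-index-api .` followed by `docker run -p 5601:5601 llama-index-api` would let you test the container locally before pushing it to a host such as Heroku or AWS.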
Got it - where do you recommend deploying it? Heroku and the like, I mean...
Heroku is an easy option for sure. Could also do something on AWS. Those are the only two I've actually tried myself, but there are other options out there of course
got it - thanks for the tips