
Updated 4 months ago

Hey guys, what do you use to host your llamaindex apps/API? AWS?
14 comments
Did you face any issues hosting, or is this kind of a survey? 😅
Not a survey 😅. I was going to go with AWS, but it seemed very costly, so I thought I'd ask what everyone is using.
Ended up going for g4dn.xlarge
Hosting an open-source LLM?
just a RAG application
using LlamaIndex
an API basically (FastAPI) that spins up LlamaIndex for an endpoint
I see. Since you have purchased such a big machine, I thought you were hosting your LLM as well.
Haven't selected it yet, but g4dn.xlarge seemed to be the cheapest GPU option.
Yeah, if you are only working on a PoC and are open to experimenting with OpenAI as well, then don't buy this machine yet.

Also, if you want to use an open-source LLM, you can test out its capabilities on Google Colab first.
Once you finalize, you can opt for the machine.
Yeah, I have been using external APIs for running the language model.

I am using Weaviate for storing indexes. On a CPU instance, indexing seems to take a lot of time. As per their docs, a GPU helps. My personal laptop has one, which is why I never ran into this problem before.
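For context, wiring LlamaIndex to Weaviate with a local embedding model might look roughly like the sketch below. The embedding step is typically the slow, compute-bound part on CPU, which is what the GPU speeds up. Everything here is an assumption, not the poster's code: the import paths match recent `llama-index` releases and differ in older ones, the index name `Docs` and model `BAAI/bge-small-en-v1.5` are illustrative, and `weaviate.connect_to_local()` assumes a v4 `weaviate-client` talking to a local instance.

```python
# Hedged sketch: LlamaIndex + Weaviate vector store + local GPU embeddings.
# All names and import paths are assumptions; check them against your
# installed llama-index and weaviate-client versions.
def build_weaviate_index(data_dir: str = "./data", device: str = "cuda"):
    import weaviate
    from llama_index.core import (
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
    )
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.vector_stores.weaviate import WeaviateVectorStore

    client = weaviate.connect_to_local()  # assumes Weaviate running locally
    vector_store = WeaviateVectorStore(
        weaviate_client=client, index_name="Docs"
    )
    storage = StorageContext.from_defaults(vector_store=vector_store)

    # Embedding the chunks is the CPU-bound step; device="cuda" moves the
    # embedding model onto the GPU, which is why indexing was fast on a
    # laptop with a GPU and slow on a CPU-only instance.
    embed_model = HuggingFaceEmbedding(
        model_name="BAAI/bge-small-en-v1.5", device=device
    )

    docs = SimpleDirectoryReader(data_dir).load_data()
    return VectorStoreIndex.from_documents(
        docs, storage_context=storage, embed_model=embed_model
    )
```

One alternative worth noting: using a hosted embedding API (like the external LLM APIs mentioned above) sidesteps the GPU requirement entirely, at the cost of per-call pricing.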