How would I do that for llama2? Would I have to host my llama2 model somewhere first, or are there services out there that already host open-source models, so I can just use their API?
hey, no silly question here:

you can try using Hugging Face or Replicate for open-source LLMs.
you'd need to get an API key to use their services, but once you have that, you can use our LLM integrations for either of the two
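As a quick sketch of the setup step mentioned above: both services expect their API key in an environment variable that the client libraries read automatically. `REPLICATE_API_TOKEN` is the variable Replicate's client looks for; `HF_TOKEN` is the one the Hugging Face Hub client uses. The token values below are placeholders, not real keys.

```shell
# Replicate: token from your account settings at replicate.com
export REPLICATE_API_TOKEN="r8_your_token_here"

# Hugging Face: token from huggingface.co account settings
export HF_TOKEN="hf_your_token_here"
```

With these set, the client libraries pick up the credentials without any extra configuration in code.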

What does replicate do exactly?
Is it a cloud compute service for LLMs?
Thanks for replying btw
Yup, they're an inference service for LLMs: https://replicate.com/
they even offer fine-tuning services too
so instead of hitting OpenAI's API, for example, to query an LLM, you can use Replicate in a similar way to query an open-source LLM of your choosing (by passing the name of any model they support)
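To make the comparison above concrete, here is a minimal sketch of querying a hosted llama2 model through Replicate's Python client (`pip install replicate`). The function name and the `max_new_tokens` parameter are illustrative choices, and it assumes `REPLICATE_API_TOKEN` is set in your environment and that the model identifier is one Replicate currently hosts.

```python
def query_llama2(prompt: str) -> str:
    """Query a hosted open-source LLM on Replicate, analogous to
    calling OpenAI's API but with a model name you choose."""
    # Third-party client; imported lazily so this sketch loads without it installed.
    import replicate

    # replicate.run yields chunks of generated text for language models.
    chunks = replicate.run(
        "meta/llama-2-7b-chat",
        input={"prompt": prompt, "max_new_tokens": 128},
    )
    return "".join(chunks)
```

Swapping in a different open-source model is just a matter of changing the model identifier string.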