

Alright, got another one for y'all.

I'm trying to use vLLM to run a model. vLLM provides an OpenAI-compatible API, BUT I need to use a custom model name.
I can't seem to figure out how.

It seems like it'll always throw the error at line 188 in openai_utils.py.

Is that right? Is there no way to put in custom model names? If not, I'm probably going to try opening a PR for that.
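
For context, here's roughly the setup I mean (the model name, port, and import paths are just examples and depend on your vllm / llama_index versions):

```bash
# Start vLLM's OpenAI-compatible server (model name is a placeholder).
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-chat-hf \
    --port 8000
```

```python
from llama_index.llms import OpenAI

# Point LlamaIndex's OpenAI LLM at the vLLM endpoint. The model name here
# isn't one of the official OpenAI names, which is what trips the check in
# openai_utils.py.
llm = OpenAI(
    model="meta-llama/Llama-2-7b-chat-hf",
    api_base="http://localhost:8000/v1",
    api_key="not-used",  # vLLM's server doesn't require a real key by default
)
llm.complete("hello")  # errors out because the model name isn't recognized
```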
vLLM already provides an OpenAI-compatible API. I'd rather just lean on that as much as possible, y'know.

I got a child class of OpenAI working just now, so I'm probably going to open a PR for that here in a bit.
I don't want to have to make LlamaIndex queries on the same machine that's actually running the LLM.
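
Roughly what the child class looks like, if anyone wants it before the PR lands. This is a sketch against the llama_index version I'm on; the LLMMetadata import path, the hard-coded context window, and the VLLMOpenAI name are just my choices here:

```python
from llama_index.llms import OpenAI
from llama_index.llms.base import LLMMetadata


class VLLMOpenAI(OpenAI):
    """OpenAI subclass that skips the known-model-name lookup."""

    @property
    def metadata(self) -> LLMMetadata:
        # The stock OpenAI class maps the model name to a context size via
        # openai_utils, which raises on names it doesn't know. Report the
        # context window directly instead.
        return LLMMetadata(
            context_window=4096,  # set to your model's real context length
            num_output=self.max_tokens or 256,
            is_chat_model=True,
        )


# The vLLM box and the LlamaIndex app can then be separate machines; the
# client only needs network access to the endpoint. Host and model name
# below are placeholders.
llm = VLLMOpenAI(
    model="my-custom-model",
    api_base="http://gpu-box.internal:8000/v1",
    api_key="not-used",
)
print(llm.complete("Hello from a remote client"))
```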
Awesome! You can post the PR in the feature-requests or contribution channel.