@Logan M / @ravitheja / @Jerry Liu Our org has wrapped Azure OpenAI and makes the completion and embedding capabilities available via REST endpoints and keys. Is there a way to integrate that with LlamaIndex so that it refers to these as the LLM and embedding model?
I saw the docs on custom LLM and embedding models but could not find an example where REST API endpoints and tokens are used to call them.
That's kind of beyond the scope of llama-index and the docs 😅 You would just use requests.get(...) or similar to ping your REST API and get responses
This document shows that it can even be a wrapper around an API, so is there a direction to implement this?

URL: https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom/#example-using-a-custom-llm-model-advanced
(Attachment: image.png)
Yeah, it's saying that in def complete(...) you can implement things like calling a REST API
This is more general Python API stuff. I'm guessing you know how to call your API, so you just need to call it in that function
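For reference, here is a minimal sketch of what that could look like. The CustomLLM scaffolding (metadata, complete, stream_complete) follows the pattern in the linked docs page, but everything about the gateway itself is an assumption: the URLs, the Bearer-token auth, and the request/response JSON fields ("prompt"/"text" for completions) are hypothetical placeholders you would replace with your org's actual wrapper contract.

```python
from typing import Any

import requests
from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class OrgWrappedLLM(CustomLLM):
    # Hypothetical gateway details -- replace with your org's real endpoint and key.
    api_url: str = "https://your-org-gateway.example.com/v1/completions"
    api_key: str = "YOUR_KEY"
    context_window: int = 8192
    num_output: int = 256
    model_name: str = "org-azure-openai-wrapper"

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # This is the "call your REST API" part: POST the prompt, parse the text back.
        resp = requests.post(
            self.api_url,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},  # assumed request schema
            timeout=60,
        )
        resp.raise_for_status()
        return CompletionResponse(text=resp.json()["text"])  # assumed response schema

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Simplest fallback if the gateway doesn't stream:
        # yield the full completion as a single chunk.
        yield self.complete(prompt, **kwargs)
```

The embedding side of the question works the same way: subclass BaseEmbedding and route its required methods through one HTTP call. Again, the endpoint and the "input"/"embedding" JSON fields are assumed placeholders.

```python
from typing import List

import requests
from llama_index.core.embeddings import BaseEmbedding


class OrgWrappedEmbedding(BaseEmbedding):
    # Hypothetical gateway details -- replace with your org's real endpoint and key.
    api_url: str = "https://your-org-gateway.example.com/v1/embeddings"
    api_key: str = "YOUR_KEY"

    def _embed(self, text: str) -> List[float]:
        resp = requests.post(
            self.api_url,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"input": text},  # assumed request schema
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["embedding"]  # assumed response schema

    def _get_query_embedding(self, query: str) -> List[float]:
        return self._embed(query)

    def _get_text_embedding(self, text: str) -> List[float]:
        return self._embed(text)

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        return [self._embed(t) for t in texts]

    async def _aget_query_embedding(self, query: str) -> List[float]:
        # Sync call for simplicity; swap in an async HTTP client if needed.
        return self._embed(query)
```

With both classes defined, you can point LlamaIndex at them globally so indexes and query engines use your org's endpoints:

```python
from llama_index.core import Settings

Settings.llm = OrgWrappedLLM()
Settings.embed_model = OrgWrappedEmbedding()
```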