
I have an Azure deployment of ChatGPT that uses GPT-4. I want to use it with OpenAIAgent, but I keep getting errors. For one, if I try using LangChain's ChatOpenAI or AzureChatOpenAI model while specifying an engine, I get the error LLM must be of type "OpenAI". If I try using AzureOpenAI from llama_index.llms, I get "must provide engine or deployment_id parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>". Does LlamaIndex have a ChatOpenAI? Or is there any way to use ChatOpenAI from LangChain with OpenAIAgent?
If you are using Azure, follow the page here; it should work fine for GPT-4.

Setting the global service context solves a lot of issues with Azure.

https://gpt-index.readthedocs.io/en/stable/examples/customization/llms/AzureOpenAI.html
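That page's setup is roughly this (a sketch against the pre-1.0 openai client and llama_index ~0.8; the endpoint, API version, and deployment names are placeholders, and exact kwargs may differ by version):

```python
import os
import openai
from llama_index import ServiceContext, set_global_service_context
from llama_index.embeddings import OpenAIEmbedding
from llama_index.llms import AzureOpenAI

# Azure credentials/endpoint (placeholders, use your own resource values)
openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2023-07-01-preview"
openai.api_key = os.environ["OPENAI_API_KEY"]

# "engine" is the name of your Azure deployment, not the model name
llm = AzureOpenAI(
    engine="my-gpt-4-deployment",  # placeholder deployment name
    model="gpt-4",
    temperature=0.0,
)

# embeddings also need their own Azure deployment name
embed_model = OpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-embedding-deployment",  # placeholder deployment name
    api_key=openai.api_key,
    api_base=openai.api_base,
    api_type=openai.api_type,
    api_version=openai.api_version,
)

# register both on a service context and make it global,
# so every index, query engine, and agent picks it up
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
set_global_service_context(service_context)
```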
I am using the lines of code from that page, yeah
Still getting those errors 😦
can you share the setup?
Unfortunately I can't, since it's an app for my company
But yeah, I'm basically doing what that link says and trying to do await agent.achat(), and I get those errors
It just seems like even though I set the engine, set the environment variables, and set the service context,
it just doesn't propagate to openai when it makes the chat completion request
LLM setup code is not really confidential, no? Lol

Anyways. So you've called set_global_service_context(..) ?

And you've imported AzureOpenAI with from llama_index.llms import AzureOpenAI?

Are you using it in a threaded application?

You set up the embeddings before the LLM?
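For reference, the wiring usually looks like this once the globals above are set (a sketch only: the tool and the deployment name are made up, and it passes llama_index's AzureOpenAI straight to the agent instead of LangChain's ChatOpenAI, which is what trips the LLM must be of type "OpenAI" check):

```python
import asyncio
from llama_index.agent import OpenAIAgent
from llama_index.llms import AzureOpenAI
from llama_index.tools import FunctionTool

# hypothetical tool, just so the agent has something to call
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

# llama_index's AzureOpenAI subclasses its OpenAI LLM, so the agent accepts it
# (assumes the Azure openai globals from the snippet above are already set)
llm = AzureOpenAI(engine="my-gpt-4-deployment", model="gpt-4", temperature=0.0)

agent = OpenAIAgent.from_tools([multiply_tool], llm=llm, verbose=True)

async def main() -> None:
    # achat is the async entry point the question was using
    response = await agent.achat("What is 6 times 7?")
    print(response)

asyncio.run(main())
```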
I couldn't access Discord from my work laptop and was texting from my phone
Let me see if I can get code for you