Can you share the example you are trying, so that I can check?
@WhiteFang_Jr let me know if you have looked into this
Yes checking, will update shortly
Thank you so much, I am awaiting your response
@WhiteFang_Jr just to let you know, the engines I have created for Uber and Lyft are working fine individually. I am getting an issue only with the sub-question query engine
Any idea what the issue could be?
I believe it's coming from the Azure OpenAI end
But I'm not sure whether it's due to some version mismatch or something else
Looking into it, hoping I find the cause
Which GPT version are you using?
Probably it's trying to use function calling, but your deployed version doesn't have it yet
I can update my version to 0613, which is the only one available apart from 0301
Worth a try. The SubQuestionQueryEngine code was written to handle this case, but it sounds like that's not working
Did you set up the service_context?
I'm not seeing it in your code
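For reference, a minimal sketch of what that setup could look like, assuming a llama_index 0.8/0.9-era API; the deployment name, endpoint, and api_version below are placeholders, not values from this chat:

```python
# Hedged sketch: route everything through the Azure LLM so the
# sub-question engine doesn't fall back to the default OpenAI client.
from llama_index import ServiceContext
from llama_index.llms import AzureOpenAI

llm = AzureOpenAI(
    model="gpt-35-turbo",
    deployment_name="my-gpt-35-turbo",  # must match your Azure deployment id (assumption)
    api_key="<azure-api-key>",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2023-07-01-preview",  # pick a version that supports function calling
)

# Pass this service_context into your indices / query engines.
service_context = ServiceContext.from_defaults(llm=llm)
```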
It could be the reason. Just found this in the LLM call:
Yes, but if function calling fails, it should fall back, so there's an issue there... upgrading your API version to one that supports function calling should work for now
Even the default prompt mentions using function calling
Is there any temporary fix that I can use for now?
I have tried both model versions, 0613 and 0301,
of gpt-35-turbo, and I'm getting the same error in both
SubQuestionQueryEngine is using OpenAIPydanticProgram under the hood
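Since that Pydantic program relies on OpenAI function calling, one possible workaround is swapping in the prompt-based question generator. This is a hedged sketch against a llama_index 0.8/0.9-era API; `query_engine_tools` and `service_context` are assumed to already exist in your code:

```python
# Hedged sketch: use the plain-prompt LLMQuestionGenerator so sub-question
# generation avoids OpenAI function calling entirely.
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.question_gen.llm_generators import LLMQuestionGenerator

question_gen = LLMQuestionGenerator.from_defaults(
    service_context=service_context,  # your Azure-backed service context (assumption)
)

engine = SubQuestionQueryEngine.from_defaults(
    question_gen=question_gen,
    query_engine_tools=query_engine_tools,
    service_context=service_context,
)
```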
@WhiteFang_Jr any fix for now?
A fast solution would be to use a lower version of LlamaIndex
Do you know which version should fix this, or do I have to go with trial and error?
Try something lower than 0.8 and see if it works
@WhiteFang_Jr tried at least 10 versions, nothing is working out
@WhiteFang_Jr, should I raise this as a bug, or have you already raised it?
Hey, yes, please raise this as an issue
@Shubham1696 Are you sure that your deployed version is a 0613 one?
I'll set up an Azure account and do my own tests here
I tried earlier with 0613 but reverted back; let me try again because I have made a lot of changes. If it doesn't work out then you can set it up. Thank you for responding
No worries. If I understand correctly, you need to have a 0613 version deployed on Azure, and then pass that deployment ID in your code
So if you have a different version deployed but try to use 0613, it probably will not work
But I'm not 100% sure since I don't have access to Azure yet
I am using the AzureOpenAI module instead of OpenAI from llama_index.llms. That's fine, right?
No, I have only one, and I have been changing its version
Actually, this one is just using the LLM from Azure, which is working fine for me. I am even able to query over the index using the Azure OpenAI LLM, but as soon as I hit the sub-question query engine, it fails with that error
You are running through a notebook, right?
I fixed the above issue and now it's generating sub queries
I think we are close to a fix
Maybe try removing the await if you're not using async
I imported nest_asyncio and applied it to fix the previous error
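For context, the sub-question engine dispatches its sub-queries with asyncio, and a notebook already has a running event loop. A minimal sketch of the pitfall, with a stand-in coroutine in place of the engine's real async call:

```python
import asyncio

async def answer_subquestion():
    # Stand-in for the engine's internal async sub-query call (assumption)
    return "sub-answer"

# In a plain script there is no running loop, so this works:
result = asyncio.run(answer_subquestion())
print(result)  # sub-answer

# Inside Jupyter a loop is already running, so the same pattern raises
# "RuntimeError: This event loop is already running" unless you first do:
#   import nest_asyncio
#   nest_asyncio.apply()
```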
try to do
response = query_engine.query(question)
Actually, I have done the same
Can you share the entire code? Then I can understand better what's being used
Copy the entire traceback (error) and paste it here please; I'm not able to see it fully
I have just fixed the issue @Emanuel Ferreira. I hadn't created the query engine before calling query; that was what was missing
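A hedged reconstruction of the missing wiring step, against a llama_index 0.8/0.9-era API; `uber_engine`, `lyft_engine`, and `service_context` are assumed to exist, and the tool names are illustrative:

```python
# Hedged sketch: build the sub-question engine from the two per-company
# engines before calling .query() on it.
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.tools import QueryEngineTool, ToolMetadata

query_engine_tools = [
    QueryEngineTool(
        query_engine=uber_engine,  # existing Uber index engine (assumption)
        metadata=ToolMetadata(name="uber_10k", description="Uber financials"),
    ),
    QueryEngineTool(
        query_engine=lyft_engine,  # existing Lyft index engine (assumption)
        metadata=ToolMetadata(name="lyft_10k", description="Lyft financials"),
    ),
]

query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    service_context=service_context,
)
response = query_engine.query("Compare Uber's and Lyft's revenue")
```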
Thank you so much for accelerating and getting this fixed!