Hi, the SubQuestionQueryEngine is resulting in an API timeout error. Is there any way to increase the timeout, since the LLM is taking a very long time to generate a response?
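A minimal sketch of one possible workaround, assuming the LlamaIndex OpenAI wrapper accepts a `timeout` argument (older releases may name it `request_timeout`; check your installed version):

```python
# Hedged sketch: raise the underlying OpenAI client timeout so long
# LLM generations do not abort the SubQuestionQueryEngine call.
# Assumes a llama_index version whose OpenAI wrapper takes `timeout`
# (in some older releases the parameter is `request_timeout`).
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", timeout=120.0)  # timeout in seconds

# Pass this llm into whatever service context / settings object is used
# to build the SubQuestionQueryEngine, so the sub-question calls inherit it.
```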
Hi, I have been using LlamaIndex for a long time now. My code was working well, but after updating llamaindex and openai, the same code has stopped working. I have fixed most of it by updating the code, but now, when I use a VectorStoreIndex as a chat engine or query engine, I get an unsupported protocol error: "Request URL is missing". It works fine if I use the VectorStoreIndex as a retriever. Can anyone help me fix this? I need it urgently. Thanks.
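One thing worth checking (a hedged sketch, not a confirmed fix): after the openai v1 migration, an empty or unset API base can surface as a "Request URL is missing" error when the LLM is invoked, which would explain why the retriever path (no LLM call) still works. Explicitly configuring the LLM rules this out; the model name and base URL below are placeholders:

```python
import os
from llama_index.llms.openai import OpenAI

# Sanity check: confirm the key is visible in this process.
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

# Explicit api_base, so the client cannot end up with an empty request URL.
llm = OpenAI(model="gpt-3.5-turbo", api_base="https://api.openai.com/v1")

# `index` is the existing VectorStoreIndex; passing llm here is an
# assumption that may vary by llama_index version.
chat_engine = index.as_chat_engine(llm=llm)
```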
While using the LlamaIndex SubQuestionQueryEngine with the Guidance question generator, I am getting this error. I am following the demo shown on the LlamaIndex blog exactly, but I get this error every time.
When I use GPT-4 in the sub-question query engine, it does not generate sub-questions for some cases, while GPT-3.5 Turbo is able to do so for the same query. Why is this happening?
How can I move my index, which was created from documents and is stored in memory, to a vector database without reprocessing the documents? I am loading the index using a StorageContext and trying to write it to an Elasticsearch database, but it is not working.
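A sketch of one possible migration path, assuming the persisted docstore still holds the parsed nodes (class names, the Elasticsearch store import path, and the host/index names below are assumptions that may differ by version):

```python
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage

# Load the existing index that was persisted to disk from memory.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
old_index = load_index_from_storage(storage_context)

# Pull the already-parsed nodes out of the docstore so the source
# documents do not have to be re-loaded and re-chunked.
nodes = list(old_index.docstore.docs.values())

# Hypothetical Elasticsearch setup -- URL and index name are placeholders.
from llama_index.vector_stores.elasticsearch import ElasticsearchStore
es_store = ElasticsearchStore(index_name="my-index", es_url="http://localhost:9200")
new_storage = StorageContext.from_defaults(vector_store=es_store)

# Caveat: this rebuild may still re-embed the nodes unless their
# embeddings were preserved; only the loading/parsing work is skipped.
new_index = VectorStoreIndex(nodes, storage_context=new_storage)
```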
I am getting this error while using a query engine over a SQL database for some of the queries. How can I resolve this? For defining the schema, I used SQLTableSchema from the LlamaIndex objects module.
The SubQuestionQueryEngine doesn't work with Azure OpenAI, but it works with OpenAI. I raised this issue here earlier and also filed a bug on GitHub, where I received an automated response telling me to use the 2023-07-01-preview API version, which I have used from the start, and I am still facing the issue. Can someone help me actually resolve this completely? I am not expert enough to go and update the LlamaIndex code myself. Please help me resolve this, as it would be very helpful for my urgent work. Thanks.
I am using the SubQuestionQueryEngine with Azure OpenAI and getting this error: "Unrecognized request argument supplied: functions". I am using the same code that was shared by @jerryjliu0 for the comparison between the 10-K documents of Uber and Lyft; the only change is that I am using Azure OpenAI instead of OpenAI. Could this be causing the bug? If not, please help me with a fix, as I have been trying for half a day now.
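For reference, a hedged sketch of the Azure configuration this setup appears to need: the "Unrecognized request argument supplied: functions" error typically means the deployed model snapshot or API version does not support function calling. The deployment name, endpoint, and model below are placeholders, not values from the original code:

```python
from llama_index.llms.azure_openai import AzureOpenAI

# Assumptions: the deployment must point at a function-calling-capable
# model snapshot (0613 or later), and api_version must be one that
# accepts the `functions` request argument.
llm = AzureOpenAI(
    engine="my-gpt-35-deployment",  # placeholder: your Azure deployment name
    model="gpt-35-turbo",           # placeholder: must be a 0613+ snapshot
    azure_endpoint="https://<resource>.openai.azure.com/",  # placeholder
    api_version="2023-07-01-preview",
)
```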