Query Engine / Index Connection Errors

Having trouble actually running any queries on my query engine...

I get various errors like a 404 error or APIConnectionError depending on how I query the query engine (or when it's wrapped by a Context Augmented Agent). I've attached my code here in a text file because I don't think it'll fit. (The traceback is included in the code as well.)

PLEASE TAKE NOTE OF MY COMMENTS IF YOU READ MY CODE.
YOU WILL ALSO NEED TO DOWNLOAD THE FILE; COMPANY SECURITY IS DOING WEIRD STUFF TO THE PREVIEW.

Agent Error: Exception: APIConnectionError: Connection error.

I went ahead and tested my connection info to my AzureOpenAI class/wrapper via LangChain, and it works fine on its own: in a simple notebook I create the object and prompt it. But when it's wrapped in an index/engine, it starts to have connection issues, as shown in my code/traceback.
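For context, here's roughly the shape of what I'm doing on the LlamaIndex side (config values are placeholders, and the exact AzureOpenAI kwargs vary a bit by llama_index version):

Plain Text
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import AzureOpenAI

# Placeholder Azure config -- swap in your real deployment/endpoint/key
llm = AzureOpenAI(
    engine="my-deployment-name",
    model="gpt-35-turbo",
    api_key="<api-key>",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2023-07-01-preview",
)

# Pass the LLM through a ServiceContext so the index/engine uses the
# Azure connection instead of silently defaulting to plain OpenAI
service_context = ServiceContext.from_defaults(llm=llm)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()

(Embeddings default to OpenAI too, so the embed model may need the same treatment, depending on version.)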
Apologies for the pester - I think this is getting buried. @Logan M any ideas?
I think you need to pass the llm into the ContextRetrieverOpenAIAgent constructor
I see. That wasn't in this guide.
https://docs.llamaindex.ai/en/stable/examples/agent/openai_agent_context_retrieval.html
So would I add an extra param to this example:

Plain Text
context_agent = ContextRetrieverOpenAIAgent.from_tools_and_retriever(
    query_engine_tools,
    context_index.as_retriever(similarity_top_k=1),
    verbose=True,
)

And in this extra example I pass an LLM - I assume for the agent itself, not just the tools?
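i.e., something like this (a sketch; I'm assuming from_tools_and_retriever accepts an llm kwarg):

Plain Text
context_agent = ContextRetrieverOpenAIAgent.from_tools_and_retriever(
    query_engine_tools,
    context_index.as_retriever(similarity_top_k=1),
    llm=llm,  # explicitly pass the agent's LLM instead of relying on the default
    verbose=True,
)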
The demo assumes OpenAI, I guess lol
Thank you good sir. Trying that now
Alright, had a lot more issues after this, but you were right - I did need to add an LLM kwarg. There were two extra issues I encountered (and fixed):
  • SSL errors (I had to go download an OpenAI SSL cert and add it to my local Python venv)
  • This type of agent seems to only accept OpenAI models, not AzureOpenAI models (which is what we're using). I do have access to an OpenAI API key, so I went with that, but I was surprised to see it baked into the source code that it only accepts OpenAI models. (See below)
Line 118 (from the source you linked above):
Plain Text
if not isinstance(llm, OpenAI):
  raise ValueError("llm must be a OpenAI instance")
I'll post about this SSL cert thing separately, since I suspect others will encounter it and it was (surprisingly) easy to fix.
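For anyone hitting the same thing, the gist of the fix was pointing Python's HTTP stack at the downloaded cert bundle before any clients get created (the path below is hypothetical):

Plain Text
import os

# Hypothetical path to the downloaded certificate bundle (.pem);
# set these before constructing any OpenAI/LlamaIndex clients
os.environ["SSL_CERT_FILE"] = "/path/to/corp-ca-bundle.pem"       # picked up by ssl / httpx
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/corp-ca-bundle.pem"  # picked up by requests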
It's working now though lol
I found this thread looking for another case of isinstance(llm, OpenAI), this time in rags/core/agent_builder/utils.py. The result is that, when I was trying to use this very convenient RAGs app with a local LLM (via Ollama in this case), it started using the ReActAgent protocol instead of the function-calling protocol that OpenAI supports. But other LLMs support the function protocol too.

Why is it that, very often, I stumble over yet another OpenAI dependency? This is not a problem with LlamaIndex specifically; almost everything AI-related seems to suffer the same bias. General request: let's try to include high-level comments and constants regarding such dependencies in each file and project. Then we will have an easier time figuring out how to generalize things to use local LLMs.
AzureOpenAI is a subclass of OpenAI, so it should be fine (assuming you are using LlamaIndex LLM classes)

Plain Text
>>> from llama_index.llms import OpenAI, AzureOpenAI
>>> openai = OpenAI()
>>> azure = AzureOpenAI(engine="fake")
>>> isinstance(openai, OpenAI)
True
>>> isinstance(azure, OpenAI)
True
Which LLMs support the same function protocol as OpenAI? Happy to help make it work in the framework
So far I have only modified builder_config.py, as suggested by the instructions at https://github.com/run-llama/rags. But it seems a lot more will need to be configured elsewhere.
The reason for common OpenAI dependencies is that using open-source LLMs usually means a lot of the more interesting use-cases aren't really possible. Open-source LLMs are nowhere close to being as powerful, despite what influencers might try to tell you. I've yet to try an open-source LLM that was reliable as an agent :PSadge:
The limitations of most open-source LLMs are clear. But that is changing rapidly. Many are very good at certain aspects of the overall problem, while not being as all-encompassing as OpenAI models. A good alternative strategy is to use several smaller models that are each very good at a particular skill - similar to what Mixtral does, but not wrapped in a single LLM.

Projects that implicitly depend on OpenAI should be more explicit about that dependency. To the extent they try to be more general, that's great - but please call out the known limitations, limited testing, etc.
https://www.reddit.com/r/LocalLLaMA/comments/16bik9d/best_open_source_model_for_function_calling/ Not sure if they support the "same" function protocol as OpenAI, but there could be multiple ways of doing it. I expect we will see standards come out regarding various protocols for talking with LLMs.
Yeah, until people settle on a common API for this, ReAct is the best alternative at the moment. Although structured ReAct would probably be an improvement (using stuff like guidance or llm-enforcer), and it's something on my list.

I have a feeling eventually everyone will copy OpenAI's function-calling API into servers like vLLM and TGI, and then everything will be easy after that πŸ™‚
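In the meantime, a ReAct agent over the same tools with a local LLM looks roughly like this (a sketch; assumes a local Ollama server with the named model already pulled, and reuses the query_engine_tools list from earlier):

Plain Text
from llama_index.agent import ReActAgent
from llama_index.llms import Ollama

# Any locally served model works here; "mistral" is just an example
llm = Ollama(model="mistral")
agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)
response = agent.chat("your question here")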
I expect we'll see a lot of rapid evolution in all areas. Regarding function calling (which wasn't even my reason for ending up here - rather, it was RAGs, which internally requires function calling, I guess), here is one promising alternative: https://langroid.github.io/langroid/quick-start/chat-agent-tool/
I'll have to try it again, but every time I used AzureOpenAI it failed, while OpenAI was fine.