Find answers from the community

Andrew
Offline, last seen 3 months ago
Joined September 25, 2024
I'm trying to troubleshoot an issue with my query pipeline. I want the join to run on the output of worker_llms (LLMs run in parallel), but the join always seems to run right after worker_query and before the worker_llms. When I visualize the DAG, it looks correct.

Plain Text
import os

from llama_index.core import Settings
from llama_index.core.query_pipeline import ArgPackComponent, QueryPipeline
from pyvis.network import Network

# WORKER_PROMPT_TMPL and JUDGE_PROMPT_TMPL are defined elsewhere.

def get_judge_engine() -> QueryPipeline:
    # Setup LLMs
    judge_llm = Settings.llm
    worker_llms = {}
    num_workers = int(os.getenv("NUM_WORKERS", "5"))
    for i in range(num_workers):
        worker_llms[str(i)] = Settings.llm
    
    # Construct the query pipeline
    p = QueryPipeline(verbose=True)

    # Define the pipeline nodes.
    module_dict = {
        **worker_llms,
        "worker_query": WORKER_PROMPT_TMPL,
        "judge_query": JUDGE_PROMPT_TMPL,
        "llm_judge": judge_llm,
        "join": ArgPackComponent(),
    }
    p.add_modules(module_dict)

    # Add links between nodes 
    for i in range(num_workers):
        p.add_link("worker_query", str(i))
        p.add_link(str(i), "join", dest_key=str(i))

    p.add_link("join", "judge_query", dest_key="context_str")
    p.add_link("judge_query", "llm_judge")
   
    # Generate visualization
    net = Network(directed=True)
    net.from_nx(p.dag)
    net.save_graph("rag_dag.html")

    # Return the final pipeline.
    return p
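
The link structure above does describe the intended order; a minimal stdlib sketch of the same edges (using graphlib, and 3 workers instead of 5 for brevity) confirms that join sorts after every worker, so any out-of-order execution would not come from the links themselves:

```python
from graphlib import TopologicalSorter

# Rebuild the same edge structure as the pipeline above with plain
# stdlib graphlib (3 workers here instead of 5, for brevity).
num_workers = 3
ts = TopologicalSorter()
for i in range(num_workers):
    ts.add(str(i), "worker_query")  # worker_query feeds each worker LLM
    ts.add("join", str(i))          # each worker LLM feeds the join
ts.add("judge_query", "join")
ts.add("llm_judge", "judge_query")

order = list(ts.static_order())
print(order)
```

In every valid topological order of these edges, "join" comes after all of "0".."2" and before "judge_query" and "llm_judge".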
7 comments
Andrew
Query pipeline

Is there a way to use the QueryPipeline to execute different queries in parallel? I'm trying to ask a series of different queries and generate a report from the output.
3 comments
I'm getting an error when attempting to use a function tool without any parameters. I followed the "useless_tool()" example from the docs below, but receive the error ValueError: invalid method signature.
Plain Text
def useless_tool() -> int:
    """This is a useless tool."""
    return "This is a useless output."

useless_tool = FunctionTool.from_defaults(fn=useless_tool)

If I add any parameter, the error goes away, e.g. def useless_tool(dummy: int) -> int:
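
The two variants differ only in parameter count, which is what tool-schema generation keys off; a stdlib check illustrates the difference (useless_tool_with_dummy is a made-up name for the workaround version, not LlamaIndex API):

```python
import inspect

def useless_tool() -> int:
    """This is a useless tool."""
    return "This is a useless output."

def useless_tool_with_dummy(dummy: int) -> int:
    """Same tool with a placeholder parameter added as a workaround."""
    return "This is a useless output."

# A zero-parameter signature yields an empty parameter schema, which
# some versions of the tool wrapper may reject; the dummy parameter
# gives the schema generator at least one field to work with.
empty_params = len(inspect.signature(useless_tool).parameters)
dummy_params = len(inspect.signature(useless_tool_with_dummy).parameters)
print(empty_params, dummy_params)  # 0 1
```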
2 comments
Trying the new AgentRunner (0.9.16), and I'm getting very different behavior between GPT-4 Turbo and Gemini Pro.

llm = OpenAI(model="gpt-4-1106-preview")
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
response = agent.chat("hi I'm Andrew")

Output:
Thought: (Implicit) I can answer without any more tools!
Response: Hello Andrew! How can I assist you today?

But if I switch to Gemini Pro, it's no longer conversational (below):
llm = Gemini(model="models/gemini-pro", api_key=userdata.get('GOOGLE_API_KEY'))
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
response = agent.chat("hi I'm Andrew")

Output:
Thought: I need to use a tool to help me answer the question.
Action: analyze_image (this is my tool)

Is there a way I can make Gemini behave in a similar conversational manner and not always default to using a tool even when I'm not asking a question?
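
One workaround is to short-circuit small talk before it ever reaches the ReAct loop. This is a sketch of the routing idea only, not LlamaIndex API; the function and list names are made up:

```python
GREETINGS = {"hi", "hello", "hey", "thanks"}

def route_message(message: str) -> str:
    """Hypothetical pre-router: send small talk straight to the LLM
    for a plain chat reply, and only hand real questions to the
    ReAct agent's tool loop."""
    first_word = message.strip().lower().split()[0].rstrip(",.!")
    return "direct" if first_word in GREETINGS else "agent"

print(route_message("hi I'm Andrew"))       # -> direct (plain chat)
print(route_message("analyze this image"))  # -> agent (tool loop)
```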
8 comments
How can I make the ReAct Agent more conversational? If I use the OpenAIAgent, give it a system prompt, and chat 'Hi', it will converse with me and use tools as expected. With the ReAct Agent, if I chat 'hi' it tries to find a tool and then crashes. I'm using the ReAct Agent with Gemini Pro.
1 comment
I'm looking to use an LLM with unstructured data in an image that has handwritten notes. Is it possible to extract the notes from the image and feed them into the LLM as text to then generate structured data? It feels similar to the example of extracting structured data from unstructured data in a PDF.
2 comments
I'm using SQLTableRetrieverQueryEngine and I'm trying to figure out how to filter the results. For example, a user might ask "which records are older than 30 days", but what I really want is to query only that user's records, not all users' records. Is there a way to filter the results like this in text to SQL?
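
One common pattern is to point the text-to-SQL engine at a per-user view instead of the raw table, so any SQL the LLM generates is already scoped. Sketched here with stdlib sqlite3; the table, column, and view names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (user_id TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [("andrew", "2023-01-01"), ("someone_else", "2023-01-01")],
)
# Scoped view: the query engine only ever sees this user's rows,
# no matter what SQL the LLM generates against it.
conn.execute(
    "CREATE TEMP VIEW my_records AS "
    "SELECT * FROM records WHERE user_id = 'andrew'"
)
visible = conn.execute("SELECT COUNT(*) FROM my_records").fetchone()[0]
print(visible)  # 1: only andrew's record is visible
```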
3 comments
@Logan M I’m working from the Mistral Adaptive RAG example to build a multi-document search. Is there a way to use the tool retriever with RouterQueryEngine to return a subset of relevant tools based on the query, similar to using the tool_retriever property in FunctionCallingAgentWorker?
2 comments
@Logan M is it possible/a good idea to use a QueryPipeline as a tool for a ReactAgent?
1 comment
This multi-modal Ollama/LLaVA example isn't working for me in Colab: https://github.com/run-llama/llama_index/blob/main/docs/examples/multi_modal/ollama_multi_modal.ipynb. I'm receiving the following connection refused error.
6 comments
When using PydanticOutputParser with MultiModalLLMCompletionProgram, how can I describe to the LLM what goes in each field of the output_cls? In some cases the fields are called something different in the image. For example, sometimes a surgeon is referred to as "surgeon" and other times as a "provider", depending on the organization.
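
Pydantic lets you attach a description to each field via Field(description=...), and that text ends up in the JSON schema shown to the model. The dict below is a hand-written stdlib sketch of the schema such a description produces; the field and model names are examples taken from the question, not a real schema:

```python
import json

# Put the synonym information in each field's description so the LLM
# knows that "provider" in the image maps to the "surgeon" field.
output_schema = {
    "title": "SurgicalRecord",
    "type": "object",
    "properties": {
        "surgeon": {
            "type": "string",
            "description": (
                "Name of the operating surgeon. Some organizations "
                "label this person 'provider' in the document."
            ),
        },
    },
    "required": ["surgeon"],
}
print(json.dumps(output_schema, indent=2))
```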
3 comments
Is there any sort of leaderboard showing which open-source models perform best with agents?
8 comments