Hello everyone! I want the query engine to return pydantic output, which it does internally via a tool call. However, the tool_choice it uses is always auto, which sometimes causes it to not call any tool, and I get a ValueError: Expected at least one tool call, but got 0 tool calls.
What I want is to force it to always use a tool instead of auto, so I don't get that error anymore and it always returns the pydantic output I want.

Does anyone know how to do this?

I am using AzureOpenAI with a GPT-3.5 model that supports functions.
The code I have is:
Plain Text
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

# Multilingual embedding model
Settings.embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)
Settings.llm = AzureOpenAI([azure settings here])

documents = SimpleDirectoryReader("../data").load_data()
index = VectorStoreIndex.from_documents(documents)

# ResponseModel (a pydantic model) and CHAT_TEXT_QA_PROMPT are defined elsewhere
query_engine = index.as_query_engine(
    output_cls=ResponseModel, response_mode="compact", similarity_top_k=10
)
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": CHAT_TEXT_QA_PROMPT}
)

response = query_engine.query("What is the moon?")
print(response)
1 comment
Leaving an update here as I was able to find a solution:
When defining the LLM (in my case AzureOpenAI), I added the tool choice in additional_kwargs:
Plain Text
llm = AzureOpenAI(
    deployment_name=AZURE_OPENAI_DEPLOYMENT_NAME,
    api_key=AZURE_OPENAI_API_KEY,
    # Force the model to call the ResponseModel function on every request
    additional_kwargs={
        "tool_choice": {"type": "function", "function": {"name": "ResponseModel"}},
    },
)

Here the function name must match the name of the output_cls you pass to the query engine.
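To keep that name from drifting out of sync, you can derive it from the class itself. A minimal sketch, assuming pydantic is installed; the ResponseModel fields here are hypothetical stand-ins for whatever your real output class defines:

```python
from pydantic import BaseModel

# Hypothetical output model; the real fields depend on your use case.
class ResponseModel(BaseModel):
    answer: str
    sources: list[str] = []

# Deriving the function name from the class keeps tool_choice and the
# output_cls in sync even if the model is later renamed.
tool_choice = {
    "type": "function",
    "function": {"name": ResponseModel.__name__},
}
```

Passing this dict as the "tool_choice" entry in additional_kwargs then forces the model to always call the ResponseModel function.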