Understanding Structured LLMs with Local LLMs

At a glance

The community member is experiencing runtime errors when using local language models like Llama 3.2, unlike when using OpenAI. The errors are related to "Expected at least one tool call, but got 0 tool calls" and "tooling error". The community members suggest trying to make the tool name/description more helpful, or considering using a system prompt. They also suggest modifying the input to sllm.complete() to have more instructions, or using sllm.chat() with chat messages (including a system message).

Useful resources
https://docs.llamaindex.ai/en/stable/understanding/extraction/structured_llms/

When I use OpenAI it works fine, but when I try to use a local LLM like Mixtral or Llama 3.2, I run into runtime errors (like "Expected at least one tool call, but got 0 tool calls" and "tooling error"). Can someone share an example of this sample using local LLMs, please?
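For context, the failing setup looks roughly like this (a minimal sketch, assuming llama3.2 is served via Ollama and the llama-index-llms-ollama package is installed; the Invoice class and sample text are placeholders, not from the original thread):

```python
from pydantic import BaseModel, Field
from llama_index.llms.ollama import Ollama

class Invoice(BaseModel):
    """An invoice extracted from a document."""
    invoice_number: str = Field(description="The invoice number")
    total: float = Field(description="The total amount due")

# Local model served by Ollama (assumed setup; the model must be pulled first)
llm = Ollama(model="llama3.2", request_timeout=120.0)
sllm = llm.as_structured_llm(Invoice)

# With OpenAI this returns structured output; with some local models it can
# raise: ValueError: Expected at least one tool call, but got 0 tool calls
response = sllm.complete("Invoice INV-1234, total due: $99.50")
print(response.raw)  # the validated Invoice object, on success
```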
That means that the LLM did not think it had to use a tool

Try making the tool name/description more helpful, or consider using a system prompt
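Concretely, "making the tool name/description more helpful" means making the Pydantic class itself more descriptive, since its name, docstring, and field descriptions become the tool name and description the model sees. A sketch with assumed, illustrative names:

```python
from pydantic import BaseModel, Field

# The class name, docstring, and field descriptions are surfaced to the LLM
# as the tool schema, so spell out exactly what the model should do.
class InvoiceExtraction(BaseModel):
    """Call this tool to report the invoice details found in the text.
    Always call this tool exactly once with the extracted fields."""

    invoice_number: str = Field(description="The invoice number, e.g. INV-1234")
    total: float = Field(description="The total amount due, as a plain number")
```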
@Logan M I get an error on this statement:
sllm.complete(text), stating ValueError: Expected at least one tool call, but got 0 tool calls. I am using llama3.2 as the local LLM. Do you have any example of "making the tool name/description more helpful, or using a system prompt" that I can try?
Well, the first one just means modifying the name/docstring of the Pydantic class you are trying to use

The other means using sllm.chat() with chat messages (with one of them being a system message)

Or you could just modify your input to .complete() to include more instructions
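Putting those options together, a hedged sketch (assuming the InvoiceExtraction class above and an Ollama-served llama3.2; ChatMessage comes from llama_index.core.llms, and the prompts are illustrative):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.2", request_timeout=120.0)
sllm = llm.as_structured_llm(InvoiceExtraction)

# Option 1: sllm.chat() with an explicit system message nudging the local
# model to actually call the tool instead of answering in prose.
messages = [
    ChatMessage(role="system", content=(
        "You are a data-extraction assistant. You MUST respond by calling "
        "the provided tool with the extracted fields; never answer in prose."
    )),
    ChatMessage(role="user", content="Invoice INV-1234, total due: $99.50"),
]
response = sllm.chat(messages)
invoice = response.raw  # the validated pydantic object

# Option 2: keep .complete() but fold the instructions into the prompt itself.
response = sllm.complete(
    "Extract the invoice fields from the following text by calling the tool.\n\n"
    "Invoice INV-1234, total due: $99.50"
)
```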
Did you have any luck in the end, @rajan_krishnan? I had similar issues with models that are not OpenAI