A community member is experiencing runtime errors when using local language models like Llama 3.2 that do not occur with OpenAI: "Expected at least one tool call, but got 0 tool calls" and a tooling error. Other community members suggest making the tool name/description more helpful or adding a system prompt. They also suggest giving the input to sllm.complete() more explicit instructions, or using sllm.chat() with chat messages (including a system message).
https://docs.llamaindex.ai/en/stable/understanding/extraction/structured_llms/ When I use OpenAI it works fine, but when I try to use a local LLM like Mixtral or Llama 3.2 I run into runtime errors (like "Expected at least one tool call, but got 0 tool calls" and a tooling error). Can someone share an example of this sample using local LLMs, please?
@Logan M I get an error on the statement sllm.complete(text), stating ValueError: Expected at least one tool call, but got 0 tool calls. I am using Llama 3.2 as the local LLM. Do you have any example of "Try making the tool name/description more helpful, or consider using a system prompt" that I can try?
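One way to try the "more instructions" suggestion without switching to chat messages is to wrap the raw text in an instruction-heavy prompt before passing it to sllm.complete(). The exact wording below is an illustrative assumption, not a known-good prompt:

```python
def make_extraction_prompt(text: str) -> str:
    """Wrap raw input with explicit tool-calling instructions.

    Small local models like llama3.2 often skip the tool call unless the
    prompt tells them, in strong terms, that a tool call is the only valid
    response.
    """
    return (
        "Extract the structured data from the passage below. "
        "You MUST respond by calling the provided tool with the extracted "
        "fields; do not answer in plain prose.\n\n"
        f"Passage:\n{text}"
    )


# Hypothetical usage, assuming sllm was built via llm.as_structured_llm(...):
# output = sllm.complete(make_extraction_prompt(text))
```

If the prompt-only approach still yields zero tool calls, the sllm.chat() route with an explicit system message is the next thing to try.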