Using Ollama - Instructor

Is it possible to get PydanticProgram working with Ollama llama3? With Instructor it's super easy to do: https://python.useinstructor.com/hub/ollama/#patching/. I have a loading pipeline which uses PydanticExtractor, and I want to reduce cost by moving to local models.
12 comments
Pretty easy to do

Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core.prompts import PromptTemplate
from pydantic.v1 import BaseModel, Field

class MyClass(BaseModel):
    """Some description."""
    name: str = Field(description="Some description")

# json_mode=True makes Ollama return JSON, which is what
# structured_predict parses into the Pydantic model
llm = Ollama(..., json_mode=True)

prompt = PromptTemplate("Give me a name based on {topic}")
output = llm.structured_predict(MyClass, prompt, topic="movies")
print(output.name)

# or async (a bare await works as-is in a notebook)
output = await llm.astructured_predict(MyClass, prompt, topic="movies")
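For anyone running this outside a notebook, here's a self-contained sketch. The model name llama3 comes from the original question, and the Movie class is purely illustrative:

Plain Text
import asyncio

from llama_index.llms.ollama import Ollama
from llama_index.core.prompts import PromptTemplate
from pydantic.v1 import BaseModel, Field

class Movie(BaseModel):
    """A movie suggestion."""
    title: str = Field(description="Title of the suggested movie")

# llama3 is the model from the question; any model pulled into Ollama works
llm = Ollama(model="llama3", json_mode=True, request_timeout=120.0)

prompt = PromptTemplate("Suggest one movie about {topic}")

# synchronous call
movie = llm.structured_predict(Movie, prompt, topic="space travel")
print(movie.title)

# outside a notebook, drive the async variant with asyncio.run
movie = asyncio.run(llm.astructured_predict(Movie, prompt, topic="space travel"))
print(movie.title)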
Hey @Nehil, I am trying to implement the same with Ollama but with tool calling, and I got the following error:

Plain Text
Cell In[32], line 6
      2 from llama_index.core.agent import FunctionCallingAgentWorker
      3 from llama_index.core.agent import AgentRunner
----> 6 agent_worker = FunctionCallingAgentWorker.from_tools(
      7     initial_tools,
      8     llm = llm,
      9     verbose = True
     10 )
     13 agent = AgentRunner(agent_worker)

File ~/miniconda3/envs/DL/lib/python3.10/site-packages/llama_index/core/agent/function_calling/step.py:125, in FunctionCallingAgentWorker.from_tools(cls, tools, tool_retriever, llm, verbose, max_function_calls, callback_manager, system_prompt, prefix_messages, **kwargs)
    121     prefix_messages = [ChatMessage(content=system_prompt, role="system")]
    123 prefix_messages = prefix_messages or []
--> 125 return cls(
    126     tools=tools,
    127     tool_retriever=tool_retriever,
    128     llm=llm,
    129     prefix_messages=prefix_messages,
    130     verbose=verbose,
    131     max_function_calls=max_function_calls,
    132     callback_manager=callback_manager,
    133     **kwargs,
    134 )
...
     71 )
     72 self._llm = llm
     73 self._verbose = verbose

ValueError: Model name mistral does not support function calling API.

Could you please help me?
@AashiDutt Ollama doesn't have a tool calling API. You'll have to use a ReActAgentWorker or something else.
Could you provide a link or reference for this?
It's the exact same, just a different import:

Plain Text
from llama_index.core.agent import ReActAgentWorker

agent_worker = ReActAgentWorker.from_tools(initial_tools, llm=llm, verbose=True)
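Putting it together with the agent code from the traceback, here's a sketch; the multiply tool is a stand-in for whatever your initial_tools contains:

Plain Text
from llama_index.core.agent import AgentRunner, ReActAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# stand-in for the asker's initial_tools
initial_tools = [FunctionTool.from_defaults(fn=multiply)]

# mistral is the model from the error message above
llm = Ollama(model="mistral", request_timeout=120.0)

# ReAct has the model pick tools via prompted text, so no
# native function calling API is required
agent_worker = ReActAgentWorker.from_tools(initial_tools, llm=llm, verbose=True)
agent = AgentRunner(agent_worker)

print(agent.chat("What is 2.5 times 4?"))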
Got it. Thank you 🙂
You might find it helpful with Ollama to use Ollama(..., json_mode=True), or alternatively to set Ollama(..., additional_kwargs={"stop": ["Observation:"]}) to help with output parsing.
I did use json_mode=True, but I'm encountering ReadTimeout: timed out.
You can increase the timeout:
Ollama(..., request_timeout=3600)
Should be enough lol
It worked! 🤩
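Collecting the tips from this thread into one config sketch (json_mode and the stop string are alternatives, depending on whether you're doing structured prediction or running a ReAct agent; the model names and the 3600 s timeout are the ones used above):

Plain Text
from llama_index.llms.ollama import Ollama

# for structured_predict: force JSON output
llm = Ollama(model="llama3", json_mode=True, request_timeout=3600)

# for a ReAct agent: stop before the model hallucinates an Observation
llm = Ollama(
    model="mistral",
    additional_kwargs={"stop": ["Observation:"]},
    request_timeout=3600,
)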