
At a glance

The community members are discussing the integration of Ollama Functions with llama_index. One community member suggests that while there is no direct integration, you can use any LLM for structured outputs in llama-index by using the LLMTextCompletionProgram and PydanticOutputParser classes. They provide an example of generating an album with an artist and song list using the movie "The Shining" as inspiration.

Another community member is unsure about the Program class and how to integrate it with the Agent class. They are trying to make the Agent call a function using the Ollama LLM, and they want to redirect the agent's thoughts and actions to an observation stack instead of having them printed directly.

There is no explicitly marked answer in the comments.

Useful resources
OllamaFunctions | πŸ¦œπŸ”— Langchain: https://python.langchain.com/docs/integrations/chat/ollama_functions

Hi, is there a way to use Ollama Functions (https://python.langchain.com/docs/integrations/chat/ollama_functions) with llama_index?
Hmmm there is not πŸ€” But you can use any LLM for structured outputs like this in llama-index

Python
from typing import List

from llama_index.program import LLMTextCompletionProgram
from llama_index.output_parsers import PydanticOutputParser
from llama_index.llms import OpenAI
from pydantic import BaseModel


class Song(BaseModel):
    """Data model for a song."""

    title: str
    length_seconds: int


class Album(BaseModel):
    """Data model for an album."""

    name: str
    artist: str
    songs: List[Song]


prompt_template_str = """\
Generate an example album, with an artist and a list of songs. \
Using the movie {movie_name} as inspiration.\
"""

# Any llama_index LLM works here; swap in Ollama(...) for a local model.
openai_llm = OpenAI(model="gpt-3.5-turbo")

program = LLMTextCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(Album),
    prompt_template_str=prompt_template_str,
    llm=openai_llm,
    verbose=True,
)

response = program(movie_name="The Shining")
print(str(response))


Or use an external library that offers a bit more control:
https://docs.llamaindex.ai/en/stable/examples/output_parsing/lmformatenforcer_pydantic_program.html
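
For reference, a minimal sketch of that approach, based on the linked example. It assumes the lm-format-enforcer package is installed and reuses the Album model from the snippet above; note that lm-format-enforcer constrains the model's token probabilities during generation, so it needs a local model (e.g. LlamaCPP or a HuggingFace model) rather than the Ollama HTTP API:

Python
from llama_index.program import LMFormatEnforcerPydanticProgram
from llama_index.llms import LlamaCPP

# Album is the pydantic model defined in the snippet above.
program = LMFormatEnforcerPydanticProgram(
    output_cls=Album,
    prompt_template_str=(
        "Your response should follow this JSON schema:\n{json_schema}\n"
        "Generate an example album, with an artist and a list of songs. "
        "Using the movie {movie_name} as inspiration."
    ),
    llm=LlamaCPP(),  # point at a local GGUF model via model_path=...
    verbose=True,
)

response = program(movie_name="The Shining")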
I'm not sure about Program, and how to integrate it with Agent. Currently, I'm trying to make the Agent call a function using the Ollama LLM. The ReAct agent in llama_index is outputting all the thoughts, actions, and the answer, but I'm only interested in the answer. I would like to redirect the thoughts and actions to the observation stack instead of having them printed to stdout.
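
The thread has no reply, but a minimal sketch of one way to do this, assuming the same legacy llama_index namespaces used above and a model already pulled in Ollama (the multiply tool and the "mistral" model name are just placeholders): passing verbose=False keeps the ReAct thought/action traces off stdout, while the intermediate tool calls remain available on the response object.

Python
from llama_index.agent import ReActAgent
from llama_index.llms import Ollama
from llama_index.tools import FunctionTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


# Placeholder model name; use any model you have pulled locally in Ollama.
llm = Ollama(model="mistral")

agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=llm,
    verbose=False,  # suppress thought/action traces on stdout
)

response = agent.chat("What is 1234 times 4567?")
print(response)  # prints only the final answer
# The intermediate tool calls are still available programmatically:
# response.sources is a list of ToolOutput objects.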