OllamaFunctions | πŸ¦œπŸ”— Langchain

Hi, is there a way to use OllamaFunctions (https://python.langchain.com/docs/integrations/chat/ollama_functions) with llama_index?
2 comments
Hmmm, there is not πŸ€” But you can use any LLM for structured outputs like this in llama-index:

Plain Text
from typing import List

from pydantic import BaseModel

from llama_index.llms import OpenAI
from llama_index.output_parsers import PydanticOutputParser
from llama_index.program import LLMTextCompletionProgram


class Album(BaseModel):
    """Data model for an album."""

    name: str
    artist: str
    songs: List[str]


prompt_template_str = """\
Generate an example album, with an artist and a list of songs. \
Using the movie {movie_name} as inspiration.\
"""

# Any LLM works here; an OpenAI model is used as an example.
openai_llm = OpenAI(model="gpt-3.5-turbo")

program = LLMTextCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=Album),
    prompt_template_str=prompt_template_str,
    llm=openai_llm,
    verbose=True,
)

response = program(movie_name="The Shining")
print(str(response))
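
Since the thread is about Ollama, the same program should work with the Ollama LLM class swapped in for OpenAI (a minimal sketch; the model name here is an assumption):

Plain Text
from llama_index.llms import Ollama

# Any chat-capable model served locally by Ollama; "llama2" is an assumed name.
ollama_llm = Ollama(model="llama2", request_timeout=120.0)
program = LLMTextCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=Album),
    prompt_template_str=prompt_template_str,
    llm=ollama_llm,
)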


Or you can use an external library that offers a bit more control:
https://docs.llamaindex.ai/en/stable/examples/output_parsing/lmformatenforcer_pydantic_program.html
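For reference, the program in that notebook looks roughly like this (a sketch assuming the lm-format-enforcer package and the legacy llama_index.program namespace; the {json_schema} placeholder is filled in automatically from the pydantic model):

Plain Text
from llama_index.llms import LlamaCPP
from llama_index.program import LMFormatEnforcerPydanticProgram

program = LMFormatEnforcerPydanticProgram(
    output_cls=Album,  # same pydantic model as above
    prompt_template_str=(
        "Your response should be according to the following json schema:\n"
        "{json_schema}\n"
        "Generate an example album, with an artist and a list of songs, "
        "using the movie {movie_name} as inspiration."
    ),
    # lm-format-enforcer needs token-level access, so a local LLM is required.
    llm=LlamaCPP(),
    verbose=True,
)

response = program(movie_name="The Shining")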
I'm not sure about Program and how to integrate it with an Agent. Currently, I'm trying to make an Agent call a function using an Ollama LLM. The ReAct agent in llama_index is outputting all the thoughts, actions, and the answer, but I'm only interested in the answer. I would like to redirect the thoughts and actions to the observation stack instead (like stdout output).
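For what it's worth, one way to get only the final answer is to construct the agent with verbose=False, which suppresses the printed Thought/Action/Observation trace; str(response) then contains just the answer. A minimal sketch under the legacy llama_index API (the tool and model name are placeholders):

Plain Text
from llama_index.agent import ReActAgent
from llama_index.llms import Ollama
from llama_index.tools import FunctionTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# "llama2" is a placeholder model name.
llm = Ollama(model="llama2", request_timeout=120.0)
agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=llm,
    verbose=False,  # do not print the Thought/Action/Observation trace
)

response = agent.chat("What is 12 times 7?")
print(str(response))  # only the final answer

The intermediate tool outputs should still be available on response.sources if they need to be logged somewhere else instead.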