Updated 4 months ago

Hi, I have a question using Llama Index

At a glance

The community member has a question about using LlamaIndex with Guidance AI, specifically with Ollama. They are wondering whether a LangChain model could bridge the gap, since LangChain has Ollama support while Guidance currently only supports OpenAI, llama.cpp, or transformers models.

In the comments, another community member suggests that LangChain LLMs can be used by adding them to the ServiceContext, but the original poster is unsure whether they can simply replace the guidance_llm=OpenAI() argument with a LangChain LLM. The community members conclude that Guidance likely only supports LLMs from the Guidance package itself, and they suggest using LMFormatEnforcerPydanticProgram as an alternative that works with more LLMs, including LangChain models.

The community members also discuss the possibility of integrating the PydanticProgram with the Agent, but it's noted that this is not easily done at the moment, though it's a good idea that is being considered for the roadmap.

Useful resources
Hi, I have a question about using LlamaIndex with Guidance AI, but for Ollama. Currently I think Guidance only supports OpenAI, llama.cpp, or transformers. Is it possible to use a LangChain model as well? LangChain has Ollama model support, so it would bridge the gap.
13 comments
yea, you can use LangChain LLMs; just throw one into the service context and it should figure it out

ServiceContext.from_defaults(llm=langchain_llm)
Hi @Logan M, thanks a lot for your reply. Do you mean that in this example I can just replace guidance_llm=OpenAI() with a LangChain LLM?
https://docs.llamaindex.ai/en/stable/examples/output_parsing/guidance_pydantic_program.html
Plain Text
program = GuidancePydanticProgram(
    output_cls=Album,
    prompt_template_str=(
        "Generate an example album, with an artist and a list of songs. Using"
        " the movie {{movie_name}} as inspiration"
    ),
    guidance_llm=OpenAI("text-davinci-003"),
    verbose=True,
)
Or should I set ServiceContext.from_defaults(llm=langchain_llm) and remove the guidance_llm arg?
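(For context: the output_cls=Album in that docs snippet is a Pydantic model. A minimal sketch of what it looks like, with the Song/Album field names assumed from the linked docs page:)

```python
from typing import List
from pydantic import BaseModel

# Output schema the program fills in; field names assumed
# from the Album/Song models in the LlamaIndex docs example.
class Song(BaseModel):
    title: str
    length_seconds: int

class Album(BaseModel):
    name: str
    artist: str
    songs: List[Song]
```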
ohhhh no. I think you can only put LLMs from the guidance package here
yeah, I think so too. Too bad
You can use llm-format-enforcer instead of guidance to get access to more LLMs though
https://docs.llamaindex.ai/en/stable/examples/output_parsing/lmformatenforcer_pydantic_program.html#lm-format-enforcer-pydantic-program

Plain Text
from llama_index.llms import LangChainLLM

llm = LangChainLLM(lc_llm)


program = LMFormatEnforcerPydanticProgram(
    output_cls=Album,
    prompt_template_str=(
        "Your response should be according to the following json schema: \n"
        "{json_schema}\n"
        "Generate an example album, with an artist and a list of songs. Using"
        " the movie {movie_name} as inspiration. "
    ),
    llm=llm,
    verbose=True,
)
(note: Have not tried this package yet lol but seems basically the same)
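(Also for context: the {json_schema} placeholder in that prompt is presumably filled from the output_cls model's JSON schema. You can preview what gets injected by dumping it yourself; a minimal sketch, with the Album/Song models assumed from the docs example:)

```python
import json
from typing import List
from pydantic import BaseModel

class Song(BaseModel):
    title: str
    length_seconds: int

class Album(BaseModel):
    name: str
    artist: str
    songs: List[Song]

# Pydantic v2 exposes model_json_schema(); v1 uses .schema().
schema = (
    Album.model_json_schema()
    if hasattr(Album, "model_json_schema")
    else Album.schema()
)
print(json.dumps(schema, indent=2))
```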
oh, nice. thank you very much. I will give it a try.
@Logan M Is there a way to integrate PydanticProgram to Agent? I would like the ReAct agent to use this LMFormatEnforcer.
Hmm... Not easily right now πŸ€”
But, a good idea. Agents + Structured Outputs going onto the roadmap
@Logan M yeah, I think an agent could use structured outputs to decide the next step or to call functions/tools more precisely. My hope is that it would produce more consistent behavior.