Updated 3 months ago

Setting a custom output parser for a StructuredLLM that wraps SageMakerLLM

At a glance

The community member is trying to set a custom OutputParser that inherits from PydanticOutputParser on a StructuredLLM that wraps SageMakerLLM. However, they found that the output parser is hard-coded to PydanticOutputParser in llama-index-core, in the file llama_index/core/program/utils.py. The community member is unsure if this is a bug or if there is another way to set a custom OutputParser.

In the comments, another community member suggests using LLMTextCompletionProgram directly and customizing the output parser there.

Hi everyone,

I'm trying to set a custom OutputParser that inherits from PydanticOutputParser on a StructuredLLM that wraps a SageMakerLLM, but every time the output_parser ends up set to PydanticOutputParser.

I dug into the code and found that in llama-index-core, in the file llama_index/core/program/utils.py, the output parser is hard-coded to PydanticOutputParser.

Is this a bug, or is there another way to set a custom OutputParser?
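
For context, here is a minimal sketch of the setup that hits this code path; the endpoint name and the Song model are placeholders, not the code I'm actually running:

from pydantic import BaseModel
from llama_index.llms.sagemaker_endpoint import SageMakerLLM

class Song(BaseModel):
    title: str
    artist: str

llm = SageMakerLLM(endpoint_name="my-endpoint")  # placeholder endpoint name

# Wrapping the LLM as a StructuredLLM routes through
# llama_index/core/program/utils.py, where the parser is hard-coded:
sllm = llm.as_structured_llm(Song)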


Snippet of the code:

return LLMTextCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=output_cls),  # type: ignore
    llm=llm,
    prompt=prompt,
    **kwargs,
)

Possible fix:

return LLMTextCompletionProgram.from_defaults(
    output_parser=(
        PydanticOutputParser(output_cls=output_cls)
        if prompt.output_parser is None
        else prompt.output_parser
    ),  # type: ignore
    llm=llm,
    prompt=prompt,
    **kwargs,
)
1 comment
You could use LLMTextCompletionProgram yourself and customize the output parser there, right?
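
A minimal sketch of that workaround, building the program directly so the custom parser is actually used; the fence-stripping parser, endpoint name, model, and prompt string are all hypothetical illustrations:

from pydantic import BaseModel
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import LLMTextCompletionProgram
from llama_index.llms.sagemaker_endpoint import SageMakerLLM

class Song(BaseModel):
    title: str
    artist: str

class MyOutputParser(PydanticOutputParser):
    # Hypothetical customization: strip markdown code fences
    # before delegating to the base Pydantic parsing.
    def parse(self, text: str):
        cleaned = text.strip().removeprefix("```json").removesuffix("```").strip()
        return super().parse(cleaned)

llm = SageMakerLLM(endpoint_name="my-endpoint")  # placeholder endpoint name

# Building the program directly bypasses the hard-coded
# PydanticOutputParser in program/utils.py:
program = LLMTextCompletionProgram.from_defaults(
    output_parser=MyOutputParser(output_cls=Song),
    llm=llm,
    prompt_template_str="Write a song about {topic}.",
)

song = program(topic="the sea")  # returns a Song instance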