I have a `GuardrailsOutputParser`

I have a `GuardrailsOutputParser` working when I pass it as an argument to my `LlamaCPP` constructor, but when I try to apply it separately via a `QueryPipeline`, I can't quite figure out how to configure it. If I pass it the llm instance during parser construction, I get:

```
AttributeError: 'LlamaCPP' object has no attribute '__call__'. Did you mean: '__class__'?
```

and when I don't pass an `llm` argument, I get:

```
ValueError: API must be provided.
```

I'm not quite sure how to reproduce what the constructor version does just by reading the code, or whether that's even feasible. Any advice would be appreciated.
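
For reference, the working constructor-style setup looks roughly like the sketch below. This is a minimal sketch under stated assumptions, not a definitive recipe: import paths vary across llama_index versions, the model path and rail spec are placeholders, and the `from_rail_string` signature may differ in your version. The two errors above suggest guardrails wants a callable LLM API, so a bound method like `llm.complete` is used here rather than the `LlamaCPP` object itself.

```python
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.output_parsers.guardrails import GuardrailsOutputParser

# Illustrative rail spec -- adapt the schema and prompt to your use case.
RAIL_SPEC = """
<rail version="0.1">
<output>
    <string name="answer" description="The answer to the question." />
</output>
<prompt>
${gr.complete_json_suffix_v2}
</prompt>
</rail>
"""

# Placeholder model path.
llm = LlamaCPP(model_path="./models/model.gguf")

# Guardrails expects a callable LLM API: passing the LlamaCPP object
# itself triggers the __call__ AttributeError, and passing nothing
# triggers "ValueError: API must be provided". A bound method such as
# llm.complete is callable, which may avoid both.
output_parser = GuardrailsOutputParser.from_rail_string(
    RAIL_SPEC,
    llm=llm.complete,
)
```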
1 comment
I just needed to customize the prompt template for the synthesis stage of my pipeline via the `output_parser.format` method. Works fine.
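
Concretely, that pattern might look something like the sketch below (assumptions: a v0.10-style `QueryPipeline` with a `TreeSummarize` synthesis stage, the default QA template, and a `retriever` built elsewhere; `output_parser` and `llm` come from the earlier snippet):

```python
from llama_index.core import PromptTemplate
from llama_index.core.prompts.default_prompts import DEFAULT_TEXT_QA_PROMPT_TMPL
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.core.response_synthesizers import TreeSummarize

# output_parser.format() injects the guardrails formatting instructions
# into the prompt text, presumably mirroring what the constructor route
# does to the LLM's prompts internally.
fmt_qa_tmpl = output_parser.format(DEFAULT_TEXT_QA_PROMPT_TMPL)
qa_prompt = PromptTemplate(fmt_qa_tmpl, output_parser=output_parser)

# Hand the wrapped template to the synthesis stage of the pipeline.
summarizer = TreeSummarize(llm=llm, summary_template=qa_prompt)
pipeline = QueryPipeline(chain=[retriever, summarizer])  # retriever defined elsewhere
```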