I have a GuardrailsOutputParser working when I pass it as an argument to my LlamaCPP constructor, but when I try to apply it separately via a QueryPipeline, I can't quite figure out how to configure it.
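For context, here's roughly what I have, trimmed down to a minimal sketch. The imports follow the legacy flat `llama_index` layout, and `RAIL_SPEC`, the model path, and the prompt are placeholders for my real values. The constructor version, which works:

```python
from llama_index.llms import LlamaCPP
from llama_index.output_parsers import GuardrailsOutputParser

RAIL_SPEC = "..."  # placeholder for my actual RAIL spec string

# Working: attach the parser to the LLM at construction time.
parser = GuardrailsOutputParser.from_rail_string(RAIL_SPEC)
llm = LlamaCPP(model_path="models/model.gguf", output_parser=parser)
```

And the QueryPipeline version, which is roughly the wiring I've been trying:

```python
from llama_index.prompts import PromptTemplate
from llama_index.query_pipeline import QueryPipeline

# Failing: run the parser as its own pipeline step after the LLM.
llm = LlamaCPP(model_path="models/model.gguf")  # no parser attached here
parser = GuardrailsOutputParser.from_rail_string(RAIL_SPEC, llm=llm)
pipeline = QueryPipeline(chain=[PromptTemplate("{query}"), llm, parser])
```

If I pass the llm instance during parser construction like that, running the pipeline gives: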
```
AttributeError: 'LlamaCPP' object has no attribute '__call__'. Did you mean: '__class__'?
```
and when I don't pass an llm arg, I get:
```
ValueError: API must be provided.
```
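(That's the same pipeline sketch as above, just with the parser built as `GuardrailsOutputParser.from_rail_string(RAIL_SPEC)` and no llm kwarg.)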
From the first traceback it looks like guardrails wants a plain callable as its LLM API, which the LlamaCPP object isn't, but I can't quite see from reading the code how to reproduce what the constructor version is doing inside the pipeline, or whether that's even feasible. Any advice would be appreciated.