
Updated last year

Structured

Is it possible to use the structured output parser on a per-document basis rather than tying it to an index?
As an example, I would like to define some number of response schemas and ask them as individual questions against each document in my dataset, generating summaries that I can store in a structured table. Structured output parsers seemed like the best fit, but they appear to require an index to be created rather than letting me supply my own context.
Is there an easy way to decouple these parsers from an index, or should I just wrap everything in my own loop?
3 comments
You could use a pydantic program (or an LLMProgram if you aren't using OpenAI)

https://gpt-index.readthedocs.io/en/stable/examples/output_parsing/openai_pydantic_program.html
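The core pattern a pydantic program follows can be sketched without any LlamaIndex dependency: a prompt template that names the schema fields, an LLM call, and a parse step into a typed object. Everything below is illustrative, not the library's actual API; the `DocSummary` schema, `make_prompt`, and `fake_llm` stub are all hypothetical stand-ins for a real program object and a real model call.

```python
import json
from dataclasses import dataclass, fields

# Hypothetical schema: the structured summary we want per document.
@dataclass
class DocSummary:
    title: str
    topic: str
    key_points: list

def make_prompt(document_text: str) -> str:
    """Build a prompt asking the model to answer in JSON matching DocSummary."""
    field_names = [f.name for f in fields(DocSummary)]
    return (
        "Summarize the document below as a JSON object with keys "
        f"{field_names}.\n\nDocument:\n{document_text}"
    )

def fake_llm(prompt: str) -> str:
    # Stand-in for the real LLM call a pydantic program would make.
    return json.dumps({
        "title": "Quarterly Report",
        "topic": "finance",
        "key_points": ["revenue up", "costs flat"],
    })

def run_program(document_text: str) -> DocSummary:
    # Prompt -> raw completion -> parsed, typed result.
    raw = fake_llm(make_prompt(document_text))
    return DocSummary(**json.loads(raw))

summary = run_program("ACME Corp quarterly report ...")
print(summary.topic)  # → finance
```

The point is that nothing here needs an index: the document text goes straight into the prompt, and the typed result can be appended to whatever table you are building.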
An LLMProgram? Is there any example using this class with LlamaCPP, as that is my current LLM backend?
Hmm, I don't actually see an example, rip. Best I got is this unit test: https://github.com/jerryjliu/llama_index/blob/8611c2f0f2a53e4d46f8d76d7be7485077bb206f/tests/program/test_llm_program.py#L27

And this class definition
https://github.com/jerryjliu/llama_index/blob/8611c2f0f2a53e4d46f8d76d7be7485077bb206f/llama_index/program/llm_program.py#L35

But tbh, relying on open-source models to produce structured outputs is going to be a pain; it's been pretty unreliable in my experience
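Given that unreliability, wrapping the calls in your own loop with parse-and-retry logic is a reasonable fallback. Here is a minimal stdlib-only sketch of that approach; the `QUESTIONS` dict, the `flaky_llm` stub (which simulates a local model emitting invalid JSON on its first attempt), and the retry budget are all assumptions for illustration, not anything from LlamaIndex.

```python
import json

# Hypothetical per-document questions ("response schemas") to ask each document.
QUESTIONS = {
    "author": 'Who wrote this document? Answer as JSON: {"author": ...}',
    "summary": 'Summarize in one sentence. Answer as JSON: {"summary": ...}',
}

def flaky_llm(prompt: str, attempt: int) -> str:
    # Stand-in for a local model (e.g. via LlamaCPP): first try is not valid JSON.
    if attempt == 0:
        return "Sure! Here is the answer: author is Alice"
    key = "author" if "author" in prompt else "summary"
    return json.dumps({key: "Alice" if key == "author" else "A short report."})

def ask_with_retries(prompt: str, key: str, max_tries: int = 3):
    """Ask one question, retrying until the reply parses as JSON with the key."""
    for attempt in range(max_tries):
        raw = flaky_llm(prompt, attempt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        if key in parsed:
            return parsed[key]
    return None  # give up; caller can log or skip this document

# Build the structured table: one row of answers per document.
rows = []
for doc in ["doc one text ...", "doc two text ..."]:
    row = {key: ask_with_retries(q, key) for key, q in QUESTIONS.items()}
    rows.append(row)

print(rows[0]["author"])  # → Alice
```

Returning `None` on exhausted retries (rather than raising) keeps one stubborn document from aborting the whole batch; you can filter or re-queue those rows afterwards.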