Yes, I looked into the pydantic program. But when using an output_cls it errors on the JSON decode, with errors like "Extra data" or "Expecting ':' delimiter". I'm not using OpenAI, just a locally hosted Llama 2 13B chat model. Maybe the issue is that the input is also not JSON, but I thought plain text would be easier for the LLM to process. I can try this as well.
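For what it's worth, the "Extra data" error from Python's `json` module usually means the model wrapped valid JSON in extra prose, which smaller chat models like Llama 2 13B often do. A minimal sketch of a lenient fallback parse, assuming that failure mode (`extract_json` is a hypothetical helper, not a LlamaIndex API):

```python
import json
import re


def extract_json(raw: str) -> dict:
    """Parse JSON from a model completion, tolerating surrounding prose.

    Hypothetical helper: strict json.loads fails with "Extra data" when
    anything follows the JSON object, so fall back to grabbing the
    outermost {...} span and parsing just that.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        if match is None:
            raise
        return json.loads(match.group(0))


# A chatty completion that breaks strict parsing but survives the fallback:
raw = 'Sure! Here is the JSON: {"name": "Alice", "age": 30} Hope that helps.'
print(extract_json(raw))
```

If the fallback also fails, the model likely emitted malformed JSON rather than wrapped JSON, and tightening the prompt is the next thing to try.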
This is the compatibility report for the open-source LLMs that LlamaIndex has tested across different factors. For pydantic programs, only Zephyr and Starling show good results.
Yeah, as the feedback in that report says: "Mistral seems slightly more reliable for structured outputs compared to Llama 2. Likely with some prompt engineering, it may do better."