Bit of a hail mary here, just throwing this out there in case any devs know what might be going on:
In short: the validate_model function on the custom LlamaIndex BaseModel (buried in the deepest, darkest depths of the codebase) has recently started destroying perfectly good pydantic output responses at the eleventh hour (FYI: these come from a RetrieverQueryEngine with a fairly complex output_cls). I can see the output is there from the LLM before it hits the validation function; it even gets registered correctly in the event callback, and I see the correct outputs in Arize Phoenix. But the response from the query engine is just a blank BaseModel.
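In case it helps anyone digging: one way a "blank" model can appear without any error being raised is if the parsed payload gets dropped somewhere before validation and every field on the output_cls has a default, since pydantic will then happily validate an empty dict into an empty model. A minimal sketch of that failure mode (Answer here is a made-up stand-in, not my actual output_cls):

```python
from pydantic import BaseModel


class Answer(BaseModel):
    # All fields defaulted, as is common for LLM output schemas
    title: str = ""
    points: list[str] = []


# A full payload validates as expected:
good = Answer.model_validate({"title": "t", "points": ["a", "b"]})

# But if the payload is lost upstream and an empty dict reaches
# validation, no error is raised -- you just get a blank model:
blank = Answer.model_validate({})
assert blank.title == "" and blank.points == []
```

So if something in the 0.9 to 0.10 refactor is passing an empty or wrongly-shaped dict into validate_model, that would match the symptom exactly: no exception, correct data visible in the callbacks, blank model out the other end.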
I can't say right now exactly which version this cropped up in, but it's certainly somewhere between 0.9 and 0.10. It's a bit of a labyrinth in there; I'll write up a proper bug report and create a MWE when I have the time, but I'm taking a shot at fixing it myself in the meantime, as I'm working to a tight deadline.