Structured

Already supported
pip install -U llama-index-llms-openai

llm = OpenAI(..., strict=True)
Note that it's crazy slow though
10s vs 1s for a small pydantic object
Ah good to know, does it do two calls under the hood?
Because we noticed that instructions + pydantic extracting in one call failed a lot for us
Just a single api call. Not sure what it does behind that call though
Alright thanks 🙂
This would help it have 100% success
But it will be quite a bit slower from what I've seen
Does OpenAIPydanticProgram use this by default?
As long as you set strict=True in the llm, yes

I kept the default to be strict=False because of the latency
Ah okay cool 😄
Right now we do it in two calls, is your feeling that doing one call with strict=True would be quicker or is it really a 10x diff?
It really seems like a 10x difference. But I encourage you to at least test it and see for your case
πŸ‘ŒπŸ»