Structured
Updated 4 months ago
Niels
4 months ago
Does/will llama index use this?
https://openai.com/index/introducing-structured-outputs-in-the-api/
16 comments
Logan M
4 months ago
Already supported
Logan M
4 months ago
pip install -U llama-index-llms-openai
llm = OpenAI(..., strict=True)
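For context on what that flag toggles: per the OpenAI announcement linked above, strict mode sends the request with a `response_format` of type `json_schema` whose schema has `"strict": true`. A stdlib-only sketch of roughly what that request body looks like — the helper function, model name, and schema below are illustrative assumptions, not taken from the thread:

```python
import json

def build_structured_request(model: str, prompt: str,
                             schema_name: str, schema: dict) -> dict:
    """Build a chat-completions request body that asks for strict
    structured output, per OpenAI's structured-outputs announcement.
    This is a sketch of the payload shape, not a client library call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": schema_name,
                "strict": True,  # the behavior strict=True turns on
                "schema": schema,
            },
        },
    }

# A small schema, comparable in size to the "small pydantic object"
# mentioned below.
album_schema = {
    "type": "object",
    "properties": {"title": {"type": "string"}, "year": {"type": "integer"}},
    "required": ["title", "year"],
    "additionalProperties": False,
}

payload = build_structured_request(
    "gpt-4o-2024-08-06", "Name one album.", "Album", album_schema
)
print(json.dumps(payload, indent=2))
```

With strict mode the API constrains generation to the schema, which is why it can guarantee schema-valid output at the cost of extra latency.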
Logan M
4 months ago
Note that it's crazy slow though
Logan M
4 months ago
10s vs 1s for a small pydantic object
Niels
4 months ago
Ah good to know, does it do two calls under the hood?
Niels
4 months ago
Because we noticed that instructions + Pydantic extraction in one call failed a lot for us
Logan M
4 months ago
Just a single api call. Not sure what it does behind that call though
Niels
4 months ago
Alright, thanks
Logan M
4 months ago
This would help it have 100% success
Logan M
4 months ago
But it will be quite a bit slower from what I've seen
Niels
4 months ago
Does OpenAIPydanticProgram use this by default?
Logan M
4 months ago
As long as you set strict=True in the llm, yes
I kept the default strict=False because of the latency
Niels
4 months ago
Ah okay, cool
Niels
4 months ago
Right now we do it in two calls, is your feeling that doing one call with strict=True would be quicker or is it really a 10x diff?
Logan M
4 months ago
It really seems like a 10x difference. But I encourage you to at least test it and see for your case
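Measuring that for your own case is straightforward with a small stdlib timing harness. The sketch below times two extraction callables; the `time.sleep` stand-ins are placeholders you would swap for your real one-call (strict=True) and two-call pipelines:

```python
import time

def average_latency(extract, prompt: str, runs: int = 3) -> float:
    """Average wall-clock latency of an extraction callable over a few runs."""
    start = time.perf_counter()
    for _ in range(runs):
        extract(prompt)
    return (time.perf_counter() - start) / runs

# Placeholder extractors standing in for the real pipelines discussed above;
# the sleep durations are arbitrary, chosen only to make the demo observable.
def one_call_strict(prompt: str) -> None:
    time.sleep(0.01)   # stand-in for the slower strict structured-output call

def two_call_pipeline(prompt: str) -> None:
    time.sleep(0.001)  # stand-in for the faster unconstrained path

strict_avg = average_latency(one_call_strict, "extract the fields")
loose_avg = average_latency(two_call_pipeline, "extract the fields")
print(f"strict: {strict_avg:.4f}s  loose: {loose_avg:.4f}s")
```

Replacing the stand-ins with real LLM calls gives a per-workload answer to whether the strict single call beats the two-call approach.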
Niels
4 months ago
👍🏻