So if you installed v0.4.x today, it would work? That would surprise me
Usually, over time, OpenAI changes/updates their models, and sometimes for the worse. You might have to re-engineer your prompt template
(You might also want to pass in a refine prompt template with similar instructions)
I downgraded back to 0.4.40 and it works fine, so maybe I'm missing something
@Logan M Unfortunately, that doesn't seem to have helped. I regenerated the index after changing the code, but the results were pretty much the same.
Here's an example of it answering basically the same question with two different answers, both wrong
Plus a random question for extra points
I wondered if perhaps there was conflicting information in the documentation that could account for the discrepancy, but I couldn't find any. The correct answer to the "values" question seems to be well defined
Well, that's concerning
Any way you can package up an example and make a GitHub issue with it?
The only thing I can think of to help the issue is to also create a refine template with similar instructions and pass that in as well
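For reference, passing a refine template in the 0.4/0.5-era llama_index API looked roughly like this. This is only a sketch: the template wording, the question, and the `index` variable are placeholders, and the `RefinePrompt` import path may differ slightly between versions.

```python
# Sketch for llama_index ~0.4/0.5: wrap the refine instructions in a
# RefinePrompt and pass it at query time alongside the QA template.
from llama_index.prompts.prompts import RefinePrompt

# Hypothetical refine instructions mirroring the QA prompt's constraints.
REFINE_TMPL = (
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Below is additional context:\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Using only this context and the documentation, refine the answer. "
    "If the context isn't helpful, keep the existing answer."
)
refine_template = RefinePrompt(REFINE_TMPL)

# `index` is assumed to be an already-built index (e.g. GPTSimpleVectorIndex).
response = index.query(
    "What are the allowed values?",  # placeholder question
    refine_template=refine_template,
)
```

The refine prompt matters because it is applied to every context chunk after the first; if only the QA template carries your instructions, later refine passes can drift away from them.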
I'm not sure about your case either
@jerryjliu0 just giving some visibility on some possibly degraded performance in 0.5.x compared to 0.4.x
So is there an issue with version 0.5? Should I open a bug on GitHub?
I mean, this kinda tells me there is something weird going on. If, right now, 0.4.x works and 0.5.x doesn't, that feels weird to me. Should definitely be investigated!
@moti.malka took a quick look at the screenshot. Are you setting the chunk_size_limit in the ServiceContext? I noticed you only set it in the prompt helper.
You'd need to set it in the ServiceContext as well for us to chunk properly
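For context, in 0.5.x the chunk size is read from the ServiceContext at index-build time, so setting it only on the PromptHelper isn't enough. A sketch of the setup under that assumption (the directory path is a placeholder, and depending on the exact 0.5.x release the index may be built via the constructor or a `from_documents` classmethod):

```python
# Sketch for llama_index ~0.5.x: chunk_size_limit must be set on the
# ServiceContext (not just the PromptHelper) for documents to be chunked.
from llama_index import GPTSimpleVectorIndex, ServiceContext, SimpleDirectoryReader

documents = SimpleDirectoryReader("docs").load_data()  # placeholder path

# The key line: chunk_size_limit on the ServiceContext itself.
service_context = ServiceContext.from_defaults(chunk_size_limit=512)

index = GPTSimpleVectorIndex(documents, service_context=service_context)
```

If chunk_size_limit is missing here, each document can be sent whole as context, which would explain the large token counts in the responses.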
@Logan M I feel like I'm missing something obvious here. I've been pretty much living under the rock of WordPress-only development for the last ten years or so, so rather than keep wasting your time I'll go through some tutorials and try to inform myself better. If I can come up with the solution then, great; otherwise, I'll be back to ask. Thank you so much for your time!
No worries! Always around to try and help if something comes up!
Hi @jerryjliu0
I removed the chunk_size_limit from the code and it still seems that the indexing process doesn't work well: when I ask the index a question, it passes the entire document as context (I can see this from the number of tokens used in the response)
In version 0.4 it works smoothly, without a problem
Any idea?