Hi,
I have an issue after upgrading from 0.4 to 0.5.18.
It looks like the responses don't come from the right context,
and each response takes a long time (almost 1 minute). I have one sample text document with 10 questions; in v0.4 everything worked fine. Can you advise?
Here is my code:
[Attachment: image.png]
So if you installed v0.4.x today, it would work? That would surprise me πŸ€”

Usually, over time, openai changes/updates their models. And sometimes for the worse. You might have to re-engineer your prompt template πŸ€”
(You might also want to pass in a refine prompt template with similar instructions)
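A custom QA + refine template pair might look something like this (a sketch only: the template wording is illustrative, and the commented-out `index.query` call assumes the llama_index 0.5.x API):

```python
# Sketch: custom QA and refine template strings for llama_index 0.5.x.
# Only the raw strings are built here; the wrapper classes and the
# query call (commented out) assume the 0.5.x API.
QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context above and no prior knowledge, "
    "answer the question: {query_str}\n"
)

REFINE_TMPL = (
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Refine the existing answer (only if needed) using the additional "
    "context below, again sticking to the context and no prior knowledge.\n"
    "{context_msg}\n"
)

# With llama_index 0.5.x installed (assumed API):
# from llama_index import QuestionAnswerPrompt, RefinePrompt
# response = index.query(
#     question,
#     text_qa_template=QuestionAnswerPrompt(QA_TMPL),
#     refine_template=RefinePrompt(REFINE_TMPL),
# )

print(QA_TMPL.format(context_str="(retrieved chunk)", query_str="What are the values?"))
```

The point of the refine template is that when an answer is built up across multiple chunks, the "stick to the context" instruction carries through to every refinement pass, not just the first one.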
I downgraded back to 0.4.40 and it works fine, so maybe I'm missing something
@Logan M Unfortunately, that doesn't seem to have helped. I regenerated the index after changing the code, but the results were pretty much the same.
[Attachment: image.png]
Here's an example of it answering basically the same question with two different answers, both wrong 😦
[Attachment: image.png]
Plus a random question for extra points πŸ™‚
I wondered if perhaps there was conflicting information in the documentation that could account for the dissimilarity, but I couldn't find any. The correct answer to the "values" question seems to be well defined
[Attachment: image.png]
Well, that's concerning πŸ™ƒ

Any way you can package up an example and make a github issue with it?

The only thing I can think of to help the issue is to also create a refine template with similar instructions and pass that in as well
I'm not sure with your case either 😞😞

@jerryjliu0 just giving some visibility on some possibly degraded performance in 0.5.x compared to 0.4.x πŸ€”
So there is some issue with version 0.5? Should I open a bug on GitHub?
I mean, this kinda tells me there is something weird going on. If right now today, 0.4.x works and 0.5.x doesn't, that feels weird to me. Should definitely be investigated!
thanks for surfacing
@moti.malka took a quick look at the screenshot. are you setting the chunk_size_limit in the ServiceContext? I noticed you only set it in the prompt helper
you'd need to set it in the ServiceContext as well for us to chunk properly
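That fix might look something like the following sketch, assuming the llama_index 0.5.x API (`ServiceContext.from_defaults` accepting `chunk_size_limit`); the directory name and size values are illustrative:

```python
# Sketch of the suggested fix, assuming the llama_index 0.5.x API.
# Requires llama-index 0.5.x installed; all values here are illustrative.
from llama_index import (
    GPTSimpleVectorIndex,
    PromptHelper,
    ServiceContext,
    SimpleDirectoryReader,
)

CHUNK_SIZE = 512  # illustrative chunk size

prompt_helper = PromptHelper(
    max_input_size=4096,
    num_output=256,
    max_chunk_overlap=20,
    chunk_size_limit=CHUNK_SIZE,  # where it was already being set
)

service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper,
    chunk_size_limit=CHUNK_SIZE,  # the missing piece: set it here too
)

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)
```

With `chunk_size_limit` only on the prompt helper, the node parser in the ServiceContext can still ingest documents as oversized chunks, which would explain the whole document being passed as context.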
@Logan M I feel like I'm missing something obvious here. I've been pretty much living under the rock of Wordpress-only development for the last ten years or so, so rather than keep wasting your time I'll go through some tutorials and try to inform myself better. If i can come up with the solution then, great; otherwise, I'll be back to ask πŸ™‚ Thank you so much for your time!
No worries! Always around to try and help if something comes up! πŸ’ͺπŸ™
Hi @jerryjliu0
I removed the chunk_size_limit from the code and the indexing process still doesn't seem to work well: when I ask the index a question, it passes the entire document as context (I can see this from the number of tokens used in the response)
In version 0.4 it works smoothly without a problem
Any idea?
Here is my new code:
[Attachment: image.png]