Updated 5 months ago

At a glance
Hi guys, I have a problem here and have spent a lot of time on Google with no luck. I use this code, response = index.query(prompt, response_mode="default"), to query the response, but the response is incomplete; it's only part of the whole result. What should I do to get the whole result? I have tried calling response = index.query(":continue") too, but the result is not the rest of the first one. How can I get around this? Thank you very much.
19 comments
What do you mean by the response not being complete?

(Also, I wouldn't set chunk_size_limit; it gets calculated automatically, so setting it sometimes causes issues.)
Thank you, I will give it a try.
I mean, the answer from GPT-3 has been truncated.
As this image shows, I have set max_tokens to 3000, so maybe it's not a max_tokens problem?
(Attachment: image.png)
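For background on what's going wrong here, the symptom is consistent with the completion token budget being smaller than the answer: the model stops generating when it hits max_tokens, so the response is cut off mid-answer. A toy sketch (not the real OpenAI API; tokens are approximated as whitespace-separated words) illustrates the effect:

```python
# Toy illustration of a completion token budget truncating output.
# A "token" here is just a whitespace-separated word, for simplicity.
def fake_complete(full_answer: str, max_tokens: int) -> str:
    """Return at most max_tokens 'tokens' of the model's full answer."""
    tokens = full_answer.split()
    return " ".join(tokens[:max_tokens])

# Suppose the model's full answer is 200 tokens long.
full = " ".join(f"word{i}" for i in range(200))

short = fake_complete(full, max_tokens=100)        # budget too small: cut off
long_enough = fake_complete(full, max_tokens=3000)  # budget covers the answer

print(len(short.split()))    # 100
print(long_enough == full)   # True
```

If a max_tokens value set in one place isn't actually reaching the completion call (as turned out to be the case in this thread), the effective budget stays at a small default and the answer is truncated regardless of what was configured elsewhere.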
You need to set it at query time.
You mean this place?
Well, I guess. Actually, are you getting a 3000-word response, or is it shorter?
3000 is long enough, but I got fewer than 100.
I will try setting max_tokens at query time.
Hi, thank you. I used this code and it works now.
I set max_tokens on the llm_predictor to 3000, and it works.
Thank you very much
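For anyone landing here later, the fix described above, raising max_tokens on the LLM predictor itself, looked roughly like this in the early LlamaIndex (gpt_index-era) API. This is a sketch under assumptions: GPTSimpleVectorIndex, LLMPredictor, and the langchain OpenAI wrapper are from that era, and names and signatures may differ in newer releases.

```python
# Sketch, assuming the early LlamaIndex (gpt_index) API with the
# langchain OpenAI wrapper; names may differ in newer releases.
from langchain.llms import OpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, SimpleDirectoryReader

# Give the underlying LLM a large completion budget, so answers
# are not cut off after a small default number of tokens.
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, max_tokens=3000))

documents = SimpleDirectoryReader("data").load_data()

# Attach the predictor to the index so queries use the larger budget.
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)

response = index.query("your question here", response_mode="default")
```

The key point from the thread is that max_tokens must be set on the predictor that actually serves the query, not only somewhere at index-construction time, otherwise the completion call falls back to a small default and truncates the answer.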