Well, I found one case where it's None. If I create data, load it into a Qdrant collection, then remove the data, the collection is still there but empty, and the response is None. Is something wrong here?
I still provide the prompts and everything, but if the collection is empty, the response is None. It isn't None in any other case.
Ah that's fine. Querying an empty index means that the response synthesizer doesn't actually run
A custom retriever could maybe handle this: return some dummy node when no nodes are retrieved.
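Something like this, roughly (the import paths here are from a newer llama_index than 0.6.22, so treat it as an untested sketch and adjust for your version):

from llama_index.retrievers import BaseRetriever
from llama_index.schema import NodeWithScore, TextNode

class FallbackRetriever(BaseRetriever):
    """Wraps another retriever and returns a placeholder node when nothing is retrieved."""

    def __init__(self, base_retriever):
        super().__init__()
        self._base_retriever = base_retriever

    def _retrieve(self, query_bundle):
        nodes = self._base_retriever.retrieve(query_bundle)
        if not nodes:
            # dummy node so the response synthesizer still runs
            nodes = [NodeWithScore(node=TextNode(text="No relevant context was found."), score=0.0)]
        return nodes

You'd then build the query engine from that retriever instead of calling index.as_query_engine() directly.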
There is one more scenario in which the response comes back as None @Logan M
Suppose you set a similarity postprocessor and none of the nodes clear the threshold value. In that case you also get None.
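For example, something like this (the cutoff value is arbitrary and the import path depends on your llama_index version):

from llama_index.indices.postprocessor import SimilarityPostprocessor

query_engine = index.as_query_engine(
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.8)],
)
# if every retrieved node scores below 0.8, all nodes are filtered out
# and response.response ends up as None
response = query_engine.query("some question")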
@Logan M I'm not sure how to "return" a dummy node; I don't even know that no nodes were used until the final response comes back. I create the index object, then query it, and it returns a None response and an empty node list.
If you don't want to go into a custom retriever, you can always check:

if response.response:
    # use the response as it is
    answer = response.response
else:
    # fall back to custom text
    answer = "Unable to find anything for your query"
This is exactly what I'm doing, but I'd like to keep the "smart" conversation, at least so it responds "hi" when they say "hi", etc.
Oh, nice. Does it exist in 0.6.22? I can't move to a newer version right now and I need a quick fix.
ohhh 0.6.22 does not have that, but there's maybe something else, one sec
from llama_index.prompts import Prompt
response = service_context.llm_predictor.predict(Prompt("Hello!"))
Something like that might work
It's working, so nice! Thanks a lot for your help!
@Logan M I also have one additional question. The Prompt approach works great, answering in the language I asked in. But with nodes, is there any way to get the same behaviour, i.e. automatically answer in the language of the original question? Sometimes it works, sometimes it doesn't, and I can't find a reliable way to make it happen.
You can modify the default templates with different instructions
I already did that, saying "Always answer in the language the original question was in", but it was very unreliable. Now I detect the question's language and say, for example, "Always answer in Ukrainian", but it's still not 100% reliable.
Is this the right approach, or should it be done differently, I mean in the prompts themselves?
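Roughly what I'm doing now, for reference (langdetect here is just an example library, any detector would do, and the mapping is obviously incomplete):

from langdetect import detect  # pip install langdetect

LANG_NAMES = {"uk": "Ukrainian", "en": "English"}  # extend as needed

def language_instruction(question: str) -> str:
    code = detect(question)  # e.g. "uk"
    name = LANG_NAMES.get(code, "the same language as the question")
    return f"Always answer in {name}.\n"

# this line then gets appended to the qa/refine template strings before querying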
Yea prompt engineering is really the best approach tbh, but there's no way to guarantee 100%
@Logan M Thanks, working on the prompts. I can't find information on the difference between text_qa_template and refine_template. They are pretty similar but a bit different. If I want to add an instruction on how to respond when the answer is unknown, which one should I use? Thanks!
No worries, found the answer
Can't figure out though where to add the instruction on what to say when the answer is unknown. I'm just trying to add it to the end of the qa prompt, but it's not working:
text_qa_template_str = (
"Context information is below.\n"
"---------------------\n"
"{context_str}\n"
"---------------------\n"
"Using both the context information and also using your own knowledge, "
"answer the question: {query_str}\n"
"If the context isn't helpful, you can also answer the question on your own.\n"
"If you don't know the answer say 'IdK'\n"
)
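In case it matters, here's roughly how I'm wiring it in (I'm assuming the text_qa_template / refine_template kwargs are accepted by as_query_engine in this version):

from llama_index.prompts import Prompt

text_qa_template = Prompt(text_qa_template_str)

query_engine = index.as_query_engine(
    text_qa_template=text_qa_template,
    # refine_template=Prompt(refine_template_str),  # same idea for the refine step
)
response = query_engine.query("How do I start with Akruubombo?")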
Also, how do I remove the mention of the context? For example, it says:
Based on the provided context information, there is no mention of "Akruubombo." Therefore, it is not possible to determine how to start with Akruubombo.
It doesn't look natural.
Yea prompts are tricky. You probably want to add that to both the refine and qa prompts though
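For the refine side, roughly something like this ({query_str}, {existing_answer}, and {context_msg} are the variables the refine step fills in; the wording is just a sketch, not the library default):

refine_template_str = (
    "The original question is as follows: {query_str}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "with some more context below.\n"
    "---------------------\n"
    "{context_msg}\n"
    "---------------------\n"
    "Given the new context, refine the original answer to better answer the question. "
    "If the context isn't useful, keep the existing answer.\n"
    "If you don't know the answer, say 'IdK'.\n"
    "Answer naturally, without mentioning 'the context' or 'the provided information'.\n"
)
refine_template = Prompt(refine_template_str)

Then pass both templates into the query engine the same way as the qa one.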