Context

Hi, is it possible to ask ChatGPT without context? In my case, sometimes there is no data provided, but I still want a reasonable answer (like a greeting, etc.). But when no node is found, query_engine.query always returns a response object whose response is None. How can I work around that? Thanks!
I'm surprised the response returned None, that kind of seems like a setup issue maybe?

In any case, you can either use an agent
https://gpt-index.readthedocs.io/en/latest/core_modules/agent_modules/agents/modules.html

or modify the prompt templates to get more out-of-context responses
https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/prompts.html#modules
Well, I found one case when it's None. If I create data, load it into a Qdrant collection, then remove the data, the collection is still there but empty, and the response is None. Is something wrong here?
I still provide prompts and everything, but if the collection is empty, the response is None. It's not None in any other case.
Ah that's fine. Querying an empty index means that the response synthesizer doesn't actually run

A custom retriever could maybe handle this. Return some dummy node when no nodes are retrieved
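Roughly, the idea looks like this. This is a library-free sketch of the pattern only; in LlamaIndex you would subclass BaseRetriever and return a NodeWithScore, and the exact classes depend on your version:

```python
class FallbackRetriever:
    """Wraps a real retriever; returns a dummy node when nothing is found."""

    def __init__(self, inner, dummy_text="No relevant context was found."):
        self.inner = inner
        self.dummy_text = dummy_text

    def retrieve(self, query):
        nodes = self.inner.retrieve(query)
        if not nodes:
            # Placeholder node so the response synthesizer still runs.
            return [{"text": self.dummy_text, "score": 0.0}]
        return nodes


class EmptyRetriever:
    """Stand-in for a retriever over an empty collection."""

    def retrieve(self, query):
        return []


nodes = FallbackRetriever(EmptyRetriever()).retrieve("hi")
```

With the dummy node in place, the LLM still gets called and can produce a conversational answer instead of None.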
There is one more scenario in which the response comes back as None @Logan M
Suppose you set a postprocessor with a similarity cutoff and none of the nodes meet the threshold value. In that case you also get None.
@Logan M I'm not sure how to "return" a dummy node; I don't know that no nodes were retrieved until I get the final response. I create the index object, then query it, and it returns a None response and an empty nodes list.
If you don't want to go into a custom retriever, you can always check:
Plain Text
if response.response:
    answer = response.response
else:
    # Return custom text, e.g. "Unable to find anything for your query"
    answer = "Unable to find anything for your query"
This is exactly what I'm doing, but I'd like to keep the "smart" conversation: at least respond "hi" when they say "hi", etc.
If you want an actual conversation, it's probably best to use an agent

Or, you can call the LLM directly when the response is None:
https://gpt-index.readthedocs.io/en/latest/examples/llm/openai.html
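The control flow is roughly this. The `query_engine` and `llm` stubs here are stand-ins to keep the sketch self-contained, and the `llm.complete` call mirrors the style of the linked docs rather than a guaranteed signature:

```python
def answer(query_engine, llm, query):
    """Query the index; fall back to the bare LLM when nothing comes back."""
    response = query_engine.query(query)
    if response is None or not getattr(response, "response", None):
        # No context found: ask the LLM directly so greetings etc. still work.
        return llm.complete(query)
    return response.response


# Stand-in objects to demonstrate the fallback path:
class NoContextEngine:
    def query(self, q):
        return None


class EchoLLM:
    def complete(self, prompt):
        return f"(direct LLM answer to: {prompt})"


result = answer(NoContextEngine(), EchoLLM(), "hi")
```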
Oh, nice. Does it exist in 0.6.22? I can't move to a newer version right now and I need a quick fix
ohhh 0.6.22 does not have that, but there's maybe something else, one sec
Plain Text
from llama_index.prompts import Prompt

response = service_context.llm_predictor.predict(Prompt("Hello!"))
Something like that might work
Cool, let me try it!!
It's working, so nice! Thanks a lot for your help!
@Logan M I also have one additional question. Prompt works great, answering in the language I was asking in. But with nodes, is there any way to get the same behaviour, i.e. automatically answer in the language of the original question? Sometimes it works, sometimes it doesn't, and I can't find a reliable way to ensure it.
Try customizing the text_qa and refine templates like the two notebooks here do

https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/prompts.html#modules
You can modify the default templates with different instructions
I already did that, saying "Always answer in the language the original question was in", but it was very unreliable. Now I detect the question's language and say, for example, "Always answer in Ukrainian", but it's still not 100% reliable.
Is this the right approach, or should it be done differently? I mean the prompts themselves.
Yea prompt engineering is really the best approach tbh, but there's no way to guarantee 100% πŸ€”
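For reference, the detect-then-instruct approach is just string work on the template. Everything here is an illustrative sketch: the template text is simplified, and you'd get the language from a detection library (e.g. langdetect) rather than hard-coding it:

```python
# Simplified QA template with a placeholder for the detected language.
QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question: {query_str}\n"
    "Always answer in {language}.\n"
)


def build_qa_template(language):
    # Fill in only the language; {context_str} and {query_str} are left
    # intact for the query engine to fill at query time.
    return QA_TEMPLATE.replace("{language}", language)


template = build_qa_template("Ukrainian")
```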
@Logan M Thanks, working on the prompts. I can't find information on the difference between text_qa_template and refine_template; they are pretty similar but a bit different. If I want to add an instruction on how to respond when the answer is unknown, which one should I use? Thanks!
No worries, found the answer 😊
I can't figure out, though, where to add the instruction about what to say when the answer is unknown. I'm just adding it to the end of the qa prompt, but it's not working:
Plain Text
text_qa_template_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using both the context information and also using your own knowledge, "
    "answer the question: {query_str}\n"
    "If the context isn't helpful, you can also answer the question on your own.\n"
    "If you don't know the answer say 'IdK'\n"
)
Also, how do I remove the mention of the context? For example, it says:
Plain Text
Based on the provided context information, there is no mention of "Akruubombo." Therefore, it is not possible to determine how to start with Akruubombo.

doesn't look natural
Yea prompts are tricky. You probably want to add that to both the refine and qa prompts though
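Concretely, appending the same instruction to both template strings looks like this. The refine template body below follows the general shape of LlamaIndex's default (the exact wording and the `as_query_engine` keyword arguments vary by version, so that call is left commented out):

```python
UNKNOWN_INSTRUCTION = "If you don't know the answer, say 'IdK'.\n"

text_qa_template_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question: {query_str}\n"
) + UNKNOWN_INSTRUCTION

refine_template_str = (
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Refine the answer using the new context below if needed.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
) + UNKNOWN_INSTRUCTION

# Version-dependent; wrap the strings in Prompt and pass them in, e.g.:
# query_engine = index.as_query_engine(
#     text_qa_template=Prompt(text_qa_template_str),
#     refine_template=Prompt(refine_template_str),
# )
```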