Source Nodes

Hi, is there a parameter in the response from a query_engine that would allow me to know if the LLM has decided that the answer was present in the context? I would like to display the source nodes used to answer the query but only if the answer to this query was found in the context.
I think you can use: https://github.com/run-llama/llama-hub/tree/main/llama_hub/llama_packs/fuzzy_citation

This will help you find the exact part of the source node that was used to form the response.
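For reference, a minimal sketch of how the pack is used, going by its llama-hub README at the time (the pack name, `run` entry point, and metadata layout are taken from that README and may have changed since):

```python
from llama_index.llama_pack import download_llama_pack

# Fetch the pack from llama-hub and wrap an existing query engine
FuzzyCitationEnginePack = download_llama_pack(
    "FuzzyCitationEnginePack", "./fuzzy_citation_pack"
)
fuzzy_engine = FuzzyCitationEnginePack(index.as_query_engine())

# run() queries the engine, then fuzzy-matches response sentences back to
# sentences in the retrieved source nodes
response = fuzzy_engine.run("What did the author do growing up?")
print(response.metadata)  # the matched response/context sentence pairs live here
```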
The above should work quite well.

If not, there is also this, which gets the LLM to write an answer and also decide whether that answer satisfies the query:
https://docs.llamaindex.ai/en/stable/examples/response_synthesizers/structured_refine.html
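Here's a minimal sketch of wiring that up explicitly, assuming the 0.9-era `llama_index` imports used in that notebook (`index` is your existing index):

```python
from llama_index import get_response_synthesizer
from llama_index.query_engine import RetrieverQueryEngine

# With structured_answer_filtering=True, the refine/compact synthesizer asks the
# LLM to also report whether each candidate answer satisfies the query, and
# filters out the ones that don't
synth = get_response_synthesizer(
    response_mode="compact",
    structured_answer_filtering=True,
)
query_engine = RetrieverQueryEngine(
    retriever=index.as_retriever(),
    response_synthesizer=synth,
)
```

Note that the filtering step runs as a structured LLM program (function calling on OpenAI models, with an output-parser fallback for others), so results with open-source models can be hit or miss.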
@Logan M `structured_answer_filtering` looks the most promising! Can it be used with a query engine directly, though? My code looks like:

```python
query_engine = index.as_query_engine(
    text_qa_template=qa_template,
    response_mode="compact",
    structured_answer_filtering=True,
)
response = query_engine.query(prompt)
print(response)
```

but the `response` variable doesn't seem to contain the `query_satisfied` attribute.
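As far as I know, `query_satisfied` only exists inside the refine loop (on the intermediate structured responses) and is never attached to the final `Response` object. A workaround, and this is an assumption rather than a documented API, is to treat the empty answer that the filtering produces as "answer not found" and only show sources otherwise:

```python
response = query_engine.query(prompt)

# When every candidate answer is filtered out, the synthesizer returns an empty
# answer ("Empty Response" when printed), so key off that instead of query_satisfied
if response.response and str(response) != "Empty Response":
    print(response)
    # only display source nodes when the LLM actually found an answer
    for source_node in response.source_nodes:
        print(source_node.node.get_text()[:200])
```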
Do you know if fuzzy_citation needs a specific tokenizer, text splitter, or LLM in order to work properly? I'm using

```python
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en", max_length=512)
```

for embeddings and zephyr-7b-beta for the LLM, but the extracted parts of the source node used for the response are always a bit off (and for some prompts I get an `IndexError: list index out of range`).
It doesn't use any specific tokenizer. But I encourage you to take a look at what it's doing and modify it as you see fit.
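For what it's worth, the underlying technique is roughly: sentence-split both the response and the source nodes, then fuzzy-match the pairs. A sketch of that idea (not the pack's actual code; `rapidfuzz` and the regex splitter here are stand-ins, and the empty-split guard is where I'd start hunting for that `IndexError`):

```python
import re
from rapidfuzz import fuzz  # stand-in fuzzy matcher; the pack's dependency may differ

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; the real pack may use something smarter
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def match_citations(response_text: str, context_text: str, threshold: float = 80.0):
    # Pair each response sentence with its closest context sentence
    resp_sents = split_sentences(response_text)
    ctx_sents = split_sentences(context_text)
    if not resp_sents or not ctx_sents:
        return []  # guard against empty splits, a likely source of the IndexError
    matches = []
    for sent in resp_sents:
        best = max(ctx_sents, key=lambda c: fuzz.ratio(sent, c))
        if fuzz.ratio(sent, best) >= threshold:
            matches.append((sent, best))
    return matches
```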