Hi everyone, is there a way I can see what context was fetched and what the formatted prompt was when I run
query_engine.query(".....")
7 comments
The response object has it:

response.source_nodes
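To illustrate the shape of source_nodes: each entry pairs a retrieved node with its similarity score. This is a hypothetical sketch using mock classes (not the real llama_index objects), just to show how the fetched context can be inspected:

```python
from dataclasses import dataclass

# Stand-ins for llama_index's node-with-score objects (illustration only).
@dataclass
class FakeNode:
    text: str

@dataclass
class FakeNodeWithScore:
    node: FakeNode
    score: float

source_nodes = [
    FakeNodeWithScore(FakeNode("chunk about topic A"), 0.87),
    FakeNodeWithScore(FakeNode("chunk about topic B"), 0.64),
]

# Inspect what context was fetched, highest score first.
for n in sorted(source_nodes, key=lambda n: n.score, reverse=True):
    print(f"{n.score:.2f}: {n.node.text}")
```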
I'm doing this, but the response is a string:
Plain Text
query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    use_async=True,
)
response = query_engine.query(...)
I can almost guarantee it's not a string.

Try print(type(response)) or print(response.source_nodes)
Yeah, sorry, my mistake.
I am specifically trying to understand what the synthesize step does once the query is answered. What is the next context?

Also, once both sub-questions are answered, what are the inputs to this synthesize prompt for query_str, context_msg, and existing_answer?
Plain Text
Prompt Key: response_synthesizer:refine_template
Text:

The original query is as follows: {query_str}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer (only if needed) with some more context below.
------------
{context_msg}
------------
Given the new context, refine the original answer to better answer the query. If the context isn't useful, return the original answer.
Refined Answer: 
Attachment: CleanShot_2024-05-14_at_11.07.242x.png
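The refine template above is an ordinary prompt string, so filling it in is just string formatting. A minimal sketch, with made-up values for the three inputs:

```python
# The refine template from above, as a Python format string.
refine_template = (
    "The original query is as follows: {query_str}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better answer "
    "the query. If the context isn't useful, return the original answer.\n"
    "Refined Answer: "
)

prompt = refine_template.format(
    query_str="What is the revenue trend?",   # the user's original question
    existing_answer="Revenue grew in 2022.",  # answer from the previous LLM call
    context_msg="Revenue fell 5% in 2023.",   # the next retrieved chunk
)
print(prompt)
```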
If more context is retrieved than can fit into a single LLM call, it has to refine the answer over the remaining context, feeding in the previous answer each time.

In this case, that template was never used. Each sub-question had a single LLM call, then there was one final LLM call to "aggregate" those two responses.
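That aggregation step can be sketched as: one LLM call per sub-question, then the question/answer pairs are packed into one final synthesis prompt. A rough illustration with a stubbed LLM (prompt wording and helper names are assumptions, not the library's actual code):

```python
def call_llm(prompt: str) -> str:
    # Stub for a real LLM call.
    return f"<response to {len(prompt)}-char prompt>"

def answer_with_subquestions(query: str, sub_questions: list[str]) -> str:
    # One LLM call per sub-question...
    qa_pairs = [(sq, call_llm(f"Question: {sq}\nAnswer:")) for sq in sub_questions]
    # ...then a single final call that aggregates all the sub-answers.
    context = "\n".join(f"Sub question: {q}\nResponse: {a}" for q, a in qa_pairs)
    final_prompt = (
        f"Sub-question answers:\n{context}\n\n"
        f"Given the answers above, answer the original question: {query}\n"
    )
    return call_llm(final_prompt)
```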
I see, thanks a lot. I'm deep into LlamaIndex now, building a scalable RAG after failing with the naive RAG of the OpenAI chat retrieval plugins.