
Hi all, if I don't configure any prompt template in the query engine for a RAG pipeline, will it use a default one from here: https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/default_prompts.py ? How do I know which prompt is used?
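
For reference, a minimal sketch of how to check which prompts a query engine is actually using, assuming a standard VectorStoreIndex setup (the "data" directory is a placeholder):

```python
# Hedged sketch: inspect the prompts a query engine will use.
# These are the defaults from default_prompts.py unless overridden.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # placeholder data dir
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# get_prompts() returns a dict keyed by module, e.g.
# "response_synthesizer:text_qa_template" -> PromptTemplate
for key, prompt in query_engine.get_prompts().items():
    print(key)
    print(prompt.get_template())
```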
If response_mode == ACCUMULATE, can I use refine_template on the responses array to compose a new answer based on the results? If so, am I right that using refine_template would trigger a new call to the LLM?
accumulate does not use the refine template

It just applies the text_qa_template to every chunk and returns a concatenated list of all the responses
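
To illustrate, a minimal sketch of accumulate mode, assuming an existing index (the query string is illustrative):

```python
# Accumulate mode: the text_qa_template is applied to each retrieved chunk
# separately, and the per-chunk answers are joined into one response string.
query_engine = index.as_query_engine(response_mode="accumulate")
response = query_engine.query("What does each section say about pricing?")
print(response)  # per-chunk answers, concatenated with a separator
```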
so using ACCUMULATE, I can only take the responses and perform another query to the LLM with new instructions, or is there another way to process all the responses into a new result?
You'd have to take the result and do something else with it.

Accumulate is mostly meant for tasks where you want to apply the same query to every text chunk, and accumulate the response to each chunk
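
One way to "do something else" with the accumulated result is to send the concatenated per-chunk answers back to the LLM with new instructions. A hedged sketch, assuming the OpenAI LLM wrapper; the prompt wording is illustrative:

```python
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")
# response.response holds the concatenated per-chunk answers from accumulate
summary = llm.complete(
    "Combine the following per-chunk answers into a single coherent answer:\n\n"
    + response.response
)
print(summary)
```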