I can pass a custom refine prompt such as `CHAT_REFINE_PROMPT` when querying an index directly:

```python
index.query(..., refine_template=my_refine_template)
```

But now I'm building a chat agent:

```python
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)
```

How can I pass the `refine_template` to the chat agent?

You can add it under `"query_kwargs"` in your `query_configs`:

```python
query_configs = [
    {
        ...
        "query_kwargs": {
            ...
            "refine_template": my_refine_template
        },
        ...
    },
    ...
]
```

and then build the toolkit from those configs:

```python
toolkit = LlamaToolkit(
    index_configs=index_configs,
    graph_configs=graph_configs
)
```
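For reference, here is a fuller sketch of the wiring, assuming the 0.5.x-era LangChain helpers in `llama_index.langchain_helpers.agents` (the module path and field names may differ in other versions; `my_index`, `graph`, and `my_refine_template` are placeholders):

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from llama_index.langchain_helpers.agents import (
    GraphToolConfig,
    IndexToolConfig,
    LlamaToolkit,
    create_llama_chat_agent,
)

# The refine prompt goes inside "query_kwargs" of each graph query config.
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 3,
            "refine_template": my_refine_template,
        },
    },
]

index_configs = [
    IndexToolConfig(
        index=my_index,
        name="Vector Index",
        description="Useful for questions about the indexed documents.",
        index_query_kwargs={"similarity_top_k": 3},
        tool_kwargs={"return_direct": True},
    ),
]

graph_configs = [
    GraphToolConfig(
        graph=graph,
        name="Graph Index",
        description="Useful for questions that span multiple indices.",
        query_configs=query_configs,
        tool_kwargs={"return_direct": True},
    ),
]

toolkit = LlamaToolkit(index_configs=index_configs, graph_configs=graph_configs)

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(toolkit, llm, memory=memory, verbose=True)
```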
Note that adding `refine_template` to `index_query_kwargs` causes an error; adding it to `query_configs` works, as the sketch below contrasts.
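A short sketch of the two placements described above, under the same assumed 0.5.x helper API (field names may vary by version; `my_index`, `graph`, and `my_refine_template` are placeholders):

```python
from llama_index.langchain_helpers.agents import GraphToolConfig, IndexToolConfig

# Reportedly errors: refine_template placed directly in an
# IndexToolConfig's index_query_kwargs.
index_config = IndexToolConfig(
    index=my_index,
    name="Vector Index",
    description="Answers questions over a single index.",
    index_query_kwargs={"refine_template": my_refine_template},  # raises an error
)

# Reportedly works: refine_template nested under "query_kwargs"
# inside a graph's query_configs.
graph_config = GraphToolConfig(
    graph=graph,
    name="Graph Index",
    description="Answers questions over the composed graph.",
    query_configs=[{"query_kwargs": {"refine_template": my_refine_template}}],
)
```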
@JW .. this seems to be the same problem I'm having: the ChatGPT LLMPredictor returns responses like "...There, the answer still stands as follows: ..." and "Based on the new context, ...". It mentions the context even though my prompt refinement says "Do not mention the context". How do I solve this? I'm simply doing:

```python
llama_response = index.query(prompt, response_mode="compact", service_context=service_context)
```
Pass `refine_template=CHAT_REFINE_PROMPT` in the query call:

```python
index.query(
    query_text,
    similarity_top_k=3,
    response_mode="compact",
    text_qa_template=TEXT_QA_TEMPLATE,
    refine_template=CHAT_REFINE_PROMPT,
)
```
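For completeness, a minimal sketch of where `CHAT_REFINE_PROMPT` comes from, assuming a 0.5.x/0.6.x-era `llama_index` layout (the import path may differ in other versions; `TEXT_QA_TEMPLATE` stands in for your own QA prompt):

```python
from llama_index.prompts.chat_prompts import CHAT_REFINE_PROMPT

# CHAT_REFINE_PROMPT is a refine prompt written for chat models; the default
# refine prompt targets completion models, and chat models like gpt-3.5-turbo
# tend to leak phrases such as "Based on the new context..." with it.
llama_response = index.query(
    prompt,
    response_mode="compact",
    service_context=service_context,
    refine_template=CHAT_REFINE_PROMPT,
)
```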