KapilMalik
I am building a chat engine that should:
1) call the LLM to condense the previous chat history plus the current question into a new standalone question,
2) query the index with the condensed question to retrieve results, and
3) call the LLM with a system prompt, the condensed question, and the retrieved results.

How can I do that with llama_index? Do any of the chat modes support both the condensing step and a system prompt?
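For reference, here is a minimal sketch of this flow, assuming LlamaIndex's `condense_plus_context` chat mode, which appears to cover all three steps (condense, retrieve, then answer with a system prompt). The prompt texts and the `./data` path are placeholders, and the `{chat_history}` / `{question}` template variables follow the library's default condense prompt:

```python
# Minimal sketch: condense_plus_context chat mode.
# Assumes an LLM is configured (e.g. OPENAI_API_KEY set) and that
# "./data" is a placeholder directory of documents to index.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build an index over local documents.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    # Step 3: system prompt used when generating the final answer.
    system_prompt=(
        "You are a helpful assistant. Answer strictly from the provided context."
    ),
    # Step 1: prompt that condenses chat history + the new message
    # into a standalone question (placeholder wording).
    condense_prompt=(
        "Given the conversation below and a follow-up message, rewrite the "
        "follow-up as a standalone question.\n"
        "Chat history:\n{chat_history}\n"
        "Follow-up message: {question}\n"
        "Standalone question:"
    ),
    verbose=True,  # log the condensed question and retrieved context
)

# Steps 1-3 run on every call: condense, retrieve (step 2), then answer.
response = chat_engine.chat("And how does that compare to last year?")
print(response)
```

If only the condense-and-query steps matter, the `condense_question` chat mode routes the condensed question straight through the underlying query engine instead; in that setup a system-level instruction would live on the LLM or the query engine's prompts rather than on the chat engine.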