I am building a chat engine that should:

1. Call the LLM to condense the previous chat history plus the current question into a standalone question,
2. Query the index with the condensed question to retrieve results, and
3. Call the LLM with a system prompt, the condensed question, and the retrieved results.

How can I do that with llama_index? Do any of the chat modes support both question condensing and a system prompt?
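For reference, this is roughly the flow I am trying to replicate by hand (a minimal sketch assuming the `llama_index.core` import layout and a default LLM configured on `Settings`; the data path, prompt wording, and `similarity_top_k` value are placeholders):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.llms import ChatMessage, MessageRole

SYSTEM_PROMPT = "You are a helpful assistant. Answer only from the given context."

# Build an index over local documents (path is a placeholder)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
llm = Settings.llm  # whichever LLM is configured globally

chat_history_str = "user: What is llama_index?\nassistant: A framework for RAG."
user_question = "Does it support chat engines?"

# 1) Condense chat history + current question into a standalone question
condense_prompt = (
    "Given the conversation below and a follow-up question, rewrite the "
    "follow-up as a standalone question.\n\n"
    f"Chat history:\n{chat_history_str}\n\n"
    f"Follow-up question: {user_question}\n"
    "Standalone question:"
)
condensed_question = llm.complete(condense_prompt).text.strip()

# 2) Query the index with the condensed question
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve(condensed_question)
context_str = "\n\n".join(n.node.get_content() for n in nodes)

# 3) Final LLM call with system prompt, condensed question, and results
messages = [
    ChatMessage(role=MessageRole.SYSTEM, content=SYSTEM_PROMPT),
    ChatMessage(
        role=MessageRole.USER,
        content=f"Context:\n{context_str}\n\nQuestion: {condensed_question}",
    ),
]
response = llm.chat(messages)
print(response.message.content)
```

Ideally I would rather use a built-in chat mode that handles all three steps than wire them up manually like this.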