the composable graph code is not really updated anymore -- we need to either deprecate it or refactor some existing features to replace it
In any case, you can create a chat engine directly instead of using as_chat_engine().
For example:
from llama_index.chat_engine import CondenseQuestionChatEngine

# query_engine here is the query engine you already built from your index
chat_engine = CondenseQuestionChatEngine.from_defaults(query_engine=query_engine)
Although tbh, I'm not sure if 0.6.20 even has the chat engine
I guess you'll find out
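If it does turn out to be there, here's a rough end-to-end sketch of the full setup -- the 0.6.x-style imports, the "data" folder, and the sample question are all assumptions, so swap in whatever you already have:

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.chat_engine import CondenseQuestionChatEngine

# Build an index over your documents and pull a plain query engine from it
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap the query engine so each follow-up message is condensed into a
# standalone question before it hits the index
chat_engine = CondenseQuestionChatEngine.from_defaults(query_engine=query_engine)
response = chat_engine.chat("What does the document say about X?")
print(response)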