Hi everyone, I've seen the condense question + context chat mode (`condense_plus_context`) for the chat engine in the LlamaIndex docs, but the examples only use OpenAI. Can anyone tell me whether the same thing is achievable with an Anthropic LLM and LlamaIndex?
I'm not using embeddings or any vector stores yet; I'm new to LLMs and have built a lot of basic stuff without adding complexity. But I'd love any suggestions and new things to learn. I've put a rough sketch of what I'm picturing below.
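For context, here's roughly what I'm imagining, an untested sketch based on the OpenAI examples in the docs. I'm assuming the Anthropic LLM can just be dropped in wherever the docs pass OpenAI, and the `./data` path and model name are placeholders:

```python
# Untested sketch -- assuming an Anthropic LLM can be swapped in
# wherever the docs use OpenAI. Paths and model name are placeholders.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-5-sonnet-20241022")  # placeholder model name

# Not sure about this part: I haven't used embeddings yet, and I believe
# VectorStoreIndex pulls in an embedding model by default.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# The mode I'm asking about, but with the Anthropic LLM instead of OpenAI.
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    llm=llm,
)
response = chat_engine.chat("What does the doc say about X?")
print(response)
```

Does passing the LLM this way work, or does `condense_plus_context` assume OpenAI somewhere under the hood?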