
Updated 3 months ago

Hi everyone, I have seen the condense question + context mode for the chat engine in the LlamaIndex docs, but the example only uses OpenAI.
Can somebody suggest whether it's achievable with an Anthropic LLM and LlamaIndex?

I'm not using embeddings or any vector stores as of now, since I'm new to LLMs and have built a lot of basic stuff without adding any complexity. But I would love any suggestions and new things to learn.
You can use any LLM 👀 But to do retrieval, you'll also need to use embeddings.
We support a ton of different embedding and LLM providers.
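A minimal sketch of what that looks like, assuming the `llama-index` core package plus the Anthropic LLM and HuggingFace embedding integrations are installed, and assuming `ANTHROPIC_API_KEY` is set in the environment. The `./data` directory, the Claude model name, and the `BAAI/bge-small-en-v1.5` embedding model are placeholders you'd swap for your own:

```python
# Sketch: condense_plus_context chat mode with an Anthropic LLM in LlamaIndex.
# Requires: pip install llama-index llama-index-llms-anthropic llama-index-embeddings-huggingface
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.anthropic import Anthropic

# Swap OpenAI for Anthropic globally; use a local embedding model for retrieval
# so no OpenAI key is needed at all.
Settings.llm = Anthropic(model="claude-3-haiku-20240307")  # placeholder model name
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load documents and build a simple in-memory vector index.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# The chat mode from the docs is provider-agnostic; it condenses the question,
# retrieves context via the embeddings, then answers with the configured LLM.
chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
response = chat_engine.chat("What do my documents say about chat engines?")
print(response)
```

The key point is that the chat mode itself doesn't care which LLM you use; it only needs *some* embedding model configured so retrieval works.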