Hi everyone, does LlamaIndex use semantic router to route queries, or does it still use LLM generations to make tool-use decisions? Semantic router is a lot faster, I guess. Wondering if LlamaIndex has an integration for it: https://github.com/aurelio-labs/semantic-router
It's the same idea, so the performance will be similar, yes. Semantic router lets you give multiple example utterances per route, though, while the LlamaIndex router's embedding-based selector embeds a single description per choice.
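For illustration, here is a minimal sketch of the two approaches side by side. It assumes the `EmbeddingSingleSelector` from `llama-index-core` and the `Route`/`RouteLayer`/`OpenAIEncoder` API from `semantic-router`; exact class names and import paths may differ across versions (newer semantic-router releases have renamed some of these), and the tool names and example utterances are made up.

```python
# Sketch: embedding-based routing in LlamaIndex vs semantic-router.
# Assumes llama-index-core and semantic-router are installed and an
# OpenAI API key is available; names may vary by library version.

# --- LlamaIndex: selector embeds the query and ONE description per choice ---
from llama_index.core.selectors import EmbeddingSingleSelector
from llama_index.core.tools import ToolMetadata

selector = EmbeddingSingleSelector.from_defaults()  # default embedding model
choices = [
    ToolMetadata(name="sql", description="Answers questions about sales numbers"),
    ToolMetadata(name="docs", description="Answers questions about product manuals"),
]
result = selector.select(choices, query="How many units did we sell in March?")
print(result.selections)  # index of the best-matching choice

# --- semantic-router: each route carries MULTIPLE example utterances ---
from semantic_router import Route, RouteLayer
from semantic_router.encoders import OpenAIEncoder

sql_route = Route(
    name="sql",
    utterances=[
        "how many units did we sell in March?",
        "show me revenue by region",
    ],
)
docs_route = Route(
    name="docs",
    utterances=[
        "how do I reset the device?",
        "where is the warranty information?",
    ],
)
layer = RouteLayer(encoder=OpenAIEncoder(), routes=[sql_route, docs_route])
print(layer("show me revenue by region").name)  # -> "sql"
```

Either way, routing is a single embedding comparison rather than an LLM generation, which is where the speed difference over an LLM-based selector comes from; the extra utterances in semantic-router mainly help cover more phrasings of the same intent.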