Hi everyone, does LlamaIndex use semantic router to route queries, or does it still use LLM generations to make tool-use decisions? Semantic router is a lot faster, I guess. Wondering if LlamaIndex has an integration for it.
https://github.com/aurelio-labs/semantic-router
No integration with semantic router, but we do have a router based on embeddings
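For context, a minimal sketch of that embedding-based router (assuming llama_index >= 0.10 and an embedding model configured in the environment; the documents, tool descriptions, and query below are made up for illustration):

```python
# Sketch of LlamaIndex's embedding-based routing, not a drop-in recipe.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import EmbeddingSingleSelector
from llama_index.core.tools import QueryEngineTool

# Two toy indexes standing in for real data sources.
docs_engine = VectorStoreIndex.from_documents(
    [Document(text="Product docs: installation, configuration, API reference.")]
).as_query_engine()
sales_engine = VectorStoreIndex.from_documents(
    [Document(text="Sales data: quarterly revenue and regional breakdowns.")]
).as_query_engine()

docs_tool = QueryEngineTool.from_defaults(
    query_engine=docs_engine,
    description="Answers questions about the product documentation.",
)
sales_tool = QueryEngineTool.from_defaults(
    query_engine=sales_engine,
    description="Answers questions about sales figures and revenue.",
)

# The selector embeds each tool description and picks the tool whose description
# is closest to the query embedding -- no LLM call is needed to choose the route.
router = RouterQueryEngine(
    selector=EmbeddingSingleSelector.from_defaults(),
    query_engine_tools=[docs_tool, sales_tool],
)
print(router.query("What was revenue last quarter?"))
```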
imo embedding-based routing is always going to be less reliable and more time-consuming to configure
Semantic router also uses embeddings, so there's no difference between the LlamaIndex router and semantic router in terms of performance?
It's the same idea, so the performance will be similar, yes. Semantic router lets you give multiple examples per route though, while the LlamaIndex router uses an embedding of a single description per tool.
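The multiple-examples part looks roughly like this in semantic-router (a sketch of its README-style usage at the time; the route names, utterances, and encoder choice are illustrative, and import paths may differ between versions):

```python
# Sketch of semantic-router's multi-utterance routes, assuming the RouteLayer API
# and an OpenAI key in the environment for the encoder.
from semantic_router import Route
from semantic_router.encoders import OpenAIEncoder
from semantic_router.layer import RouteLayer

politics = Route(
    name="politics",
    utterances=[
        "who should I vote for",
        "what do you think about the election",
        "tell me about the new policy proposal",
    ],
)
chitchat = Route(
    name="chitchat",
    utterances=[
        "how's the weather today",
        "what's your favourite movie",
    ],
)

# Each route gets several example utterances embedded up front, rather than a
# single description as in the LlamaIndex embedding selector.
layer = RouteLayer(encoder=OpenAIEncoder(), routes=[politics, chitchat])
print(layer("don't you love politics?").name)  # expected: "politics"
```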
What's the best routing approach, if not embeddings?
Probably using an LLM to route. Slower, but more generalizable (assuming you have a decent LLM to use)
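In LlamaIndex that just means swapping the selector; a sketch reusing the tools from the earlier snippet (LLMSingleSelector is the LLM-based selector, and the query is made up):

```python
# Sketch: same router, but the selector asks an LLM to read the tool descriptions
# and the query and pick a tool -- slower per query, but it can handle phrasing
# that a pure embedding match would miss.
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector

llm_router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[docs_tool, sales_tool],  # tools from the earlier sketch
)
print(llm_router.query("Compare last quarter's revenue to what the docs promise."))
```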
Does OpenAI function calling use an LLM to do the routing?
Which routers in LlamaIndex use embeddings and which use LLM routing?