----------------------
The LLM selectors and Pydantic selectors in LlamaIndex differ in how they obtain a selection from the model.
LLM selectors prompt a text completion endpoint to produce raw JSON, which is then parsed to determine which index (or indexes) to query.
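To make the parsing step concrete, here is a minimal sketch of turning a raw completion into a selection. The exact prompt and JSON schema that LlamaIndex uses internally may differ; the field names below are illustrative.

```python
import json

# Hypothetical raw output from a text completion endpoint; the real
# format produced by LlamaIndex's LLM selectors may differ.
raw_completion = '{"choice": 2, "reason": "The question asks for a comparison."}'

def parse_selection(text: str) -> dict:
    """Parse the model's JSON output into a selection dict."""
    selection = json.loads(text)
    # Validate the field we rely on before routing to an index.
    if not isinstance(selection.get("choice"), int):
        raise ValueError("LLM output missing integer 'choice' field")
    return selection

selection = parse_selection(raw_completion)
print(selection["choice"])  # index of the query engine to route to
```

The fragility here, a malformed or non-JSON completion raises an error, is exactly what the function-calling approach below avoids.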
On the other hand, Pydantic selectors (currently supported only by gpt-4-0613 and gpt-3.5-turbo-0613, the default) use the OpenAI Function Call API to produce pydantic selection objects directly, rather than parsing raw JSON. The pydantic schema is passed to the function calling API, and the structured response is validated into a selection object used to query the indexes.
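As a rough sketch of the difference: with function calling, the API returns structured arguments that are validated into a typed object instead of free-form text. The class and field names below are illustrative (a stdlib dataclass stands in for the pydantic model), not LlamaIndex's actual internals.

```python
import json
from dataclasses import dataclass

@dataclass
class SingleSelection:
    """Typed selection object, analogous to the pydantic model the
    selector validates against (field names are illustrative)."""
    index: int
    reason: str

# With function calling, the API returns the arguments as a JSON string
# conforming to the supplied schema, e.g. in function_call.arguments.
function_call_arguments = '{"index": 0, "reason": "Best match for the query."}'
selection = SingleSelection(**json.loads(function_call_arguments))
print(selection.index)  # index of the query engine to route to
```

Because the model is constrained to the schema, there is no brittle free-text parsing step.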
Here is an example of how you might define each type of selector:
from llama_index.core.selectors import LLMSingleSelector, LLMMultiSelector
from llama_index.core.selectors import (
    PydanticMultiSelector,
    PydanticSingleSelector,
)

# LLM selectors use text completion endpoints
selector = LLMSingleSelector.from_defaults()   # single selector (LLM)
selector = LLMMultiSelector.from_defaults()    # multi selector (LLM)

# pydantic selectors feed pydantic schemas to a function calling API
selector = PydanticSingleSelector.from_defaults()  # single selector (pydantic)
selector = PydanticMultiSelector.from_defaults()   # multi selector (pydantic)
As shown above, each type of selector comes in a single variant (route to exactly one index) and a multi variant (route to one or more indexes).
Sources:
- LlamaIndex Docs
- LlamaIndex Module Guides
----------------------