
Hi! New to the LlamaIndex Discord. Running into issues using LLMSingleSelector (running v0.8.8). Code and a snippet of the traceback below. Any help appreciated, thank you!

from llama_index.tools import ToolMetadata
from llama_index.selectors.llm_selectors import LLMSingleSelector


# choices as a list of tool metadata
choices = [
    ToolMetadata(description="I am choice 1", name="choice_1"),
    ToolMetadata(description="I am choice 2", name="choice_2"),
]

# choices can also be given as plain strings
# (note: this assignment replaces the ToolMetadata list above)
choices = ["choice 1 - description for choice 1", "choice 2: description for choice 2"]

selector = LLMSingleSelector.from_defaults()
selector_result = selector.select(choices, query="What choices do I have?")
print(selector_result.selections)


File ~/.pyenv/versions/llm-3.11.3/lib/python3.11/site-packages/llama_index/output_parsers/selection.py:56, in <listcomp>(.0)
     54 if isinstance(json_output, dict):
     55     json_output = [json_output]
---> 56 answers = [Answer.from_dict(json_dict) for json_dict in json_output]
     57 return StructuredOutput(raw_output=output, parsed_output=answers)

File ~/.pyenv/versions/llm-3.11.3/lib/python3.11/site-packages/dataclasses_json/api.py:70, in DataClassJsonMixin.from_dict(cls, kvs, infer_missing)
     65 @classmethod
     66 def from_dict(cls: Type[A],
     67               kvs: Json,
     68               *,
     69               infer_missing=False) -> A:
---> 70     return _decode_dataclass(cls, kvs, infer_missing)

File ~/.pyenv/versions/llm-3.11.3/lib/python3.11/site-packages/dataclasses_json/core.py:168, in _decode_dataclass(cls, kvs, infer_missing)
    165 if not field.init:
    166     continue
--> 168 field_value = kvs[field.name]
    169 field_type = types[field.name]
    170 if field_value is None:

KeyError: 'choice'
This is because the LLM didn't output proper JSON
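To make the failure concrete, here is a minimal, self-contained sketch (hypothetical names, plain dataclasses instead of dataclasses_json) of what the selection output parser in the traceback is doing: it expects the LLM to return JSON objects shaped like an `Answer`, with `choice` and `reason` keys, and a free-form reply that lacks a `choice` key raises exactly the `KeyError: 'choice'` seen above.

```python
import json
from dataclasses import dataclass


@dataclass
class Answer:
    choice: int
    reason: str


def parse_selection(llm_output: str) -> list[Answer]:
    """Parse an LLM reply into Answer objects, mimicking the parser's shape."""
    json_output = json.loads(llm_output)
    if isinstance(json_output, dict):
        json_output = [json_output]
    # Each dict must contain "choice" and "reason" keys; anything else
    # raises KeyError, as in the traceback above.
    return [Answer(choice=d["choice"], reason=d["reason"]) for d in json_output]


# Well-formed output parses fine:
good = '[{"choice": 1, "reason": "matches the query"}]'
print(parse_selection(good)[0].choice)  # 1

# A malformed reply reproduces the failure:
bad = '[{"answer": "both choices apply"}]'
try:
    parse_selection(bad)
except KeyError as exc:
    print(exc)  # 'choice'
```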

Are you using a local LLM? If not, I highly recommend just using the pydantic selector

Further, the usage is not quite correct -- selectors should take a query and return the choice that best matches the query

With your query right now, there's no clear answer, which is probably also why it failed πŸ€”

In llama-index, selectors are used in routers, which control which index a query gets sent to
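The selector/router pattern described above can be sketched with a toy stand-in (all names hypothetical; a keyword-overlap score replaces the LLM): the selector maps a query to the single best-matching choice, and a router would then dispatch the query to that choice's index.

```python
def select(choices: dict[str, str], query: str) -> str:
    """Return the key of the choice whose description best matches the query."""
    query_words = set(query.lower().split())

    def overlap(desc: str) -> int:
        # Count shared words between the description and the query.
        return len(set(desc.lower().split()) & query_words)

    return max(choices, key=lambda name: overlap(choices[name]))


choices = {
    "product_index": "Questions about product features and pricing",
    "firmware_index": "Questions about firmware versions and upgrades",
}

# An unambiguous query routes cleanly to one index:
print(select(choices, "How do I upgrade the firmware"))  # firmware_index

# A query like "What choices do I have?" matches neither description,
# which is the kind of ambiguity that trips up the real LLM selector.
```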
Thanks. Using "gpt-3.5-turbo". I am using the RouterQueryEngine with LLMSingleSelector, but kept bumping into the above issue. The above snippet helped narrow it down.

Is it due to differences in the JSON response from OpenAI? Should I try a different LLM than 3.5-turbo?

Will try the Pydantic selector.
I think the main error here is that the query you are asking the selector does not work for selecting a single index

For example, that query kind of implies that both selections should be selected.

There is also a multi selector (for pydantic and normal LLMs) that you may find helpful
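Continuing the toy stand-in from above (hypothetical names, keyword overlap in place of the LLM), a multi selector differs from the single selector only in that it returns every matching choice rather than just the best one:

```python
def multi_select(choices: dict[str, str], query: str) -> list[str]:
    """Return all choices whose descriptions share at least one word with the query."""
    query_words = set(query.lower().split())
    return [
        name
        for name, desc in choices.items()
        if set(desc.lower().split()) & query_words
    ]


choices = {
    "choice_1": "apples fruit",
    "choice_2": "oranges citrus",
    "choice_3": "cars trucks",
}

# A query that spans two topics selects both matching choices:
print(multi_select(choices, "apples and oranges please"))
# ['choice_1', 'choice_2']
```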
Thank you. Trying the PydanticSingleSelector with the same gpt-3.5-turbo, temperature=0, but it appears to pick a different choice between the notebook and the IDE.

Is there any non-determinism that needs to be controlled? Thank you!

router_engine = RouterQueryEngine.from_defaults(
    selector=PydanticSingleSelector.from_defaults(
        llm=OpenAI(temperature=0, model="gpt-3.5-turbo"),
        verbose=True,
    ),
    query_engine_tools=[
        product_tool,
        fw_tool,
        devices_tool,
    ],
)
Even when you set the temperature to zero, LLMs are not entirely deterministic 🤔 My only advice is to write better descriptions for your tools so that the LLM has an easier time deciding which tool to select
It may also depend on the type of query you are asking -- if that query matches any one of the tools, you may see some non-determinism