@Logan M getting this error when running LLMQuestionGenerator:
Plain Text
  SubQuestion expected dict not str (type=type_error)


from this code:
Plain Text
# Imports assume llama_index 0.10+ package layout
from llama_index.core import QueryBundle, Settings
from llama_index.core.question_gen import LLMQuestionGenerator
from llama_index.core.tools import ToolMetadata

query = input("Query: ")

question_gen = LLMQuestionGenerator.from_defaults(llm=Settings.llm)

tool_choices = [
    ToolMetadata(
        name="vyos",
        description="Provides information about VyOS and its contents",
    ),
    ToolMetadata(
        name="ubuntu",
        description="Provides information about Ubuntu and its contents",
    ),
]

# Generate sub-questions
query_bundle = QueryBundle(query_str=query)
choices = question_gen.generate(tool_choices, query_bundle)
for choice in choices:
    print(choice)
4 comments
What LLM are you using? It didn't generate a valid object for parsing.
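For context, a minimal sketch of why this error appears. The question generator prompts the LLM for a JSON list of sub-question objects and validates each one with pydantic; if the model emits bare strings instead of objects, validation fails with `SubQuestion expected dict not str`. The field names and validation logic below are illustrative assumptions, not the library's actual code:

```python
import json

# Hypothetical shape of the LLM output the generator expects (field names
# are assumptions for illustration):
good_output = '[{"sub_question": "What is VyOS?", "tool_name": "vyos"}]'
# A weaker model may emit bare strings instead of objects:
bad_output = '["What is VyOS?"]'

def validate(raw: str) -> list[dict]:
    """Mimic the dict-vs-str check that trips the pydantic error."""
    items = json.loads(raw)
    for item in items:
        if not isinstance(item, dict):
            raise TypeError("SubQuestion expected dict not str")
    return items

validate(good_output)  # parses fine
```

Calling `validate(bad_output)` raises the same "expected dict not str" complaint, which is why the fix below targets the model's output format rather than the calling code.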
Ahhh alright, got it. I'll explore options if it's the LLM causing the problem.
@Logan M I'm using Ollama but the problem persists. Is there something wrong on my side then? I'm using llama3.
Llama3 could very likely just not be following instructions. You could try setting it to JSON mode for the question generator:

question_gen = LLMQuestionGenerator.from_defaults(llm=Ollama(..., json_mode=True))