I was following the https://docs.llamaindex.ai/en/stable/examples/query_engine/sub_question_query_engine.html tutorial and received this error output:
Plain Text
**********
Trace: query
    |_query ->  6.062075 seconds
      |_templating ->  0.0 seconds
      |_llm ->  6.062075 seconds
**********
Traceback (most recent call last):
  File "S:\Gemini-Coder\local-indexer\cmd_local_index_chat.py", line 83, in <module>
    respnose = query_engine.query(
  File "C:\Users\thecr\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\base_query_engine.py", line 40, in query
    return self._query(str_or_query_bundle)
  File "C:\Users\thecr\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\query_engine\sub_question_query_engine.py", line 129, in _query
    sub_questions = self._question_gen.generate(self._metadatas, query_bundle)
  File "C:\Users\thecr\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\question_gen\llm_generators.py", line 78, in generate
    parse = self._prompt.output_parser.parse(prediction)
  File "C:\Users\thecr\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\question_gen\output_parser.py", line 13, in parse
    raise ValueError(f"No valid JSON found in output: {output}")
ValueError: No valid JSON found in output:   Understood! I'll do my best to help you with your questions and provide relevant sub-questions based on the tools provided. Please go ahead and ask your user question, and I'll generate the list of sub-questions accordingly.

I am using a local embedding model and a local language model, but I kept everything else the same. I didn't read anything about linking a JSON file in that doc.
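For context, here's roughly how I wired it up. This is a trimmed-down sketch, not my exact cmd_local_index_chat.py: the data path and the embedding model name are placeholders, and the imports follow the 0.9.x-era module paths shown in the traceback.
Python
# Trimmed-down sketch of the setup (llama-index 0.9.x-era imports, matching
# the traceback paths). The "./data" path and embedding model are placeholders.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.tools import QueryEngineTool, ToolMetadata

# Local language model and local embedding model in place of the OpenAI defaults.
llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
)
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model="local:BAAI/bge-small-en-v1.5",
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Everything else follows the tutorial: wrap the index in a tool and hand the
# tool list to the sub-question engine.
query_engine_tools = [
    QueryEngineTool(
        query_engine=index.as_query_engine(),
        metadata=ToolMetadata(name="docs", description="My local documents"),
    )
]
query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    service_context=service_context,
)
response = query_engine.query("...")  # <- raises the ValueError above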
7 comments
The model flat-out refused to write any JSON lol
Your local LLM might be quite, hmm, what's a diplomatic way of putting it? "Stupid" is the first word that comes to mind. I hope your LLM doesn't get offended.
Have you checked out the list of open-source LLMs that have been tested with Llama Index before deciding to wire Llama Index to your locally-served inference server?
https://docs.llamaindex.ai/en/stable/module_guides/models/llms.html#open-source-llms
I've been using the zephyr-7b-beta model in that list, and it has been working OK for me.
If that still doesn't give you any JSON output, you might wanna override the prompt and emphasize that it must output JSON, as in the sketch below.
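Something like this, for example. It's a rough sketch against the 0.9.x API: the instruction wording is just an illustration, and service_context / query_engine_tools are whatever you already built. Note it overrides the question-generation prompt (the step that actually has to emit JSON) rather than the model's chat system prompt.
Python
# Sketch: prepend a strongly worded instruction to the default sub-question
# prompt so a weaker local model is pushed toward raw JSON output.
# (llama-index 0.9.x-era API; the instruction wording is just an example.)
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.question_gen.llm_generators import LLMQuestionGenerator
from llama_index.question_gen.prompts import DEFAULT_SUB_QUESTION_PROMPT_TMPL

question_gen = LLMQuestionGenerator.from_defaults(
    service_context=service_context,  # the one wrapping your local LLM
    prompt_template_str=(
        "Output ONLY the JSON described below. Do not add any preamble, "
        "explanation, or other prose.\n\n" + DEFAULT_SUB_QUESTION_PROMPT_TMPL
    ),
)

query_engine = SubQuestionQueryEngine.from_defaults(
    question_gen=question_gen,
    query_engine_tools=query_engine_tools,  # from your existing setup
    service_context=service_context,
)
If you'd rather set an actual system prompt on the model itself, HuggingFaceLLM also accepts a system_prompt argument, which may be enough on its own for some models.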
I've been using Llama-2-7b-chat-hf, completely forgot about that list
I'll give zephyr-7b-alpha a shot