llama_index output parsers base

llama_index.output_parsers.base.OutputParserException: Got invalid JSON object. Error: Extra data: line 15 column 1 (char 423) while scanning for the next token
found character '`' that cannot start any token
  in "<unicode string>", line 15, column 1
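The stray `'`'` in the error suggests the model wrapped its JSON reply in a markdown code fence, which the YAML/JSON scanner then chokes on. A minimal, library-independent sketch of stripping such fences before parsing (the helper name is hypothetical, and llama_index's actual parser works differently):

```python
import json
import re

def strip_markdown_fences(text: str) -> str:
    """Remove a surrounding ``` fence (with optional language tag) if present."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    return match.group(1) if match else text.strip()

# Typical reply from a chatty model: valid JSON, but wrapped in a fence.
raw_output = '```json\n{"sub_questions": ["What is X?", "What is Y?"]}\n```'

parsed = json.loads(strip_markdown_fences(raw_output))
print(parsed["sub_questions"])  # ['What is X?', 'What is Y?']
```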
7 comments
hmmm that's kind of weird. It should be parsing out the json properly
seems correct in the code, maybe it's because I'm using offline Llama-2 through HuggingFaceLLM, if that has anything to do with it? I'm now also having trouble with RouterQueryEngine invoking the LLMSingleSelector using an LLMPredictor made from this HuggingFaceLLM. Are there any tutorials or instructions on how to use LlamaIndex with Llama-2 and these nice features like subquery/router query engines?
Usually the tools that use output parsing (like all the ones you listed here) require pretty smart LLMs. In general, open-source LLMs are not great for this yet πŸ˜…

You can customize the underlying selectors/question generators, but there's no tutorial (yet) on that. You kind of have to track things down in the code and subclass the respective generator/selector so that it works in a way you want
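One direction a custom selector can take is to avoid depending on the LLM emitting JSON at all. The toy sketch below is not the llama_index `BaseSelector` API; it only illustrates the deterministic routing logic you might put inside a subclass when the model's structured output can't be trusted:

```python
def keyword_select(query: str, tool_descriptions: list[str]) -> int:
    """Pick the tool whose description shares the most words with the query.

    A deterministic stand-in for an LLM-driven selector: no JSON parsing,
    so a weak model can't break it.
    """
    query_words = set(query.lower().split())
    scores = [
        len(query_words & set(desc.lower().split()))
        for desc in tool_descriptions
    ]
    return max(range(len(scores)), key=scores.__getitem__)

tools = [
    "useful for questions about sales figures",
    "useful for questions about engineering documents",
]
print(keyword_select("what were last quarter's sales figures?", tools))  # 0
```

Simple word overlap like this is crude, but it makes a reasonable fallback when the JSON-based selection fails.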
yeah, unfortunately I've just noticed this. btw, there used to be a class called HuggingFaceLLMPredictor that was deprecated, but the migration guide is gone now (404 link). Do you know if there is any way to initialize LLMPredictor to treat an input HuggingFaceLLM as an LLM it can predict with? It's giving me empty JSONs as output. Maybe it's the LLM or maybe it's the predictor.
ah, actually you can create a HuggingFaceLLM and pass it in directly, that's basically the extent of the migration

Plain Text
from llama_index import ServiceContext
from llama_index.llms import HuggingFaceLLM

llm = HuggingFaceLLM(...)
service_context = ServiceContext.from_defaults(llm=llm)


Not sure what you mean by "special LLM" though πŸ€”
Llama 7B was just too weak indeed: it generated JSONs however it wanted and not according to the LLMSingleSelector instructions. Llama 13B handles it well enough. So that was actually the reason behind both the subquestion query engine and router query engine problems.
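When a weak model emits JSON "however it wants", a cheap guard is to validate the reply before using it and fall back (or retry) otherwise. A minimal sketch; the `{"choice": <int>}` shape is illustrative, not the actual llama_index selector schema:

```python
import json

def parse_selection(raw: str, num_choices: int):
    """Parse a selector-style JSON reply; return None if it is unusable.

    Returning None lets the caller retry with a stricter prompt or fall
    back to a deterministic default instead of crashing on bad output.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    choice = data.get("choice") if isinstance(data, dict) else None
    if isinstance(choice, int) and 0 <= choice < num_choices:
        return choice
    return None

print(parse_selection('{"choice": 1}', 3))  # 1
print(parse_selection('{"pick": "B"}', 3))  # None -> caller can retry or default
```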