Hi, I got this error when running the sub-question query engine. I tried to debug it on my own, but it seems like global_stack_trace becomes empty after the first sub-question query completes. No idea how to go about fixing it.
```
Traceback (most recent call last):
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/indices/query/base.py", line 23, in query
    response = self._query(str_or_query_bundle)
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/query_engine/sub_question_query_engine.py", line 142, in _query
    qa_pairs_all = [
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/query_engine/sub_question_query_engine.py", line 143, in <listcomp>
    self._query_subq(sub_q, color=colors[str(ind)])
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/query_engine/sub_question_query_engine.py", line 238, in _query_subq
    with self.callback_manager.event(
  File "/opt/conda/envs/xxx/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/callbacks/base.py", line 169, in event
    event.on_start(payload=payload)
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/callbacks/base.py", line 242, in on_start
    self._callback_manager.on_event_start(
  File "/opt/conda/envs/xxx/lib/python3.10/site-packages/llama_index/callbacks/base.py", line 105, in on_event_start
    parent_id = global_stack_trace.get()[-1]
IndexError: list index out of range
```
Did anybody have any luck using StarChat as the LLM in a query engine? Given the same query, StarChat doesn't answer the question at all, while the OpenAI model does.
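For reference, here's roughly how I'm wiring it up — a sketch, assuming the missing piece is StarChat's chat template (the query_wrapper_prompt handling is an assumption; on older llama_index versions it has to be a SimpleInputPrompt rather than a raw string):

```python
from llama_index import ServiceContext
from llama_index.llms import HuggingFaceLLM

# StarChat is instruction-tuned on a specific chat template; without the
# <|system|>/<|user|>/<|assistant|> markers it tends to ramble instead of
# answering the query, so the query gets wrapped in that template here.
llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/starchat-beta",
    tokenizer_name="HuggingFaceH4/starchat-beta",
    context_window=8192,
    max_new_tokens=256,
    query_wrapper_prompt="<|system|>\n<|end|>\n<|user|>\n{query_str}<|end|>\n<|assistant|>",
    device_map="auto",
)

# Pass this service_context when building the index / query engine.
service_context = ServiceContext.from_defaults(llm=llm)
```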
I would like to create a custom agent using a local LLM. I have already implemented a local LLM using the CustomLLM class. Is there a similar guide on implementing a custom agent?
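In case it clarifies what I'm after, here's a minimal sketch of the setup I imagine — assuming ReActAgent accepts a custom LLM (my_llm and my_query_engine are placeholders for my existing objects):

```python
from llama_index.agent import ReActAgent
from llama_index.tools import QueryEngineTool, ToolMetadata

# my_query_engine is any existing query engine exposed to the agent as a tool.
tools = [
    QueryEngineTool(
        query_engine=my_query_engine,
        metadata=ToolMetadata(
            name="docs",
            description="Useful for answering questions about my documents.",
        ),
    )
]

# ReActAgent drives tool use through plain-text prompting, so it should work
# with local models that don't support OpenAI-style function calling.
agent = ReActAgent.from_tools(tools, llm=my_llm, verbose=True)
response = agent.chat("What do the docs say about X?")
```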
Hello, did anybody else encounter an error with the guidance question generator for the sub-question query engine?
Example of error:
```
    raise OutputParserException(
llama_index.output_parsers.base.OutputParserException: Got invalid JSON object. Error: Expecting property name enclosed in double quotes: line 2 column 14 (char 15)
while parsing a flow mapping
  in "<unicode string>", line 2, column 14:
        "items": [{{#geneach 'items' stop=']'}}{{#u ...
                 ^
expected ',' or '}', but got '<scalar>'
  in "<unicode string>", line 3, column 48:
     ... ": "{{gen 'sub_question' stop='"'}}",
```
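For context, I set it up following the docs example (written against the old guidance<0.1 API) — roughly:

```python
from guidance.llms import OpenAI as GuidanceOpenAI
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.question_gen.guidance_generator import GuidanceQuestionGenerator

# Guidance constrains the LLM output so the sub-question list is always
# valid JSON -- when it works, that is.
question_gen = GuidanceQuestionGenerator.from_defaults(
    guidance_llm=GuidanceOpenAI("text-davinci-003"),
    verbose=False,
)
engine = SubQuestionQueryEngine.from_defaults(
    question_gen=question_gen,
    query_engine_tools=query_engine_tools,  # placeholder: your existing tools
)
```

Judging by the unfilled {{#geneach ...}} handlebars in the message, it looks like the guidance program never actually ran, so the raw template got parsed as JSON — maybe a guidance version mismatch?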
Anybody else facing inconsistency issues with OpenAI models? I've been using the sub-question query engine, which is supposed to route questions to different tools, but this morning it started routing all queries to only one tool, despite my not having changed the prompt or data.
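For what it's worth, I'm going to try pinning a dated model snapshot with temperature=0 to rule out a silent server-side update of the floating alias — a sketch (the snapshot name is just an example):

```python
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# A dated snapshot instead of the floating "gpt-3.5-turbo" alias, plus
# temperature=0, reduces (but doesn't eliminate) run-to-run variation in
# how sub-questions get routed to tools.
llm = OpenAI(model="gpt-3.5-turbo-0613", temperature=0)

# Pass this service_context when building the sub-question query engine.
service_context = ServiceContext.from_defaults(llm=llm)
```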
Hi guys, a really newbie question, but is there any way to get a query engine to assume a certain persona? In particular, a sub-question query engine. I tried including the persona in the query itself, but the LLM doesn't follow the instruction well. I was wondering if anyone has had success providing a custom text_qa_template to the response synthesizer for a custom persona.
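Something like this is what I had in mind — a sketch assuming a 0.8-era llama_index (on older versions, Prompt replaces PromptTemplate; the persona text and query_engine_tools are placeholders):

```python
from llama_index.prompts import PromptTemplate
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.response_synthesizers import get_response_synthesizer

# Baking the persona into the QA prompt itself, so every synthesis call
# sees it, instead of relying on the user query to carry the instruction.
persona_qa_tmpl = PromptTemplate(
    "You are Marvin, a gloomy but brilliant research assistant. Stay in character.\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query "
    "in Marvin's voice.\n"
    "Query: {query_str}\n"
    "Answer: "
)

synth = get_response_synthesizer(text_qa_template=persona_qa_tmpl)
engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,  # placeholder: your existing tools
    response_synthesizer=synth,
)
```

(If I understand the engine right, the sub-question answers still come from each tool's own query engine, so this would mainly shape the final synthesized answer.)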