Does anyone know how to fix the maximum recursion depth exceeded in comparison error in DocumentsSummaryIndex?
Plain Text
---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
Cell In[4], line 87
     84 _, _, scope_id = question
     85 if not '>' in scope_id or is_child_of_yes_scope(scope_id):
     86     #print(f"query scope_id={scope_id}, {question}")
---> 87     result: Result = query_engine_manager.execute_questions([question], None, None)           
     88     # Check the result and update yes_scopes if result starts with 'yes'
     89     if result.question_answer_pairs:

File ~/dev/airpunchai/annotation-assistant/app/llm/model_context.py:542, in QueryEngineManager.execute_questions(self, annotator_questions, taxonomy_map, flatten_org_taxonomy)
    539 logger.debug("query_template_size=%d", len(query_template))
    540 logger.debug("query_template=%s", query_template)
--> 542 query_result = self.query_engine.query(query_template)
    544 # Create a QuestionAnswer object for the current iteration
    545 qa = QuestionAnswer(
    546     question_id=question_id,
    547     taxonomy=taxonomy,
   (...)
    550     taxonomy_answer=[],  # Assuming this is an empty list for now, modify as needed
    551 )

File ~/Library/Caches/pypoetry/virtualenvs/chatgpt-retrieval-plugin-g8Qw76ZE-py3.10/lib/python3.10/site-packages/llama_index/indices/query/base.py:23, in BaseQueryEngine.query(self, str_or_query_bundle)
     21 if isinstance(str_or_query_bundle, str):
     22     str_or_query_bundle = QueryBundle(str_or_query_bundle)
...
    117 def __instancecheck__(cls, instance):
    118     """Override for isinstance(instance, cls)."""
--> 119     return _abc_instancecheck(cls, instance)

RecursionError: maximum recursion depth exceeded in comparison
18 comments
The traceback seems to be cut off in the middle. If I had to guess, this is probably a pydantic version issue πŸ€”
I use pydantic 1.x, not 2.x. Would that be the issue?
pydantic = "^1.10.5"
Another question: I use the doc summary index to perform queries. I thought the query context fed into OpenAI would be the summarized text, but looking at the log, it seems the text sent was nodes from the original document. Is there a way to send the summarized text to the LLM instead?
Try installing pydantic==1.10.12? (Again though, just a guess since the traceback is truncated in the middle; I have no idea where it's actually breaking.)
The summarized text is just used for deciding which document to choose/query.
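For illustration, here is a rough sketch of that behavior based on the 0.8.x-era document summary index docs (imports, parameters, and the "./data" path are assumptions; adapt them to your version): the summaries are stored and used to pick which document to retrieve from, but the context sent to the LLM comes from that document's original nodes.
Python
# Rough sketch, 0.8.x-era llama_index API; adjust imports/params to your version.
from llama_index import SimpleDirectoryReader, ServiceContext, get_response_synthesizer
from llama_index.indices.document_summary import DocumentSummaryIndex

docs = SimpleDirectoryReader("./data").load_data()  # "./data" is a placeholder path
service_context = ServiceContext.from_defaults(chunk_size=1024)
response_synthesizer = get_response_synthesizer(response_mode="tree_summarize")

index = DocumentSummaryIndex.from_documents(
    docs,
    service_context=service_context,
    response_synthesizer=response_synthesizer,
)

# The stored summary for a document, used for deciding which doc to query.
print(index.get_document_summary(docs[0].doc_id))

# Querying retrieves nodes from the chosen document's original text, so the
# context fed to the LLM is source chunks, not the summary.
query_engine = index.as_query_engine(response_mode="tree_summarize")
print(query_engine.query("Does this document cover cloud migration?"))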
Actually, it is 1.10.12 now:
Plain Text
 poetry show pydantic
 name         : pydantic                                                        
 version      : 1.10.12                                                         
 description  : Data validation and settings management using python type hints 

dependencies
 - typing-extensions >=4.2.0

required by
 - chromadb >=1.9
 - fastapi >=1.6.2,<1.7 || >1.7,<1.7.1 || >1.7.1,<1.7.2 || >1.7.2,<1.7.3 || >1.7.3,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0
 - langchain >=1,<2
 - langsmith >=1,<3
 - openapi-schema-pydantic >=1.8.2
How can I use just the summary text with the LLM? Does the doc summary index provide a method to get all the summaries?
mmm not really. I mean there may be a way, but it seems like it would be extremely hacky.

I'm not sure what your goal is -- but you could just generate the summaries yourself and not use the document summary index
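If it helps, here is a minimal sketch of that approach: generating the summaries yourself, outside any index. It assumes the 0.8.x-era llama_index.llms.OpenAI interface, a placeholder "./data" path and model name, and documents short enough to fit in one context window (longer documents would need chunking or tree summarization).
Python
# Minimal sketch: summarize each document directly, without an index.
from llama_index import SimpleDirectoryReader
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)  # model name is an example
docs = SimpleDirectoryReader("./data").load_data()  # "./data" is a placeholder path

summaries = {}
for doc in docs:
    prompt = (
        "Summarize the following document, focusing on the topics it covers:\n\n"
        + doc.text
    )
    summaries[doc.doc_id] = llm.complete(prompt).text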
My goal is to find an alternative to the tree index, so that the overall performance of my app is on par with or better than the tree index at 0.7.24. I realize no single index has performed that well for me since 0.8, so I have to divide my questions into groups. For scope questions, the requirement is to label the document against 200+ predefined scopes, i.e. test whether the doc belongs to a scope, e.g. Information Tech > Cloud Migration or Business Process > Project Management. The answer should reside in the summary of the nodes or the summary of the doc; it may or may not be in a single paragraph of the doc. The current 0.8.x tree index or auto-merging retriever gives me roughly 20 times more wrong answers (e.g. they give 170 yes answers, but fewer than 10 of those are correct).
OK, I will try to generate the summaries directly without using an index.
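Sketching that plan with a hypothetical helper (doc_in_scope is not a llama-index API, and the scope string is just the example from the message above): run the yes/no scope check against a pre-generated summary instead of the raw document text.
Python
# Hypothetical helper for the yes/no scope check described above.
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)  # model name is an example

def doc_in_scope(summary: str, scope: str) -> bool:
    """Ask the LLM whether a document (via its summary) belongs to a scope."""
    prompt = (
        f"Document summary:\n{summary}\n\n"
        f"Does this document belong to the scope '{scope}'? Answer yes or no."
    )
    return llm.complete(prompt).text.strip().lower().startswith("yes")

# Usage, e.g.:
# doc_in_scope(summary_text, "Information Tech > Cloud Migration")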
Ha. I'm having the same error and came here to ask the same question
Can either of you provide a traceback that's not truncated?
Yep. Happy to
@Mike King hmmm weird error. I would start with a fresh venv maybe? Some package version somewhere may be out of whack
For context, I'm running in Colab. It was working fine then suddenly not working.
Yeah, not sure what to tell you. This is a super basic component in llama-index that it's failing on πŸ€” unit testing would definitely catch the issue if it were a library issue, I think.

My best guess is to try re-installing stuff on colab, or try running local. Must be an issue with some package version
Confirming that it worked fine once I switched to local. Good ole Google still messing up GenAI πŸ˜„. Thanks @Logan M
The error is in the dataclasses_json library; it might be related to the versioning there πŸ€”