
Hi team, I asked the same question on Monday as well but did not receive any response, so I'm bringing it up again:

Hi everyone, I'm getting the following error intermittently (no idea how to reproduce it; I haven't been able to). I think it started appearing after we upgraded the llama_index version. Can someone help with a solution?

Error occurred in get_answer: 1 validation error for VectorStoreQuerySpec
root
VectorStoreQuerySpec expected dict not NoneType (type=type_error)
10 comments
So it doesn't happen anymore? Which versions are you on for Llamaindex, Langchain and Pydantic? Are you using a venv?
@sahilthakur3 which version are you on?
Can you please share the code where you're getting the error?
This is part of the function:

def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
    info_str = self._vector_store_info.json(indent=4)
    schema_str = VectorStoreQuerySpec.schema_json(indent=4)
    output = self._service_context.llm_predictor.predict(
        self._prompt,
        schema_str=schema_str,
        info_str=info_str,
        query_str=query_bundle.query_str,
    )
    assert self._prompt.output_parser is not None
    try:
        structured_output = cast(
            StructuredOutput, self._prompt.output_parser.parse(output)
        )
        query_spec = cast(VectorStoreQuerySpec, structured_output.parsed_output)
    except OutputParserException:
        logger.info("Failed to parse query spec, using defaults as fallback.")
        query_spec = VectorStoreQuerySpec(
            query=query_bundle.query_str,
            filters=[],
            top_k=None,
        )
langchain == 0.0.304
llama_index == 0.8.38
are you using VectorIndexAutoRetriever?
Yes @ravitheja , we are
@ravitheja @Teemu
@sahilthakur3 seems like an error with the LLM, it's not creating a valid query spec

Seems like the LLM is hallucinating and not writing the output JSON properly.
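That failure mode (the parser hands back `None` instead of a dict, and pydantic then raises "expected dict not NoneType") can be guarded against explicitly. The sketch below is a minimal, self-contained illustration, not llama_index's actual code: `QuerySpec` and `parse_llm_output` are stand-in names for the same pattern of falling back to a default spec when the LLM output is malformed or empty.

```python
import json
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class QuerySpec:
    # Stand-in for VectorStoreQuerySpec.
    query: str
    filters: List[dict] = field(default_factory=list)
    top_k: Optional[int] = None


def parse_llm_output(raw: Optional[str], fallback_query: str) -> QuerySpec:
    """Return a QuerySpec even when the LLM emits malformed or empty JSON."""
    try:
        data = json.loads(raw) if raw else None
        if not isinstance(data, dict):
            # This is the "expected dict not NoneType" situation:
            # guard it explicitly instead of letting validation blow up.
            raise ValueError("LLM output is not a JSON object")
        return QuerySpec(**data)
    except (ValueError, TypeError):
        # Fall back to the raw query string with no filters.
        return QuerySpec(query=fallback_query)


spec = parse_llm_output(None, "what is llama_index?")
print(spec.query)  # falls back to the original query string
```

The point is that both `None` output and structurally wrong JSON land in the same fallback branch, so downstream code always receives a valid spec.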
Let me check the knowledge base for you. One moment! :books:
----------------------
To display the filters extracted while using VectorIndexAutoRetriever, you can use the get_content method with the metadata_mode parameter set to "all". This will return the content of the node along with all the metadata, which includes the filters used.

Here is an example of how to do this:

Plain Text
for node in nodes:
    print(node.get_content(metadata_mode="all"))


In this example, nodes is the result of a retrieval operation. Each node in nodes represents a piece of content retrieved from the vector store, and its associated metadata (which includes the filters).

Please note that the actual filters used in the retrieval operation are determined by the LLM based on the query and the VectorStoreInfo provided when creating the VectorIndexAutoRetriever.
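Conceptually, the auto-retriever asks the LLM to turn the natural-language query into a structured spec (query string, metadata filters, top-k) and then applies those filters when searching the vector store. The toy sketch below shows that flow end to end; the rule-based `fake_llm_to_spec` is a stand-in for the real LLM call, and all names here are illustrative rather than llama_index APIs.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class QuerySpec:
    query: str
    filters: Dict[str, str] = field(default_factory=dict)
    top_k: int = 2


def fake_llm_to_spec(question: str) -> QuerySpec:
    # Stand-in for the LLM call: extract a "country" metadata filter
    # from the question text.
    filters = {}
    for country in ("Canada", "France"):
        if country in question:
            filters["country"] = country
    return QuerySpec(query=question, filters=filters)


DOCS = [
    {"text": "Toronto is large.", "meta": {"country": "Canada"}},
    {"text": "Paris is old.", "meta": {"country": "France"}},
]


def auto_retrieve(question: str) -> List[dict]:
    spec = fake_llm_to_spec(question)
    # Apply the extracted metadata filters, then truncate to top_k.
    matches = [
        d for d in DOCS
        if all(d["meta"].get(k) == v for k, v in spec.filters.items())
    ]
    return matches[: spec.top_k]


for node in auto_retrieve("Tell me about cities in Canada"):
    print(node["text"], node["meta"])
```

A question mentioning Canada retrieves only the Canada-tagged document, while a question with no recognizable filter retrieves unfiltered results, which mirrors why a hallucinated or empty query spec degrades retrieval quality.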

For more details, you can refer to the following sources:
  • Auto Retriever (with Pinecone + Arize Phoenix)
  • Auto-Retrieval from a Vector Database
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------