
Updated 2 months ago

Error Processing Query: Pydantic Validation Error for TextNode

At a glance

A community member is facing an error when trying to query a vector store in a Supabase collection using the LlamaIndex library. The error message indicates that the input should be a valid string, but the input value is None. The community member is using the query_vector_store method to query the vector store through a FastAPI endpoint.

In the comments, another community member suggests that if the database was not created/populated by LlamaIndex, it may not work out of the box as LlamaIndex expects certain columns. They also mention that by default, LlamaIndex assumes the node is serialized into metadata, and if not, it looks for at least a text field.
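The lookup order described above can be sketched in plain Python. This is an illustrative stand-in, not LlamaIndex's actual implementation: the helper first looks for a serialized node in the row's metadata (LlamaIndex stores it under a `_node_content` key), then falls back to a plain `text` column, and fails in just the way the error above does when neither is present.

```python
import json

def row_to_text(row: dict) -> str:
    """Resolve a vector-store row to node text (illustrative sketch only):
    try a serialized node in metadata first, then a plain text column."""
    metadata = row.get("metadata") or {}
    node_content = metadata.get("_node_content")  # where LlamaIndex serializes the node
    if node_content:
        return json.loads(node_content)["text"]
    text = row.get("text")
    if text is None:
        # A NULL/missing text value is what surfaces as the Pydantic
        # "Input should be a valid string" error for TextNode.
        raise ValueError("row has neither a serialized node nor a text column")
    return text

# A row written by LlamaIndex carries the serialized node:
good = {"metadata": {"_node_content": json.dumps({"text": "hello"})}}
print(row_to_text(good))  # hello

# A hand-populated table with neither field fails:
try:
    row_to_text({"metadata": {}})
except ValueError as err:
    print(err)
```

This is why a table populated outside LlamaIndex often breaks on read: the rows exist, but neither the serialized node nor a usable text column is where the library expects them.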

The community members resolved the issue, with the original poster confirming: "yeah, it's working now, thank you my savior".

Useful resources
Hello Llama ppl, I am facing an annoying error:
Plain Text
ERROR:root:Error processing query: 1 validation error for TextNode
text
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.10/v/string_type
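The error above can be reproduced in isolation. Here is a minimal sketch using Pydantic v2 directly; the `TextNode` below is a stand-in model (not LlamaIndex's real class) with the same constraint that matters here: a required `str` field receiving `None`.

```python
from pydantic import BaseModel, ValidationError

class TextNode(BaseModel):
    # Stand-in for LlamaIndex's TextNode: `text` is a required string.
    text: str

try:
    TextNode(text=None)  # a NULL text value from the database ends up here
except ValidationError as err:
    detail = err.errors()[0]
    print(detail["type"])  # string_type, matching the trace above
```

So the validation error is a symptom, not the cause: somewhere upstream a row is yielding `None` where the node's text should be.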

I don't even know how to debug it properly. I am using this:

Plain Text
def query_vector_store(self, query: str, top_k: int = 15):
    # Rebuild an index view over the existing vector store and query it.
    retriever = VectorStoreIndex.from_vector_store(self.vector_store).as_query_engine(
        llm=self.get_llm, similarity_top_k=top_k
    )
    response = retriever.query(query)
    return response

to query a vector store in a Supabase collection that already exists. I am calling this method through a FastAPI endpoint.
5 comments
The endpoint:
Plain Text
@router.post("/ask-question/")
async def ask_question(query_text: str, file_id: int, user_id: str):
    """
    Endpoint to accept query text, file ID, and user ID, then return a response based on the RAG query engine.
    """
    try:
        # if not query_text.strip():
        #     raise HTTPException(status_code=400, detail="Query text cannot be empty.")
        processor = LLMProcessor(user_id=user_id, file_id=file_id)
    
        # Process the query using the RAG processor
        response = processor.query_vector_store(query_text)
        logging.info(f"Query response: {response}")

        return {"query": query_text, "response": response}
    except AttributeError as e:
        logging.error(f"Processor not ready: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Processor not ready: {str(e)}")
    except Exception as e:
        logging.error(f"Error processing query: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Error processing query: {str(e)}")
If the db was not created/populated by LlamaIndex, it will probably not work out of the box. It's expecting certain columns.
I see... nice to know, thanks
yeah, it's working now, thank you my savior