Find answers from the community

Nehil
I get this error while querying Pinecone every so often; it doesn't happen 100% of the time.

Any help will be appreciated.
1 comment
Where can I find good examples of LlamaIndex and FastAPI working together?
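
To be concrete, the kind of minimal pattern I am looking for is something like this (a rough sketch; the data directory and endpoint are placeholders, not from any official example):

Plain Text
from fastapi import FastAPI
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

app = FastAPI()

# Build the index once at startup (placeholder data directory)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

@app.get("/query")
def query(q: str):
    # Run the query through LlamaIndex and return the answer text
    response = query_engine.query(q)
    return {"answer": str(response)}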
1 comment
How do I properly reuse already-created indexes?

I have a function:

Plain Text
import logging
from typing import List, Optional

from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.core.schema import BaseNode
from llama_index.vector_stores.pinecone import PineconeVectorStore


def index_profiles_data(
    pinecone_index,
    nodes: Optional[List[BaseNode]] = None,
    namespace: str = "test_namespace",
) -> VectorStoreIndex:
    """Indexes the profiles data using a VectorStore.
    Args:
        nodes: List[BaseNode]: List of nodes to index.
        pinecone_index: Pinecone: Pinecone index object.
        namespace: str: Namespace for the Pinecone vector store.
    Returns:
        VectorStoreIndex: Index that can be used for retrieval and querying.
    """
    # Setup Pinecone Vector Store
    vector_store = PineconeVectorStore(
        pinecone_index=pinecone_index, namespace=namespace
    )
    if nodes:
        storage_context = StorageContext.from_defaults(
            vector_store=vector_store
        )
        # Create a new index from the provided nodes (embeds and upserts them)
        vector_store_index = VectorStoreIndex(
            nodes=nodes, storage_context=storage_context
        )
    else:
        # Reconnect to the vectors already stored in Pinecone
        vector_store_index = VectorStoreIndex.from_vector_store(
            vector_store=vector_store
        )
    logging.info("Vector store index created and ready for use.")
    return vector_store_index


When I pass in the nodes, it works well and I am able to retrieve things. When I don't pass in nodes and use from_vector_store, it fails to return anything. How can I resolve this?
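
For reference, here is roughly how I call it in both modes (the Pinecone client setup and index name are placeholders for my environment, and my_nodes stands for the nodes produced by my pipeline):

Plain Text
from pinecone import Pinecone

pc = Pinecone(api_key="...")
pinecone_index = pc.Index("profiles")  # placeholder index name

# First run: pass nodes so they get embedded and upserted into the namespace
index = index_profiles_data(pinecone_index, nodes=my_nodes, namespace="test_namespace")

# Later runs: reconnect to the existing vectors without re-indexing
index = index_profiles_data(pinecone_index, namespace="test_namespace")
results = index.as_retriever(similarity_top_k=5).retrieve("some query")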
2 comments

Need help figuring out why Pinecone VectorIndexAutoRetriever is not working.


#### Background
I am working on a project that uses the LlamaIndex package integrated with Pinecone for my index. The project involves creating a VectorIndexAutoRetriever to set up a retriever similar to the example in https://docs.llamaindex.ai/en/stable/examples/vector_stores/pinecone_auto_retriever/

#### Problem Description
During the implementation, I encountered a ValueError stating that the vector store only supports exact match filters. The error message is:
Plain Text
ValueError: Vector Store only supports exact match filters. Please use ExactMatchFilter or FilterOperator.EQ instead.

This error occurs when I attempt to use filters with operators other than exact matches (e.g., FilterOperator.LTE).

#### Investigation
  1. Filter Setup: The LLM produces the correct translation from query to metadata filter. The query is "Who are some founders that are 30 or below":
    Plain Text
    MetadataFilters(filters=[MetadataFilter(key='age', value=30, operator=<FilterOperator.LTE: '<='>)], condition=<FilterCondition.AND: 'and'>)

    Pinecone also supports the '<=' operation, as shown in the code below:
    Plain Text
    def _transform_pinecone_filter_operator(operator: str) -> str:
        """Translate standard metadata filter operator to Pinecone specific spec."""
        print(f"DEBUG: The incoming operator is {operator} from the code")
        ...
        elif operator == ">=":
            return "$gte"
        elif operator == "<=":
            return "$lte"
        ...

    GitHub link
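  2. Retriever setup: for reference, this is roughly how the retriever is created, mirroring the docs example (the content and metadata descriptions are illustrative, not my exact schema):
    Plain Text
    from llama_index.core.retrievers import VectorIndexAutoRetriever
    from llama_index.core.vector_stores.types import MetadataInfo, VectorStoreInfo

    # Describe the metadata so the LLM can translate natural language into filters
    vector_store_info = VectorStoreInfo(
        content_info="Short biographies of startup founders",
        metadata_info=[
            MetadataInfo(name="age", type="int", description="Age of the founder"),
        ],
    )

    retriever = VectorIndexAutoRetriever(
        index,  # the VectorStoreIndex backed by the Pinecone vector store
        vector_store_info=vector_store_info,
    )
    nodes = retriever.retrieve("Who are some founders that are 30 or below")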
What am I doing wrong? Is only exact match filtering supported? I am happy to help add documentation, but right now it's pretty confusing for me to debug further.
5 comments
Is it possible to get PydanticProgram working with Ollama llama3? With Instructor it's super easy to do: https://python.useinstructor.com/hub/ollama/#patching/. I have a loading pipeline which uses pydanticextractor, and I want to reduce cost by moving to local models.
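
For context, something like this is what I am hoping to get working (a sketch only; Profile and the prompt are hypothetical, and I am assuming the text-completion program route since llama3 via Ollama may not support function calling):

Plain Text
from pydantic import BaseModel
from llama_index.core.program import LLMTextCompletionProgram
from llama_index.llms.ollama import Ollama

class Profile(BaseModel):
    # Hypothetical output schema
    name: str
    age: int

llm = Ollama(model="llama3")
program = LLMTextCompletionProgram.from_defaults(
    output_cls=Profile,
    prompt_template_str="Extract the person's name and age from: {text}",
    llm=llm,
)
profile = program(text="Alice is a 29-year-old founder.")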
12 comments