I'm composing nested `MetadataFilters`. However, I faced an error which leads me to suspect there might be a bug in the piece of code that translates these abstractions to Qdrant's implementation:

```python
from llama_index.core.vector_stores import (
    FilterCondition,
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

MetadataFilters(
    filters=[
        MetadataFilters(
            filters=[
                MetadataFilter(key="A", operator=FilterOperator.EQ, value=username),
                MetadataFilter(key="B", operator=FilterOperator.EQ, value=role),
            ],
            condition=FilterCondition.OR,
        ),
        MetadataFilter(key="C", operator=FilterOperator.NE, value=username),
    ],
    condition=FilterCondition.AND,
)
```
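
For reference, this is roughly the Qdrant-native filter I'd expect that nested abstraction to translate to — a sketch using `qdrant_client` directly, with `username` and `role` as placeholder values:

```python
from qdrant_client import models

username, role = "alice", "admin"  # placeholder values

expected_filter = models.Filter(
    must=[
        # OR-group: A == username OR B == role
        models.Filter(
            should=[
                models.FieldCondition(key="A", match=models.MatchValue(value=username)),
                models.FieldCondition(key="B", match=models.MatchValue(value=role)),
            ]
        ),
    ],
    # AND NOT: C == username (i.e. C != username)
    must_not=[
        models.FieldCondition(key="C", match=models.MatchValue(value=username)),
    ],
)
```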

So far I have been building agents with `OpenAIAgent`, using LLMs from OpenAI and, in some cases, AzureOpenAI. Now I would like to be able to use agents outside OpenAI's ecosystem, like Claude, Bedrock deployments and so on. I couldn't find a "generic" way of building an agent with other LLMs. I understand the options are also constrained in some cases by the model's function-calling capability, but maybe I didn't search enough. Has anyone faced a similar challenge?
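
One LLM-agnostic option in LlamaIndex is the ReAct agent, which drives tool use through prompting rather than native function calling, so it works with models like Claude or anything behind Bedrock. A minimal sketch, assuming `llama-index-llms-anthropic` is installed (the model name and tool are placeholders):

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.anthropic import Anthropic

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = Anthropic(model="claude-3-opus-20240229")
agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=llm,
    verbose=True,
)
print(agent.chat("What is 3 * 7?"))
```

The trade-off is that ReAct parses tool calls out of plain text instead of a structured function-calling API, so it's more portable but can be less reliable than provider-native agents.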

I'm using a `Retriever` with a custom `MetadataFilter` and I noticed (because of an error logged :p) that it's not currently supported by the underlying Faiss `VectorStore`. Has someone faced something like this? I know one approach would be to swap the Faiss vector store for another one, but I would like to keep it. Maybe someone stumbled across this 👀
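
One workaround that keeps Faiss in place is to over-retrieve and apply the metadata predicate client-side, since the Faiss-backed store ignores `MetadataFilters`. A rough sketch, assuming `index` is an existing `VectorStoreIndex` backed by Faiss (the metadata key, value, and top-k are placeholders):

```python
# Fetch more candidates than needed, then filter metadata in Python,
# because the Faiss-backed store does not apply MetadataFilters itself.
retriever = index.as_retriever(similarity_top_k=20)
nodes = retriever.retrieve("my query")

top_k = 5
filtered = [
    node_with_score
    for node_with_score in nodes
    if node_with_score.node.metadata.get("category") == "docs"  # placeholder predicate
][:top_k]
```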

I'm using the `AzureOpenAI` class to perform a simple request. The problem is that even if I set the `max_retries=0` parameter, it doesn't take effect when a `BadRequestError` occurs due to Azure's [hate/violence/sexual/self-harm] content filtering.
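
Since content-filter rejections are 400-level errors that retries won't fix anyway, one option is to catch them explicitly. A minimal sketch, assuming the `openai` v1 SDK underneath (the endpoint, key, and deployment name are placeholders):

```python
import openai
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-deployment",  # placeholder deployment name
    api_key="...",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2024-02-01",
    max_retries=0,
)

try:
    response = llm.complete("some prompt")
except openai.BadRequestError as exc:
    # Content-filter rejections come back as 400s; retrying won't help,
    # so handle them here instead of relying on max_retries.
    print("Request rejected by Azure content filter:", exc)
```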

I'm upgrading to `v0.11.0`. Migrating Pydantic v1 -> v2, `root_validator` is deprecated and `model_validator` should be used instead. This change was not performed within the `llama-index-embeddings-azure-openai` package, so when importing the `AzureOpenAIEmbedding` class the following error is raised:

```
File "[...]/llama_index/embeddings/azure_openai/base.py", line 4, in <module>
    from llama_index.core.bridge.pydantic import Field, PrivateAttr, root_validator
ImportError: cannot import name 'root_validator' from 'llama_index.core.bridge.pydantic' ([...]/llama_index/core/bridge/pydantic.py)
```
File ".../python3.10/site-packages/llama_index/core/callbacks/token_counting.py", line 91, in get_llm_token_counts raise ValueError( ValueError: Invalid payload! Need prompt and completion or messages and response.
raise_error
argument is set to true. How should I set this flag when doing agent.stream_chat(user_prompt)
🤔
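
For what it's worth, this is the usual way the token-counting handler gets wired in — as far as I can tell, `stream_chat` itself doesn't take such a flag; the handler is picked up from the global callback manager. A sketch (whether `raise_error` can be threaded through here is exactly the open question):

```python
import tiktoken
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

# Register the handler globally; agent.stream_chat picks it up via Settings.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-4").encode,
)
Settings.callback_manager = CallbackManager([token_counter])
```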