Arunchandra
Joined September 25, 2024
Hey @Logan M! I was trying to add a metadata filter:

service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, embed_model=embeddings)
vector_store = QdrantVectorStore(client=client, collection_name="check")
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
filters = MetadataFilters(filters=[ExactMatchFilter(key="filename", value=f'{file_name.split("/")[-1]}')])
response_synthesizer = get_response_synthesizer(response_mode=mode, use_async=async_mode, service_context=service_context)
query_engine = index.as_query_engine(response_synthesizer=response_synthesizer, similarity_top_k=count, filters=filters)
response = query_engine.query(question)


I'm using a Qdrant vector database running on Docker.
I'm facing the below error when I query the index for a few specific questions:
maximum recursion depth exceeded in comparison
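For context, the filename value used in the ExactMatchFilter above is just the last path component of file_name, which can be checked with plain Python (the path here is a hypothetical example, not one from the original post):

```python
# Hypothetical example path; stands in for the `file_name` variable above.
file_name = "uploads/reports/summary.pdf"

# The filter value is the last path component, i.e. the bare filename.
base_name = file_name.split("/")[-1]
print(base_name)  # summary.pdf
```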
1 comment
@kapa.ai Is there an agent for AzureOpenAI like the one we have for OpenAI (from llama_index.agent import OpenAIAgent)?
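One hedged sketch of an answer: in the legacy LlamaIndex API there is no separate AzureOpenAIAgent; the usual pattern is to pass an AzureOpenAI LLM into OpenAIAgent, since Azure exposes an OpenAI-compatible interface. The model, deployment name, endpoint, and API version below are placeholders, and the snippet needs live Azure credentials, so it is illustrative only:

```python
# Sketch, assuming legacy llama_index (~0.9). All Azure values are placeholders.
from llama_index.agent import OpenAIAgent
from llama_index.llms import AzureOpenAI

llm = AzureOpenAI(
    model="gpt-35-turbo",                              # placeholder model name
    engine="my-deployment",                            # placeholder deployment
    azure_endpoint="https://example.openai.azure.com/",  # placeholder endpoint
    api_key="YOUR_AZURE_OPENAI_KEY",                   # placeholder credential
    api_version="2023-07-01-preview",                  # placeholder version
)

# OpenAIAgent accepts the Azure-backed LLM in place of a plain OpenAI one.
agent = OpenAIAgent.from_tools(tools=[], llm=llm, verbose=True)
```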
12 comments
Hello, everyone. I'm in the process of building a chatbot that needs to query a knowledge base. I have created embeddings for my documents and stored them in Qdrant. I've tried using the condense_question chat mode, but when I query the same question multiple times in a row, the condensed question changes each time I ask it. This variation affects the similarity search and, consequently, my response. Do you have any suggestions to address this issue?
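One common suggestion, sketched under the legacy ServiceContext API: run the condense step with a temperature-0 LLM so the rewritten question is deterministic across repeated runs. Here vector_store is assumed to be the Qdrant store from the earlier post, and the model name and question are placeholders; the snippet needs a live LLM, so it is illustrative only:

```python
# Sketch, assuming legacy llama_index (~0.9) and an existing Qdrant vector_store.
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.llms import OpenAI

# temperature=0 makes the LLM call that condenses the question deterministic,
# so repeating the same question yields the same condensed query.
service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.0)  # placeholder model
)

index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store,  # the Qdrant store from the earlier post
    service_context=service_context,
)
chat_engine = index.as_chat_engine(chat_mode="condense_question")
response = chat_engine.chat("What does the report say about Q3?")  # placeholder
```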
3 comments