I'm trying to run the example with StructuredPlannerAgent(agent_worker=worker, tools=[lyft_tool, uber_tool], verbose=True), but I'm hitting this error: ValueError: Model name anthropic.claude-3-sonnet-20240229-v1:0 does not support function calling API. I'm using Claude-3 Sonnet via Bedrock, and from what I can see, Sonnet does support function calling. Any help on why I'm seeing this error would be greatly appreciated. Screenshot attached.
Is there a way to extract entities as a separate response (like a list), rather than using the extractor as a transformation in the ingestion pipeline for a vector index? Any pointer to an example would be very helpful, thanks.
Hey guys, when I try to init a pre-loaded Qdrant vector store, I'm getting a ValueError and I'm not sure how to tackle the problem. Any help please? Attaching an image for reference.
Vector stores come in many flavours like Pinecone, Elasticsearch, etc. Is there a similar cloud-hosted option for the KeywordIndex? I'm asking because we're ingesting large quantities of data, so the local in-RAM approach wouldn't work long-term, right? Or am I missing something? Sorry if this question has been answered already.
Oh, I see, there's a default timeout of 10s in the Workflow constructor. Interesting; since it's async, there's a timeout. This could cause some trouble for larger projects, right, especially when time-consuming I/O is happening? Any thoughts?
In workflows, how can I dispatch an event (i.e. using send_event) to two steps with the same signature (add(self, ev: ProcessEvent) and multiply(self, ev: ProcessEvent))? All the examples I'm seeing have a StartEvent, one intermediate event for a single step, and finally a StopEvent. How can I actually control the flow, something like a graph builder?
Py 3.12 works fine for my current project, but when I change the version to Py 3.9, I'm getting this error: "TypeError: Can't instantiate abstract class BedrockConverse with abstract method _prepare_chat_with_tools". The reason I need 3.9 is that I'm trying to run indexing at scale using Python Ray, and our RayCluster is currently on 3.9. Any help on this, please?
Indexing using Neo4jPropertyGraphStore is taking a long time; is that the case for other devs too? I'm using Claude-3 Haiku btw, which is pretty fast.
I'm trying to create a new postprocessor: PrevNextNodePostprocessor(docstore=self.get_qdrant_vector_store(collection_name=collection_name), num_nodes=num_nodes)
But I'm getting the following error. Any help on this, please? ValidationError: 1 validation error for PrevNextNodePostprocessor docstore instance of BaseDocumentStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseDocumentStore)
Question regarding RouterQueryEngine: the LLM responds with choice 2, but the router code selects the wrong engine (e.g. "Selecting query engine 1: The question 'Sequenom has cash through what date?' is a company financial question, which is best answered using the approach described in choice 2. This choice is specifically mentioned as useful for answering questions about a company's cash, which is directly relevant to the given question.").
Has anyone seen this issue in that module? It was working fine with 2 agents; now that I've introduced a third one, it's failing.
I have my RouterQueryEngine configured with a SQL agent and a vector agent, and there are many false positives where prompts unnecessarily go to the SQL agent rather than the vector agent. Is there any way to improve the routing mechanism at the LLM routing layer? I changed the descriptions and that made really good progress, but there are still a few that go to the SQL agent.
Currently I have my vector engine defined as vector_qe = vector_index.as_query_engine(similarity_top_k=5, node_postprocessors=[node_postprocessor_fixed], vector_store_query_mode="hybrid"). Is there a way to define a score cutoff like 0.1 (i.e. remove nodes scoring below 0.1 before they're sent to the LLM), something like a postprocessor? Sorry if it's a stupid question.