llm_dev
Offline, last seen 2 months ago
Joined September 25, 2024
Has anyone seen this error with Qdrant? I'm trying to add new nodes to the index, and the new data is throwing this error.
2 comments
Anyone seeing this issue with NLTK? Resource punkt not found.
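The usual fix is a one-time download of the missing tokenizer data; a minimal sketch:

Python
import nltk

# fetch the Punkt sentence-tokenizer models into the NLTK data directory
nltk.download("punkt")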
3 comments
llm_dev ·

Hey guys,

I'm trying to run the example with StructuredPlannerAgent(agent_worker=worker, tools=[lyft_tool, uber_tool], verbose=True),
but I'm hitting this error: ValueError: Model name anthropic.claude-3-sonnet-20240229-v1:0 does not support function calling API.
I'm using Claude 3 Sonnet via Bedrock, and from what I've seen, Sonnet does support function calling. Any help on why I'm seeing this error would be greatly appreciated. Screenshot attached.
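A minimal sketch of the usual workaround, assuming the plain Bedrock LLM class is in use: the function-calling API is implemented by BedrockConverse, not Bedrock (the region and agent-worker setup here are assumptions):

Python
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.llms.bedrock_converse import BedrockConverse

# BedrockConverse implements the function-calling API; the plain Bedrock class does not
llm = BedrockConverse(
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",  # assumption: adjust to your region
)
worker = FunctionCallingAgentWorker.from_tools(
    [lyft_tool, uber_tool], llm=llm, verbose=True
)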
12 comments
The model_name is set to None if it's not found, and it overrides the default in the base class, hence the failure.
2 comments
Is there a way to extract entities as a separate response (like a list), rather than using it as a transformation pipeline for a vector index? Any pointer to an example would be very helpful, thanks.
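A minimal sketch of one way to do this, assuming nodes already exist and the llama-index-extractors-entity package is installed; the extractor can be called directly instead of being placed in a pipeline:

Python
from llama_index.extractors.entity import EntityExtractor

extractor = EntityExtractor(prediction_threshold=0.5)
# extract() returns one metadata dict per node; no pipeline needed
metadata_list = extractor.extract(nodes)
entities = [md.get("entities", []) for md in metadata_list]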
4 comments
Has anyone used the latest Anthropic Claude 3 via Bedrock? It's complaining about context_size and throwing some validation error.
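One hedged guess at the cause: if the model id isn't in the library's context-window registry, the Bedrock constructor can't infer context_size and raises a validation error. A sketch of a workaround (the model id and window value are assumptions):

Python
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model id
    context_size=200000,  # assumption: set to the model's documented window
)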
6 comments
Hey guys, when I try to init a pre-loaded Qdrant vector store, I'm getting a ValueError. Not sure how to tackle that problem, any help please? Attaching image for reference.
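For comparison, a minimal sketch of attaching to an existing collection (the URL and collection name are placeholders):

Python
import qdrant_client
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(url="http://localhost:6333")  # placeholder URL
vector_store = QdrantVectorStore(client=client, collection_name="my_collection")
# build the index view over the already-populated collection
index = VectorStoreIndex.from_vector_store(vector_store)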
2 comments
llm_dev ·

spam alert
1 comment
Vector stores come in many flavours, like Pinecone, Elasticsearch, etc. Is there a similar cloud flavour available for KeywordIndex? Asking because we're ingesting large quantities of data, so a local RAM approach wouldn't be viable long-term, right? Or am I missing something? Sorry if this question has been answered already.
4 comments
Most of my responses start with:
Plain Text
Based on the provided context,
Is there an easy way to strip those words so that the results are cleaner?
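A minimal sketch of a post-processing approach (query_engine is assumed to exist; customizing the QA prompt is the other option):

Python
response = query_engine.query("What does the report conclude?")  # placeholder query
text = str(response)
prefix = "Based on the provided context,"
if text.startswith(prefix):
    text = text[len(prefix):].lstrip()  # drop the boilerplate lead-in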
3 comments
When I click close, it goes to the top of the page; for longer pages I have to scroll again to get back to the actual context.
4 comments
Oh, there's a default timeout of 10s in the workflow constructor, I see, interesting. Since it's async, there's a timeout. This could cause some trouble for larger projects, right? Especially when time-consuming I/O is happening. Any thoughts?
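A minimal sketch of the knob in question: the timeout is per run and can be raised or disabled at construction time (MyWorkflow stands in for any Workflow subclass):

Python
from llama_index.core.workflow import Workflow

class MyWorkflow(Workflow):
    ...  # steps omitted

# timeout is in seconds; None disables it for long-running I/O
wf = MyWorkflow(timeout=None)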
3 comments
In workflows, how can I dispatch an event (i.e. using send_event) to two methods with the same signature (add(self, ev: ProcessEvent) and multiply(self, ev: ProcessEvent))? All the examples I'm seeing have a StartEvent, one intermediate event handled by some method, and finally a StopEvent. How can I actually control the flow, something like a graph builder?
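A minimal sketch, assuming current llama_index workflows: send_event can target a step by name, so two steps accepting the same event type can be addressed individually:

Python
from llama_index.core.workflow import (
    Context, Event, StartEvent, StopEvent, Workflow, step,
)

class ProcessEvent(Event):
    value: int

class MathFlow(Workflow):
    @step
    async def start(self, ctx: Context, ev: StartEvent) -> None:
        # target each step explicitly by name
        ctx.send_event(ProcessEvent(value=3), step="add")
        ctx.send_event(ProcessEvent(value=3), step="multiply")

    @step
    async def add(self, ctx: Context, ev: ProcessEvent) -> StopEvent:
        # note: the first StopEvent returned ends the run
        return StopEvent(result=ev.value + 1)

    @step
    async def multiply(self, ctx: Context, ev: ProcessEvent) -> StopEvent:
        return StopEvent(result=ev.value * 2)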
12 comments
llm_dev ·

Agents

Workflows and llama-agents: how do they work together? It seems workflows do the job of llama-agents, or am I wrong? A little confusing.
2 comments
This line is causing a lot of trouble: https://github.com/run-llama/llama_index/blob/15227173b8c1241c9fbc761342a2344cd90c6593/llama-index-core/llama_index/core/llms/function_calling.py#L125
I'm seeing this error in BedrockConverse: TypeError: Can't instantiate abstract class BedrockConverse with abstract method _prepare_chat_with_tools
If I then do pip install --upgrade llama-index-llms-bedrock-converse, the problem goes away. Anyone seeing this issue?
This becomes a problem in CI, where dependencies are installed from a pip requirements file.
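One way to keep CI stable: pin the converse package in the requirements file to whatever version pip install --upgrade resolved (check pip freeze); X.Y.Z below is a placeholder, not a real release number.

Plain Text
# requirements.txt — replace X.Y.Z with the version pip freeze reports
llama-index-llms-bedrock-converse==X.Y.Z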
7 comments
Python 3.12 works fine for my current project, but when I change the version to Python 3.9, I get this error: "TypeError: Can't instantiate abstract class BedrockConverse with abstract method _prepare_chat_with_tools". The reason is that I'm trying to run indexing at scale using Ray, and our RayCluster is currently on 3.9. Any help on this please?
11 comments
llm_dev ·

Graph

Indexing using Neo4jPropertyGraphStore is taking a long time; is that the case for other devs? I'm using Claude 3 Haiku, btw, which is pretty fast.
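A hedged sketch of one mitigation, assuming the bottleneck is the LLM extraction calls rather than Neo4j itself: raising num_workers on the extractor parallelizes those calls (documents, graph_store, and llm are assumed to exist):

Python
from llama_index.core import PropertyGraphIndex
from llama_index.core.indices.property_graph import SimpleLLMPathExtractor

index = PropertyGraphIndex.from_documents(
    documents,
    property_graph_store=graph_store,
    # run extraction LLM calls concurrently
    kg_extractors=[SimpleLLMPathExtractor(llm=llm, num_workers=8)],
    show_progress=True,
)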
2 comments
Hey guys, any help on KG please?

On the given line for KnowledgeGraphQueryEngine, graph_query_synthesis_prompt is an optional parameter:
https://github.com/run-llama/llama_index/blob/01e5173f8a272e8b7e5ccb2ae3ff215eb6c4ca6a/llama-index-core/llama_index/core/query_engine/knowledge_graph_query_engine.py#L70

but at line 132
https://github.com/run-llama/llama_index/blob/01e5173f8a272e8b7e5ccb2ae3ff215eb6c4ca6a/llama-index-core/llama_index/core/query_engine/knowledge_graph_query_engine.py#L132

it's a mandatory argument for _llm.predict, and I'm seeing an error when I run query_engine.query("sample query?")

Any help here, please?
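A minimal sketch of a workaround, assuming the failure is the prompt defaulting to None: pass graph_query_synthesis_prompt explicitly (the template text and variables are placeholders):

Python
from llama_index.core import PromptTemplate
from llama_index.core.query_engine import KnowledgeGraphQueryEngine

query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,  # assumed to wrap the graph store
    llm=llm,
    graph_query_synthesis_prompt=PromptTemplate(
        "Given the schema:\n{schema}\n"
        "Write a graph query answering: {query_str}\n"
    ),
    verbose=True,
)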
2 comments
I'm trying to create a new postprocessor:
PrevNextNodePostprocessor(docstore=self.get_qdrant_vector_store(collection_name=collection_name), num_nodes=num_nodes)

But I'm getting the following error, any help on this please?
ValidationError: 1 validation error for PrevNextNodePostprocessor
docstore
instance of BaseDocumentStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseDocumentStore)
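A minimal sketch of the likely fix: the postprocessor wants a document store, not a vector store, so pass the index's docstore (index is assumed to exist):

Python
from llama_index.core.postprocessor import PrevNextNodePostprocessor

postprocessor = PrevNextNodePostprocessor(
    docstore=index.docstore,  # a BaseDocumentStore; QdrantVectorStore is not one
    num_nodes=num_nodes,
)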
5 comments
Question regarding RouterQueryEngine: the LLM responds with choice 2, but the router code is choosing the wrong agent (e.g.: Selecting query engine 1: The question 'Sequenom has cash through what date?' is a company financial question, which is best answered using the approach described in choice 2. This choice is specifically mentioned as useful for answering questions about a company's cash, which is directly relevant to the given question..)

Has anyone seen this issue in that module? (It was working fine so far with 2 agents; now when I introduce a third one, it's failing.)
2 comments
I have a Zoom chat file of a conversation. Is there an example of how to summarize the document (it's a pretty big document)? Or any ideas please?
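A minimal sketch of one approach: tree_summarize recursively condenses chunks, which suits a long transcript (the file path is a placeholder):

Python
from llama_index.core import SimpleDirectoryReader, SummaryIndex

docs = SimpleDirectoryReader(input_files=["zoom_chat.txt"]).load_data()  # placeholder path
index = SummaryIndex.from_documents(docs)
# tree_summarize builds the summary bottom-up over chunks
query_engine = index.as_query_engine(response_mode="tree_summarize")
print(query_engine.query("Summarize this conversation."))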
2 comments
fastembed is breaking for QdrantVectorStore. I did a pip install of the entire project's dependencies and it's failing.
17 comments
How can I pass a filter in a RouterQueryEngine? Any example please?
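A minimal sketch, assuming metadata filters are meant: RouterQueryEngine itself takes no filters, so attach them to the underlying engine before wrapping it in a tool (key/value are placeholders):

Python
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.tools import QueryEngineTool
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

filtered_qe = vector_index.as_query_engine(
    filters=MetadataFilters(filters=[ExactMatchFilter(key="source", value="docs")])
)
router = RouterQueryEngine.from_defaults(
    query_engine_tools=[
        QueryEngineTool.from_defaults(filtered_qe, description="Filtered vector search"),
    ],
)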
6 comments
llm_dev ·

Routing

I have my RouterQueryEngine configured with a SQL and a vector agent. There are many false positives where the prompt unnecessarily goes to the SqlAgent rather than the VectorAgent. Is there any way to improve the routing mechanism at the LLM routing layer? I changed the descriptions and that made really good progress, but there are still a few that go to the SqlAgent.
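A hedged sketch of two levers that tend to help: a structured (Pydantic) selector and sharper, mutually exclusive tool descriptions (the descriptions below are illustrative; sql_qe, vector_qe, and llm are assumed to exist):

Python
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import PydanticSingleSelector
from llama_index.core.tools import QueryEngineTool

router = RouterQueryEngine(
    selector=PydanticSingleSelector.from_defaults(llm=llm),
    query_engine_tools=[
        QueryEngineTool.from_defaults(
            sql_qe,
            description="ONLY for aggregations/joins over the structured tables",
        ),
        QueryEngineTool.from_defaults(
            vector_qe,
            description="Default choice: free-text questions answered from documents",
        ),
    ],
)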
2 comments
Currently I have my vector engine defined as vector_qe = vector_index.as_query_engine(similarity_top_k=5, node_postprocessors=[node_postprocessor_fixed], vector_store_query_mode="hybrid"). Is there a way to define a score cutoff like 0.1 (i.e. remove nodes scoring below 0.1 before they're sent to the LLM), something like a postprocessor? Sorry if it's a stupid question.
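Not a stupid question; a minimal sketch of exactly such a postprocessor (names reused from the line above):

Python
from llama_index.core.postprocessor import SimilarityPostprocessor

vector_qe = vector_index.as_query_engine(
    similarity_top_k=5,
    vector_store_query_mode="hybrid",
    node_postprocessors=[
        node_postprocessor_fixed,
        SimilarityPostprocessor(similarity_cutoff=0.1),  # drop nodes scoring below 0.1
    ],
)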
2 comments