ChicoButico
Offline, last seen 2 months ago
Joined September 25, 2024
Hi, is there a way of doing isolated instrumentation? I have one Workflow and one old Query Pipeline, and I want to send only the traces/spans from my new Workflow and ignore the Query Pipeline ones. Thanks!
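A minimal sketch of one way to approach this, assuming the instrumentation event-handler API; the filter condition is purely illustrative, since in practice you need a criterion that reliably distinguishes workflow activity from Query Pipeline activity (e.g. span ancestry):

```python
from llama_index.core.instrumentation import get_dispatcher
from llama_index.core.instrumentation.event_handlers import BaseEventHandler


class WorkflowOnlyHandler(BaseEventHandler):
    """Collects events, dropping anything that doesn't look workflow-related."""

    @classmethod
    def class_name(cls) -> str:
        return "WorkflowOnlyHandler"

    def handle(self, event, **kwargs) -> None:
        # Hypothetical filter: keep only events emitted from workflow modules.
        # A real setup would likely need span-ancestry checks instead.
        if "workflow" in type(event).__module__:
            print(event.class_name(), event.timestamp)


# Attach to the root dispatcher and rely on the filter above,
# rather than instrumenting the Query Pipeline at all.
get_dispatcher().add_event_handler(WorkflowOnlyHandler())
```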
Hi guys, in an agent .chat() response, I'm trying to collect information about the steps executed by the agent. I mainly need to know the source of that response. Most of the time I'm using high-level APIs for simplicity, but if there's a way to make this customization by dropping down to a lower-level API, I'd be glad to hear about it.

When my agent calls a RetrieverQueryEngine, the response contains a source_nodes prop and this is great. I'm able to identify the nodes used to compose that response.

But when this agent calls another agent (let's say a sub-agent; I'm using it for function calling with ObjectIndex), I only get sources for the first tool, and nothing that the sub-agent calls is accessible in the final response.

I want to obtain the retrieved nodes from ObjectIndex and identify what function was called. I'll post an image in the thread for clarification.
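A minimal sketch of walking the tool outputs attached to the top-level response, assuming a hypothetical `top_agent` that wraps the sub-agent as a tool; whether the sub-agent's own source_nodes are reachable depends on what that tool returns as raw_output:

```python
# Inspect which tools the top-level agent called and what they returned.
response = top_agent.chat("What did Q3 revenue look like?")

for tool_output in response.sources:  # list of ToolOutput objects
    print("tool:", tool_output.tool_name)
    print("raw input:", tool_output.raw_input)

    # If the tool wraps a sub-agent, raw_output may itself be an agent/query
    # response carrying its own source_nodes (assumption: depends on the tool).
    inner = tool_output.raw_output
    if hasattr(inner, "source_nodes"):
        for node in inner.source_nodes:
            print("  sub-source:", node.node_id, node.score)
```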
6 comments
Hi guys, I'm facing a recurring issue in an agent I'm developing. The agent has a first LLM call to summarize the chat history, but sometimes, even when the chat has only the first message and no history, the LLM returns a tool call with an input that completely changes the meaning of the query. Do you guys have any idea how I can make this more reliable? Is this something I can fix with a system prompt? I'm trying to use gpt-3.5-turbo to keep costs down... I'll add some images to the thread that describe the issue in more detail.
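One thing worth trying is constraining the behaviour with a system prompt and a low temperature when building the agent; a minimal sketch, assuming an OpenAIAgent with your existing tools (the prompt text is just an example, not a known-good recipe):

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

# Example system prompt; the wording is illustrative.
SYSTEM_PROMPT = (
    "Only call the summarize_history tool when the conversation actually "
    "contains previous messages. If there is no history, answer the user's "
    "question directly and do not rephrase it."
)

agent = OpenAIAgent.from_tools(
    tools,  # your existing tools
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.0),
    system_prompt=SYSTEM_PROMPT,
)
```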
3 comments
Hi guys, I created an OpenAIAgent and I don't understand the last LLM call it is doing. The agent calls the tool, and I'd be happy for that tool response to be the last one. But then there's a final LLM call that just takes the return value from the tool as input. In this last LLM call, sometimes I get the same response as the tool, and sometimes I get a hallucination, like the agent asking if it can help with anything else...
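If the tool's output is already the final answer, one option is marking the tool with return_direct=True so the agent skips that closing LLM call; a minimal sketch with a hypothetical tool:

```python
from llama_index.core.tools import FunctionTool


def lookup_order_status(order_id: str) -> str:
    """Hypothetical tool: return the status of an order."""
    return f"Order {order_id} has shipped."


tool = FunctionTool.from_defaults(
    fn=lookup_order_status,
    return_direct=True,  # agent returns this tool's output without a final LLM pass
)
```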
16 comments
Reading the docs about the agentic strategies, I created a RouterQueryEngine and that worked very well. But I ended up realizing I wanted to use a chat style, from chat engines or agents. So I wrapped that RouterQueryEngine in a QueryEngineTool and passed that into an OpenAIAgent. There's just this single tool for the agent, but the agent always has to make an LLM call just to decide to use that single tool. Is there any way of setting this QueryEngineTool as the default to be executed and skipping that LLM call? Thanks!
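One way to keep a chat-style interface while always hitting the router (and skipping the agent's tool-selection call) is to wrap it in a chat engine instead of an agent; a minimal sketch, assuming CondenseQuestionChatEngine fits the use case and that `router_query_engine` and `llm` already exist:

```python
from llama_index.core.chat_engine import CondenseQuestionChatEngine

# router_query_engine is the RouterQueryEngine you already built.
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=router_query_engine,
    llm=llm,  # used only to condense chat history into a standalone question
)

response = chat_engine.chat("And how does that compare to last year?")
```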
5 comments
Is there a way to make the ToolRetrieverRouterQueryEngine.query method work with FunctionTool? It tries to call a query_engine attribute on FunctionTool, which doesn't exist... More details in the thread.
5 comments
Hi, I'm reading about SimpleKeywordTableIndex and doing tests with it. I've been using it with PostgresIndexStore.

PostgresIndexStore seems to be storing the index_store data correctly, in a JSON format with a table object. But every time I call from_documents, it runs the transformations again and stores a new row in that table, even though the documents are the same.

Shouldn't it use the document hashes to avoid reindexing and re-storing the table index every time?
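from_documents always rebuilds the index; the usual pattern is to persist once and reload on subsequent runs. A minimal sketch, assuming the Postgres-backed index store you already have (the index_id and variable names are illustrative):

```python
from llama_index.core import (
    SimpleKeywordTableIndex,
    StorageContext,
    load_index_from_storage,
)

# index_store is your existing PostgresIndexStore; documents are your loaded docs.
storage_context = StorageContext.from_defaults(index_store=index_store)

try:
    # Reload the previously built keyword table instead of re-running transformations.
    index = load_index_from_storage(storage_context, index_id="keyword-table")
except ValueError:
    # First run: build it once, tag it, and persist through the storage context.
    index = SimpleKeywordTableIndex.from_documents(
        documents, storage_context=storage_context
    )
    index.set_index_id("keyword-table")
```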
3 comments
Hi guys, what patterns should I study/look at if I want to build a chatbot that consumes dynamic data? By that I mean data that lives in databases rather than in static documents. I know that LlamaIndex has many different readers, but I'm wondering whether, during a REST API call to my chatbot backend, calling API readers, populating the vector store with those documents, and then querying the vector store is the right approach.
1 comment
QueryPipeline has run_multi_with_intermediates.

Is there any way to achieve the same with workflows?

run_multi_with_intermediates is a nice way to inspect the results of each node after the whole query has run.
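There's no direct equivalent, but each step can publish its intermediate output to the event stream, which you can read while the run progresses; a minimal sketch, assuming your steps call ctx.write_event_to_stream with a custom event:

```python
import asyncio

from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class StepResult(Event):
    name: str
    output: str


class MyWorkflow(Workflow):
    @step
    async def retrieve(self, ctx: Context, ev: StartEvent) -> StopEvent:
        nodes = "...retrieved nodes..."  # placeholder for real work
        # Publish this step's intermediate output to the event stream.
        ctx.write_event_to_stream(StepResult(name="retrieve", output=nodes))
        return StopEvent(result=nodes)


async def main():
    handler = MyWorkflow(timeout=60).run(query="hello")
    async for ev in handler.stream_events():
        if isinstance(ev, StepResult):
            print(ev.name, "->", ev.output)
    final = await handler
    print("final:", final)


if __name__ == "__main__":
    asyncio.run(main())
```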
2 comments
Hi guys, I'm starting to study Workflows so I can migrate a current Query Pipeline I have to this new approach the framework made available.

I see I can pass data between steps through Context. But what's the difference between doing that and setting an instance property on my workflow? My only guess is that a single Workflow instance is meant to be used by many different users. Is that the case? Thanks!
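A minimal sketch of the Context-based style, assuming the current Workflow API; state set on the Context belongs to a single run, while an instance attribute would be shared by every run of that Workflow object:

```python
from llama_index.core.workflow import Context, StartEvent, StopEvent, Workflow, step


class ChatWorkflow(Workflow):
    @step
    async def start(self, ctx: Context, ev: StartEvent) -> StopEvent:
        # user_id is whatever was passed to .run(user_id=...); "anonymous" is a fallback.
        user_id = getattr(ev, "user_id", "anonymous")

        # Stored on the Context, so it lives with this run, not the instance.
        await ctx.set("user_id", user_id)
        stored = await ctx.get("user_id")
        return StopEvent(result=f"handled request for {stored}")


# Usage: result = await ChatWorkflow().run(user_id="alice")
```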
5 comments
Hi guys, the project I work on is about building a chatbot assistant. Users will ask it anything, but there are many things it shouldn't try to answer; in fact, it should only answer about a specific set of subjects... What strategies do you use for identifying what the chatbot doesn't know about? Right now I'm trying post-processor reranking (FlagEmbedding with BAAI/bge-reranker-large) with a threshold, so if no documents pass that step, it means my chatbot doesn't know how to reply to that query. I still have to tweak the threshold, because during my tests the user query was sometimes supposed to match a document, but the reranking score fell below the threshold and the chatbot said it didn't know about the subject of the query.

Do you guys know of other methods or strategies for restricting the topics a chatbot answers about? Thanks!
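A minimal sketch of the "refuse when nothing survives the cutoff" pattern, assuming a similarity cutoff postprocessor stacked after the reranker; the threshold value is only an example and has to match the score scale your reranker actually produces:

```python
from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.postprocessor.flag_embedding_reranker import FlagEmbeddingReranker

reranker = FlagEmbeddingReranker(model="BAAI/bge-reranker-large", top_n=5)
cutoff = SimilarityPostprocessor(similarity_cutoff=0.3)  # example threshold

query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[reranker, cutoff],
)

response = query_engine.query("How do I cancel my plan?")  # example query
if not response.source_nodes:
    answer = "Sorry, I don't have information about that topic."
else:
    answer = str(response)
```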
5 comments
Hi! Is it intentional that we can't pass custom object params through a query pipeline's run method? It accepts only serializable objects; when trying to pass a custom object, it fails in some inner calls to json.dump.
1 comment
ChicoButico · Agents

Hi guys. Agents are prepared to execute tools/functions, but what are the low-level API concepts I should look into if I want to bring that control into my application? My project today doesn't use agents, it uses a query pipeline. I want to start doing function calling, and I want to go a bit lower level than agents, but I also don't want to code everything myself if I can rely on lower-level APIs from LlamaIndex. Thanks!
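A minimal sketch of driving function calling below the agent layer, assuming an OpenAI function-calling LLM and the chat_with_tools / get_tool_calls_from_response helpers; the tool and message are illustrative:

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


tool = FunctionTool.from_defaults(fn=multiply)
llm = OpenAI(model="gpt-4o-mini")

# Ask the model; it may answer directly or request a tool call.
response = llm.chat_with_tools([tool], user_msg="What is 21 * 2?")

# Inspect and execute any requested tool calls yourself.
for tool_call in llm.get_tool_calls_from_response(
    response, error_on_no_tool_call=False
):
    if tool_call.tool_name == tool.metadata.name:
        result = tool(**tool_call.tool_kwargs)
        print(tool_call.tool_name, "->", result)
```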
1 comment
ChicoButico · Async

Hi! PydanticSingleSelector doesn't support async. Is there any alternative to this? Thanks!
4 comments
Hi, I'm struggling a lot to understand how I can update nodes in the database (I'm using PGVectorStore). To use update_ref_doc from VectorStoreIndex, I have to pass a Document as an argument. But how can I recover a node from the database as a Document?
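One common workaround is to rebuild (or re-fetch) the updated Document with the same id, then delete the old nodes by ref_doc_id and re-insert; a minimal sketch, assuming you know the ref_doc_id (the id shown is illustrative):

```python
from llama_index.core import Document

# Rebuild (or re-fetch) the updated source document, reusing the same doc id.
updated_doc = Document(text=new_text, id_="customer-faq-42")  # illustrative id

# Drop all nodes derived from the old version, then index the new one.
index.delete_ref_doc("customer-faq-42", delete_from_docstore=True)
index.insert(updated_doc)

# Alternatively, if you kept the Document around, update_ref_doc does both steps:
# index.update_ref_doc(updated_doc)
```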
4 comments