pavan._01
Offline, last seen last month
Joined September 25, 2024
Hello @Logan M, @WhiteFang_Jr, @ravitheja, @Seldo.

I have a question regarding the use of two FunctionCallingAgents within a single workflow.

Here's the scenario:
I’ve created Agent A, which decides which event to execute (Event A or Event B). Now, Event B utilizes another FunctionCallingAgent. Is this approach valid?

I’ve attempted this setup, but it’s not functioning as expected. I’d like to confirm whether this is a feasible implementation or if adjustments are needed.

My aim is to implement a multi-agent system within a workflow, where Agent A interacts with Agent B. Could you share any relevant resources or examples to guide me on this?

Looking forward to your input!
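For reference, here is a minimal sketch of the setup I am attempting, assuming the current Workflow API and FunctionCallingAgent.from_tools; the multiply tool, the model name, and the keyword-based routing rule are placeholders for illustration only:

from llama_index.core.agent import FunctionCallingAgent
from llama_index.core.tools import FunctionTool
from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step
from llama_index.llms.openai import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


tools = [FunctionTool.from_defaults(fn=multiply)]
llm = OpenAI(model="gpt-4o-mini")


class EventA(Event):
    query: str


class EventB(Event):
    query: str


class RouterWorkflow(Workflow):
    # Agent A's decision is represented here by a simple keyword check;
    # in my real code an agent makes this routing choice.
    @step
    async def route(self, ev: StartEvent) -> EventA | EventB:
        query = str(ev.get("query", ""))
        if "multiply" in query:
            return EventB(query=query)
        return EventA(query=query)

    @step
    async def handle_a(self, ev: EventA) -> StopEvent:
        return StopEvent(result=f"Handled directly: {ev.query}")

    @step
    async def handle_b(self, ev: EventB) -> StopEvent:
        # Event B uses a second FunctionCallingAgent (Agent B).
        agent_b = FunctionCallingAgent.from_tools(tools, llm=llm, verbose=True)
        response = agent_b.chat(ev.query)
        return StopEvent(result=str(response))


# Usage (inside an async context):
#   result = await RouterWorkflow(timeout=60).run(query="multiply 2 and 3")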
5 comments
Hello @WhiteFang_Jr, @Logan M,

I’m working on memory management using LlamaIndex and have created an agent (Function Calling Agent). I need your help in choosing the best approach for my use case.

In LlamaIndex, is it possible to store previous questions and answers as memory within an agent?

For example:
Question: What happens if you multiply 2 and 3?
Answer: Provided by the agent.

Can we store this interaction in the agent's memory before asking another question, so the agent retains the context for future queries?
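To make the question concrete, this is the kind of thing I have in mind; a rough sketch assuming ChatMemoryBuffer and as_agent(memory=...) are the right hooks for pre-loading a prior interaction (the multiply tool and model name are just for illustration):

from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = OpenAI(model="gpt-4o-mini")
tools = [FunctionTool.from_defaults(fn=multiply)]

# Memory buffer that will hold the running conversation.
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)

# Pre-load the earlier question/answer pair so the agent keeps that context.
memory.put(ChatMessage(role=MessageRole.USER, content="What happens if you multiply 2 and 3?"))
memory.put(ChatMessage(role=MessageRole.ASSISTANT, content="2 multiplied by 3 is 6."))

worker = FunctionCallingAgentWorker.from_tools(tools, llm=llm, verbose=True)
agent = worker.as_agent(memory=memory)

# The follow-up question can now rely on the stored interaction.
print(agent.chat("And what if you add 4 to that result?"))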
5 comments
Hello,

I am looking to integrate a prompt framework for my Agent [Function Calling Agent Worker]. It would be great if you could suggest a suitable option.

I have started working with Promptify but encountered version compatibility issues between LlamaIndex [0.10.27] and Promptify [2.0.23].

Specifically:
  • LlamaIndex core requires openai>=1.1.0, while Promptify requires openai 0.27.0.
  • LlamaIndex core requires 4.66.1 <= tqdm <= 5.0.0, but Promptify requires tqdm 4.65.0.
Our agent code is ready, but I need a robust prompt library or framework to integrate with LlamaIndex. Your suggestions would be greatly appreciated.
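For context, one option I am also considering is staying inside LlamaIndex's own PromptTemplate instead of an external framework, roughly like this (sketch only; the role/domain template, the multiply tool, and the system_prompt wiring are my assumptions):

from llama_index.core import PromptTemplate
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# Build the agent's system prompt from a reusable template.
system_tmpl = PromptTemplate(
    "You are a {role}. Answer {domain} questions step by step, "
    "and use the provided tools for any arithmetic."
)
system_prompt = system_tmpl.format(role="math assistant", domain="arithmetic")

worker = FunctionCallingAgentWorker.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt=system_prompt,
    verbose=True,
)
agent = worker.as_agent()
print(agent.chat("What happens if you multiply 2 and 3?"))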
6 comments
Hello,

I have a question regarding the difference between a function output and an LLM response.

Here's my current understanding:
I'm utilizing some advanced RAG techniques for building my agent, specifically the Function Calling Agent Worker. This is followed by the AgentRunner, which processes the query.

From my perspective, the function output refers to the result generated by the Function Calling Agent. In other words, it's the direct output produced by the specific function executed within the agent framework.

On the other hand, the LLM response involves taking this function output and sending it to the Language Model (LLM) defined within the Function Calling Agent Worker. The LLM then processes this information and generates a final response based on the function output.

Could you please clarify if my understanding is correct?

Any additional insights or corrections would be greatly appreciated.
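To show where my understanding comes from, this is roughly how I am inspecting the two, assuming the agent's chat response exposes the tool results under .sources and the final answer under .response (the multiply tool and model name are placeholders):

from llama_index.core.agent import FunctionCallingAgentWorker, AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


worker = FunctionCallingAgentWorker.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-4o-mini"),
    verbose=True,
)
agent = AgentRunner(worker)

response = agent.chat("What happens if you multiply 2 and 3?")

# Function (tool) outputs: the raw results of the tool calls the agent made.
for tool_output in response.sources:
    print(tool_output.tool_name, "->", tool_output.content)

# LLM response: the final natural-language answer synthesized from those outputs.
print(response.response)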
3 comments
Hello,

I am experimenting with prompt techniques and had success using the QueryEngine.

Here is the code I used:
from llama_index.core.query_engine import RetrieverQueryEngine

retriever_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
)
retriever_response = retriever_engine.get_prompts()
prompt = display_prompt_dict(retriever_response)

However, now I need to get the prompt template for agents.
I am using the following code:

from llama_index.core.agent import FunctionCallingAgentWorker, AgentRunner

prompt_response = agent.get_prompts()

When it comes to the agent, I am unable to retrieve the prompt template.

Do you have any suggestions?
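For completeness, this is the kind of probing I am doing; a sketch assuming the prompts live on the worker rather than the runner, and that FunctionCallingAgentWorker exposes its system messages as prefix_messages rather than a classic PromptTemplate (the multiply tool and model name are placeholders):

from llama_index.core.agent import FunctionCallingAgentWorker, AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


worker = FunctionCallingAgentWorker.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful math assistant.",
)
agent = AgentRunner(worker)

# The runner itself appears to carry no prompts of its own ...
print("runner prompts:", agent.get_prompts())

# ... so I also look at the worker: its "prompt" seems to be the prefix
# (system) messages sent ahead of every LLM call.
print("worker prompts:", worker.get_prompts())
print("prefix messages:", worker.prefix_messages)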
13 comments
Hello @kapa.ai, @Logan M, @Seldo.
Reference: https://docs.llamaindex.ai/en/stable/examples/workflow/long_rag_pack/
I'm currently working on a PoC for LongRAG. Can we use Prompt Templates in LongRAG?
If so, could you please share any references or resources on how to use prompt templates within LongRAG?
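In case it helps frame the question, this is the generic pattern I would like to apply, assuming LongRAG's query path exposes its prompts through the usual get_prompts()/update_prompts() mixin; the toy index below is only a stand-in, and whether the LongRAG workflow example exposes prompts the same way is exactly what I need to confirm:

from llama_index.core import Document, PromptTemplate, VectorStoreIndex

# Stand-in index; in the LongRAG PoC this would be whatever the pack/workflow builds.
index = VectorStoreIndex.from_documents(
    [Document(text="LongRAG groups documents into long retrieval units.")]
)
query_engine = index.as_query_engine()

# Standard PromptMixin pattern: list the prompt keys, then swap one out.
for key in query_engine.get_prompts():
    print(key)  # e.g. response_synthesizer:text_qa_template

custom_qa = PromptTemplate(
    "Answer the question using only the context below.\n"
    "---------------------\n{context_str}\n---------------------\n"
    "Question: {query_str}\nAnswer: "
)
query_engine.update_prompts({"response_synthesizer:text_qa_template": custom_qa})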
4 comments
Hi @WhiteFang_Jr, @Logan M, @kapa.ai,

I'm having trouble retrieving source_nodes from my agent (Function Calling Agent Worker and Agent Runner). The code below was working previously, but now it fails to return the relevant nodes, and I do not understand why.

Code:

from typing import List

from llama_index.core.chat_engine.types import AGENT_CHAT_RESPONSE_TYPE
from llama_index.core.schema import NodeWithScore


def get_text_nodes(
    nodes: List[NodeWithScore],
) -> List[NodeWithScore]:
    text_nodes = []
    for res_node in nodes:
        text_nodes.append(res_node)
    return text_nodes


def relevant_nodes(response: AGENT_CHAT_RESPONSE_TYPE):
    text_nodes = get_text_nodes(response.source_nodes)
    # Extract the nodes from here.
    return text_nodes


response = agent.chat(query)
relevant_docs = relevant_nodes(response=response)

The issue seems to be with extracting source_nodes correctly. Any suggestions or fixes would be greatly appreciated.
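One workaround I am experimenting with is falling back to each tool's raw output when source_nodes comes back empty; a sketch reusing the agent and query from above, and assuming query-engine tools wrap a Response object (with its own source_nodes) in raw_output:

from llama_index.core import Response


def relevant_nodes_with_fallback(response):
    """Collect source nodes, falling back to each tool's raw query-engine response."""
    nodes = list(response.source_nodes)
    if not nodes:
        for tool_output in response.sources:
            # Query-engine tools appear to carry a Response with its own nodes.
            if isinstance(tool_output.raw_output, Response):
                nodes.extend(tool_output.raw_output.source_nodes)
    return nodes


response = agent.chat(query)
relevant_docs = relevant_nodes_with_fallback(response)
print(len(relevant_docs))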
8 comments
Hello @Logan M, @WhiteFang_Jr.

I'm building a RAG system and looking for the best URL reader in LlamaIndex to load URLs and extract text for vector storage. Any suggestions?
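For example, I have been looking at SimpleWebPageReader from the llama-index-readers-web package; a small sketch of what I mean (the URL is just a placeholder):

# pip install llama-index-readers-web
from llama_index.core import VectorStoreIndex
from llama_index.readers.web import SimpleWebPageReader

urls = ["https://docs.llamaindex.ai/en/stable/"]

# html_to_text=True strips the markup so only readable text is indexed.
documents = SimpleWebPageReader(html_to_text=True).load_data(urls)

index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What is LlamaIndex?"))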
4 comments
Hi @Logan M, @kapa.ai.

My question is: can we use the chat_repl() function in Streamlit to get the assistant's response to the question the user has asked?

Code:

chat_engine = index.as_chat_engine(chat_mode="best", verbose=True)
chat_engine.chat_repl()

The main reason for using chat_repl() is that it keeps the conversation history, so when I ask about a previous question in the chat, it can recall it.

However, I am unable to retrieve the previous question asked by the user when I use this code:

Code:

chat_engine = index.as_chat_engine(chat_mode="best", verbose=True)
user_input = "Question asked by the user."
response = chat_engine.chat(user_input)

I need your assistance.
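In case it clarifies the goal, this is the Streamlit pattern I am trying to reproduce without chat_repl(); a sketch assuming the index is built elsewhere in the app and that keeping one chat engine in st.session_state preserves the conversation across reruns:

# pip install streamlit llama-index
import streamlit as st

# `index` is assumed to be built elsewhere (e.g. cached with @st.cache_resource).
if "chat_engine" not in st.session_state:
    st.session_state["chat_engine"] = index.as_chat_engine(chat_mode="best", verbose=True)
if "messages" not in st.session_state:
    st.session_state["messages"] = []

chat_engine = st.session_state["chat_engine"]

# Replay the conversation so earlier questions stay visible.
for msg in st.session_state["messages"]:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if user_input := st.chat_input("Ask a question"):
    st.session_state["messages"].append({"role": "user", "content": user_input})
    with st.chat_message("user"):
        st.write(user_input)

    # chat() keeps its own memory, so follow-ups about earlier questions work.
    response = chat_engine.chat(user_input)
    st.session_state["messages"].append({"role": "assistant", "content": str(response)})
    with st.chat_message("assistant"):
        st.write(str(response))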
12 comments