----------------------
Hello! It sounds like you're looking to add more interactivity to your RAG pipeline by using agents to ask follow-up questions. This is indeed a great way to refine the results and provide more precise answers.
LlamaIndex provides a lower-level agent API that allows controllable, step-wise execution on top of a RAG pipeline. This lets you add agentic capabilities to your pipeline, which helps with reasoning over more complex questions. Agents are often criticized for their lack of steerability and transparency, but because LlamaIndex's agent API executes one step at a time, you can inspect intermediate results and decide when a follow-up question is needed.
Here's a basic example of how you can set up an OpenAI Agent using LlamaIndex:
from llama_index.core.agent import AgentRunner
from llama_index.agent.openai import OpenAIAgent, OpenAIAgentWorker

# query_engine_tools and llm are assumed to be defined earlier
# (e.g. QueryEngineTool wrappers around your RAG query engines and an OpenAI LLM)
openai_step_engine = OpenAIAgentWorker.from_tools(
    query_engine_tools, llm=llm, verbose=True
)
agent = AgentRunner(openai_step_engine)

# alternative: build the agent in one call
# agent = OpenAIAgent.from_tools(query_engine_tools, llm=llm, verbose=True)
(Source: LlamaIndex docs, Examples > Agent > Controllable Agents for RAG > Setup OpenAI Agent)
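Once the agent is set up, the step-wise API lets you drive execution one step at a time and inspect each intermediate result. A minimal sketch of that loop is below; the question string is just a placeholder, and you would typically examine step_output between steps before deciding how to proceed:

# create a task for a user question (placeholder question)
task = agent.create_task("What did the author work on before college?")

# run one step at a time; you can inspect step_output after each call
step_output = agent.run_step(task.task_id)
while not step_output.is_last:
    step_output = agent.run_step(task.task_id)

# once the last step has run, finalize and print the response
response = agent.finalize_response(task.task_id)
print(str(response))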
However, the specific implementation of asking follow-up questions will depend on your application logic. You would need to give the agent a way to track the conversation context and decide, at each step, whether it has enough information to answer or should instead ask the user a clarifying question, and what that question should be. One simple pattern is sketched below.
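For example, one purely illustrative approach is to inspect the latest step output and ask the LLM whether a clarifying question is warranted before finalizing the response. The helper below and its prompt are assumptions for this sketch, not a LlamaIndex API:

from typing import Optional

def maybe_ask_follow_up(llm, user_query: str, step_output) -> Optional[str]:
    """Illustrative helper: decide whether to ask the user a clarifying question.

    `llm` is any LlamaIndex LLM (e.g. the one passed to the agent); the prompt
    wording and the NONE return convention are assumptions for this sketch.
    """
    prompt = (
        "You are helping answer a user question with a RAG agent.\n"
        f"User question: {user_query}\n"
        f"Latest agent step output: {step_output.output}\n"
        "If the question is ambiguous or missing key details, reply with a single "
        "clarifying question to ask the user. Otherwise reply with exactly NONE."
    )
    reply = llm.complete(prompt).text.strip()
    return None if reply == "NONE" else reply

# usage inside the step-wise loop (sketch):
# follow_up = maybe_ask_follow_up(llm, user_query, step_output)
# if follow_up:
#     print(follow_up)  # surface the question to the user and wait for their answer

Because you control each step, you can pause the loop here, collect the user's answer, and feed it back into the task before running further steps.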