how can I force an agent created with create_llama_chat_agent to use only the context information?
There are two levels to this:
  1. The agent, which interprets the user message, decides if a tool needs to be used, and interprets the tool response (if the tool is not return_direct)
  2. The query engine, which operates over the data in your index
I would customize the prompt templates for the query engine(s). Examples here:
https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/prompts.html#modules
You might have to add more detail to the prompt template, telling it to use only the context.
It takes quite a bit of fiddling, especially because the default prompts already tell it not to use outside knowledge.
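To make the template idea concrete, here is a sketch of a stricter QA prompt string. The `{context_str}` and `{query_str}` placeholders are the ones LlamaIndex's QA templates use; the exact wording and the "I don't know" instruction are illustrative, not the library's defaults. (You would wrap a string like this and pass it as the query engine's `text_qa_template`, per the prompt docs linked above.)

```python
# A stricter QA prompt string (sketch). The wording is illustrative;
# {context_str} and {query_str} are the placeholder names LlamaIndex's
# QA templates expect.
STRICT_QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, answer the query.\n"
    "If the answer is not in the context, reply exactly: I don't know.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# The final prompt the LLM sees is just this string with both slots filled:
prompt = STRICT_QA_TMPL.format(
    context_str="LlamaIndex was formerly called GPT Index.",
    query_str="What was LlamaIndex formerly called?",
)
```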
Unless your question is about forcing the agent to call a tool?
My agent is using a query engine and it doesn't have any additional tools. The primary reason I am using an agent is for chat history.
I am customizing the text_qa_template just like in the example, but I'm still not getting what I want
Customizing the QA and refine templates is basically the only way

Like I said, it may take a lot of iterating :PSadge:
@Logan M when i use just the query engine, I can force it to use only the context information, but when I am using agents, it's not possible. What should I do?
Looks like the agent is not using the prompt I am supplying to the underlying query engine
Right, the agent has its own prompt, the query engines are separate
How are you creating an agent right now?
Plain Text
    from langchain.llms import OpenAI
    from langchain.memory import ConversationBufferMemory
    from llama_index.langchain_helpers.agents import (
        IndexToolConfig,
        LlamaToolkit,
        create_llama_chat_agent,
    )

    llm = OpenAI(temperature=0)
    memory = ConversationBufferMemory(memory_key="chat_history")

    # "engine" is the query engine built from the index elsewhere
    tool_config = IndexToolConfig(
        query_engine=engine,
        name="Index",
        description="index",
        tool_kwargs={"return_direct": True},
    )
    toolkit = LlamaToolkit(index_configs=[tool_config])

    agent_chain = create_llama_chat_agent(
        toolkit,
        llm,
        memory=memory,
        verbose=True,
    )
@Logan M it's directly from the documentation, I think
Yea. So this uses LangChain under the hood. You can pass an agent_kwargs={...} dict to forward extra config to the agent. I think the PREFIX key controls the agent prefix, but honestly LangChain doesn't make this easy to figure out.

Tbh I would advocate for our recently released agents. They're much easier to use and configure, and setting the system prompt is simple.

In this example, you can pass in a system prompt string with extra instructions on how to act:

https://gpt-index.readthedocs.io/en/latest/examples/agent/openai_agent.html#agent-with-personality
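Mechanically, a system prompt is just the first message sent on every chat turn. A minimal pure-Python sketch of that message assembly is below; the `OpenAIAgent` line in the comment is the assumed API from the linked example, not executed here, and the prompt wording is illustrative.

```python
# Assumed API from the linked docs (shown for context, not executed here):
#   from llama_index.agent import OpenAIAgent
#   agent = OpenAIAgent.from_tools([query_tool], system_prompt=SYSTEM_PROMPT)
SYSTEM_PROMPT = (
    "Answer strictly from the tool output. If the tools return nothing "
    "relevant, say you don't know. Do not use outside knowledge."
)

def build_messages(history, user_msg):
    """Sketch of how a system prompt shapes every request payload:
    it is prepended before the chat history and the new user message."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_msg}]
    )

msgs = build_messages([], "What does the index say about pricing?")
```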
Thank you very much
@kapa.ai how to use OpenAiAgent with index?
To use OpenAIAgent with an index in LlamaIndex, you need to import the OpenAIAgent from llama_index.agent and then initialize it with the tools you want to use. Here's an example of how to do it:

Plain Text
from llama_index.agent import OpenAIAgent

agent = OpenAIAgent.from_tools([multiply_tool, add_tool], verbose=True)


In this example, multiply_tool and add_tool are the tools you want to use with the agent. You would replace these with the tools you want to use from your index.
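As a sketch of that replacement step (assuming `QueryEngineTool.from_defaults` from LlamaIndex's tools module, and an `index` built elsewhere):

```python
from llama_index.tools import QueryEngineTool
from llama_index.agent import OpenAIAgent

# "index" is an existing index (e.g. a VectorStoreIndex) built elsewhere
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="docs",
    description="Answers questions about the indexed documents.",
)
agent = OpenAIAgent.from_tools([query_tool], verbose=True)
```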

After initializing the agent, you can start a chat with the agent using the chat_repl() method:

Plain Text
agent.chat_repl()


@Logan M you are awesome. I had already figured it out though