
Take this tool

Plain Text
tools = [
    Tool(
        name="GPT Index",
        func=lambda q: str(index.query(q)),
        description="Always use this tool. The input to this tool should be a complete english sentence.",
        return_direct=True,
    ),
]

and integrate it into a llama tool
Please stand by @cK5150 as I search the knowledge base. We are experiencing especially high traffic right now, so this might take longer than usual. :wave:
----------------------
To integrate the given tool into a Llama tool, you can use the IndexToolConfig and LlamaIndexTool classes provided by gpt_index. Here's how you can do it:

Plain Text
from gpt_index.langchain_helpers.agents import IndexToolConfig, LlamaIndexTool

tool_config = IndexToolConfig(
    index=index, 
    name="GPT Index",
    description="Always use this tool. The input to this tool should be a complete english sentence.",
    index_query_kwargs={},
    tool_kwargs={"return_direct": True}
)

llama_tool = LlamaIndexTool.from_tool_config(tool_config)


This code snippet creates a Llama tool using the given tool configuration. Make sure you have the index variable defined before using this code.
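Once created, the tool behaves like any other LangChain tool. As a quick sanity check (a sketch; the query string is just an example):

Plain Text
# Sketch: call the tool directly. LlamaIndexTool is a LangChain tool,
# so .run() passes the input string through to index.query().
response = llama_tool.run("What did the author do growing up?")
print(response)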

Source: Using with Langchain πŸ¦œπŸ”— (https://gpt-index.readthedocs.io/en/latest/how_to/integrations/using_with_langchain.html)
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai can this tool be used in a create_llama_agent
Let me check the knowledge base for you. One moment! :books:
----------------------
Yes, you can use this tool with create_llama_agent. First, create a LlamaToolkit from the tool_config you defined earlier, then pass the toolkit to the create_llama_agent function to build the agent.

Here's an example:

Plain Text
from gpt_index.langchain_helpers.agents import LlamaToolkit, create_llama_agent

toolkit = LlamaToolkit(
    index_configs=[tool_config],
)

agent_chain = create_llama_agent(
    toolkit,
    llm,  # Make sure you have the llm (language model) variable defined
)

# Now you can use the agent_chain to run queries


This code snippet creates a LlamaToolkit from the tool_config and then uses the create_llama_agent function to create an agent that can call the underlying index.
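As a usage sketch (assuming the agent was built as above; agent_chain.run is the standard LangChain entry point):

Plain Text
# Sketch: ask the agent a question. It should route the input to the
# "GPT Index" tool, and because return_direct=True the tool's answer
# is returned without an extra agent step.
response = agent_chain.run(input="What does the indexed document say about its main topic?")
print(response)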

Source: Using with Langchain πŸ¦œπŸ”— (https://gpt-index.readthedocs.io/en/latest/how_to/integrations/using_with_langchain.html)
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai This agent

Plain Text
agent_chain = create_llama_agent(
    toolkit,
    llm=llm,
    agent="conversational-react-description",
    memory=conv_window_memory,
    verbose=True,
)

is giving me the agent prompt and the llama default prompt, why?
Let me check the knowledge base for you. One moment! :books:
----------------------
The create_llama_agent function builds a LangChain agent around the specified toolkit and language model (llm). Because you passed verbose=True, you see two different prompts: they come from two separate LLM calls. The agent type you chose ("conversational-react-description") has its own prompt, which the agent uses to decide which tool to invoke and with what input; the Llama default prompt is then used by the underlying LlamaIndex tool when it runs index.query() on that input.

So the agent prompt drives the tool-selection step, while the Llama default prompt drives the answer-synthesis step inside the index; together they produce the final response.

If you want to customize the prompts, you can create custom prompt templates and pass them to the index via index_query_kwargs in the IndexToolConfig, or adjust the agent's own prompt through its agent configuration. You can refer to the Defining Prompts (https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_prompts.html) guide for more information on creating custom prompts.
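For example, here is a minimal sketch of overriding the index-side question-answering prompt (this assumes gpt_index's QuestionAnswerPrompt class and the text_qa_template query kwarg; the template wording is illustrative):

Plain Text
from gpt_index.prompts.prompts import QuestionAnswerPrompt

# Illustrative template; {context_str} and {query_str} are the variables
# a QuestionAnswerPrompt expects.
qa_template = (
    "Context information is below:\n"
    "{context_str}\n"
    "Using only this context, answer the question: {query_str}\n"
)

tool_config = IndexToolConfig(
    index=index,
    name="GPT Index",
    description="Always use this tool. The input to this tool should be a complete english sentence.",
    index_query_kwargs={"text_qa_template": QuestionAnswerPrompt(qa_template)},
    tool_kwargs={"return_direct": True},
)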

Source: Using with Langchain πŸ¦œπŸ”— (https://gpt-index.readthedocs.io/en/latest/how_to/integrations/using_with_langchain.html)
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai How does using two prompts on one llm query not confuse the llm?
Let me check the knowledge base for you. One moment! :books:
----------------------
Using two prompts does not confuse the LLM because the prompts are not merged into a single query: each prompt backs its own LLM call. The agent prompt frames the call in which the LLM decides which tool to use and what input to give it; the LlamaIndex prompt frames the later call in which the LLM synthesizes an answer from the context the index retrieved.

In the case of LlamaIndex, prompts are used to build the index, perform traversal during querying, and synthesize the final answer. LlamaIndex uses a finite set of prompt types, and all index classes, along with their associated queries, utilize a subset of these prompts (source (https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_prompts.html)).

When a custom agent is combined with LlamaIndex, the agent's output (the tool input it chooses) simply becomes the query string inside the LlamaIndex prompt, so every prompt the LLM actually sees is a single, coherent one with a clear task.

In summary, the prompts are chained rather than combined: each one scopes a separate, well-structured LLM call, which is why the LLM is not confused.
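A schematic of the flow (illustration only; exact prompt text varies by agent type and index):

Plain Text
# Illustration, not library code: the two prompts back two separate LLM calls,
# chained by the agent loop rather than concatenated.
#
# Call 1 - the agent prompt; the LLM picks a tool and an input:
#   "You have access to the following tools: GPT Index ...
#    Question: <user message>
#    Thought: I should look this up.
#    Action: GPT Index
#    Action Input: <a complete English sentence>"
#
# Call 2 - the tool runs index.query(<Action Input>); LlamaIndex fills its
# QA prompt with retrieved context before calling the LLM again:
#   "Context information is below: <retrieved nodes>
#    Given the context, answer the question: <Action Input>"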
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai How do I print the combined prompt to the console
Let me check the knowledge base for you. One moment! :books:
----------------------
To print the combined prompt to the console, you can use the LLMPredictor class provided by gpt_index. Its predict method returns a tuple containing the predicted answer and the formatted prompt, so you can print the formatted prompt to the console.

Here's an example:

Plain Text
from gpt_index import LLMPredictor

# Assuming you have an instance of the LLM called 'llm'
llm_predictor = LLMPredictor(llm=llm)

# Assuming 'combined_prompt' is a gpt_index Prompt instance (any template
# variables it needs are passed as keyword arguments to predict)
answer, formatted_prompt = llm_predictor.predict(combined_prompt)

# Print the fully formatted prompt that was sent to the LLM
print("Formatted prompt:", formatted_prompt)


This code snippet demonstrates how to use the LLMPredictor class to obtain the formatted prompt and print it to the console.
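Alternatively, enabling DEBUG logging often surfaces every prompt on the console without touching your code paths (a sketch; this assumes gpt_index emits its prompt/LLM traffic through Python's standard logging module, which can vary by version):

Plain Text
import logging
import sys

# Assumption: formatted prompts are logged at DEBUG level; routing the root
# logger to stdout makes them visible alongside the agent's verbose output.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)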

Source: LLMPredictor (https://gpt-index.readthedocs.io/en/latest/reference/service_context/llm_predictor.html)
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------