How can I obtain the logs similar to: `service_context.llama_logger.get_logs()` but when using `create_llama_chat_agent` (AgentExecutor)?
Please stand by @TennisPrestigious624 as I search the knowledge base. We are experiencing especially high traffic right now and might take longer than usual. :wave:
----------------------
To obtain logs when using create_llama_chat_agent, you can use the LlamaLogger class. First, create a LlamaLogger instance and pass it to the ServiceContext. Then use that ServiceContext when creating the QueryEngine, and finally pass a toolkit wrapping that QueryEngine to the create_llama_chat_agent function. Here's an example:

Plain Text
from llama_index.logger import LlamaLogger
from llama_index import ServiceContext
from llama_index.langchain_helpers.agents import create_llama_chat_agent, LlamaToolkit, IndexToolConfig

# `index`, `llm`, and `memory` are assumed to be defined elsewhere

# Create LlamaLogger instance and attach it to the service context
llama_logger = LlamaLogger()
service_context = ServiceContext.from_defaults(llama_logger=llama_logger)

# Create QueryEngine with the service_context containing the LlamaLogger
query_engine = index.as_query_engine(
    service_context=service_context,
    similarity_top_k=2,
)

# Create the IndexToolConfig and wrap it in a LlamaToolkit
# (the toolkit builds the LlamaIndexTool from the config itself)
tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="Vector Index",
    description="useful for when you want to answer queries about X",
    tool_kwargs={"return_direct": True}
)
toolkit = LlamaToolkit(index_configs=[tool_config])

# Create the chat agent
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)

# Run the chat agent
agent_chain.run(input="Query about X")

# Get logs
logs = service_context.llama_logger.get_logs()


This code snippet sets up the LlamaLogger and uses it with the create_llama_chat_agent function. After running the chat agent, you can obtain the logs using service_context.llama_logger.get_logs().
For more information on LlamaLogger, refer to the LlamaLogger documentation (https://gpt-index.readthedocs.io/en/latest/reference/service_context/llama_logger.html).
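As a quick way to inspect what was captured (a minimal sketch, assuming the legacy LlamaLogger API where get_logs() returns a list of log dicts and reset() clears them):

Plain Text
# Print each captured log record (one per LLM call made through the engine)
for entry in service_context.llama_logger.get_logs():
    print(entry)

# Clear accumulated records so logs from separate queries don't mix
llama_logger.reset()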
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai does that means that if the tool is not used for answering the query the logs will be empty?
Let me check the knowledge base for you. One moment! :books:
----------------------
Yes, that's correct. The LlamaLogger is attached to the ServiceContext of the specific QueryEngine inside the tool, so it only captures LLM calls made through that engine. If the agent answers a query without calling the tool, no logs are recorded for that query.

To ensure that the logs are captured, make sure the tool is used when answering the query.
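A simple way to verify this (a sketch building on the first answer's setup; agent_chain and service_context are assumed from that example):

Plain Text
llama_logger.reset()
agent_chain.run(input="Query about X")

# Empty logs mean the agent answered without calling the query engine tool
if not service_context.llama_logger.get_logs():
    print("Tool was not used; no logs were captured for this query.")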
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how can I ensure the tool is used to answer 100% of the queries?
Let me check the knowledge base for you. One moment! :books:
----------------------
To push the agent to use the tool for every query, create a LlamaToolkit containing only that one tool config and pass it to the create_llama_chat_agent function. With a single tool whose description is broad enough to match your queries, the agent has only that tool to choose from when answering. Here's an example:

Plain Text
from llama_index.langchain_helpers.agents import create_llama_chat_agent, LlamaToolkit, IndexToolConfig

# Create QueryEngine
query_engine = index.as_query_engine()

# Create the IndexToolConfig and wrap it in a LlamaToolkit
tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="Vector Index",
    description="useful for when you want to answer queries about X",
    tool_kwargs={"return_direct": True}
)
toolkit = LlamaToolkit(index_configs=[tool_config])

# Create the chat agent
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)

# Run the chat agent
agent_chain.run(input="Query about X")


In this example, the LlamaToolkit contains only the one tool config wrapping the desired QueryEngine, so the chat agent has no other tool to pick from when answering queries. Note that return_direct=True returns the tool's output to the user directly rather than letting the agent rephrase it.

For more information on creating a chat agent, refer to the Langchain Integrations documentation (https://gpt-index.readthedocs.io/en/latest/reference/langchain_integrations/base.html).
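One extra lever worth noting (an assumption beyond this thread, not a guarantee): a LangChain agent decides whether to call a tool based on its name and description, so a broad description makes the single tool more likely to be chosen for every query. The "Knowledge Base" name and wording below are illustrative only:

Plain Text
# A broader description encourages the agent to route all queries to the tool;
# return_direct=True returns the tool's answer directly, skipping an extra
# agent rewriting step.
tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="Knowledge Base",
    description="useful for answering any question about the indexed documents",
    tool_kwargs={"return_direct": True}
)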
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------