----------------------
To obtain logs when using `create_llama_chat_agent`, you can use the `LlamaLogger` class. First, create a `LlamaLogger` instance and pass it to a `ServiceContext`. Then use that `ServiceContext` when creating the `QueryEngine`. Finally, pass the toolkit containing the `QueryEngine` to the `create_llama_chat_agent` function. Here's an example:
```python
from llama_index.logger import LlamaLogger
from llama_index import ServiceContext
from llama_index.langchain_helpers.agents import (
    create_llama_chat_agent,
    LlamaToolkit,
    IndexToolConfig,
    LlamaIndexTool,
)

# Create a LlamaLogger instance and attach it to a ServiceContext
llama_logger = LlamaLogger()
service_context = ServiceContext.from_defaults(llama_logger=llama_logger)

# Create a QueryEngine with the service_context containing the LlamaLogger
# (assumes `index` is an existing index)
query_engine = index.as_query_engine(
    service_context=service_context,
    similarity_top_k=2,
)

# Wrap the QueryEngine in a LlamaIndexTool and a LlamaToolkit
tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="Vector Index",
    description="useful for when you want to answer queries about X",
    tool_kwargs={"return_direct": True},
)
tool = LlamaIndexTool.from_tool_config(tool_config)
toolkit = LlamaToolkit(index_configs=[tool_config])

# Create the chat agent (assumes `llm` and `memory` are already defined)
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True,
)

# Run the chat agent
agent_chain.run(input="Query about X")

# Get logs
logs = service_context.llama_logger.get_logs()
```
This code snippet sets up the `LlamaLogger` and uses it with the `create_llama_chat_agent` function. After running the chat agent, you can obtain the logs with `service_context.llama_logger.get_logs()`.
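Once retrieved, the logs are a plain Python list, so you can iterate over them however you like. A minimal sketch of a helper that dumps each entry for inspection (the `dump_logs` function and the sample entries here are hypothetical — the exact shape of each log entry depends on your LlamaIndex version, so treat the entries as opaque objects and print them):

```python
# Hypothetical helper: number and stringify each log entry returned
# by LlamaLogger.get_logs(). The sample entries below are stand-ins,
# not the real LlamaIndex log schema.
def dump_logs(logs):
    lines = []
    for i, entry in enumerate(logs):
        lines.append(f"[{i}] {entry}")
    return "\n".join(lines)

sample_logs = [
    {"event": "query", "payload": "Query about X"},
    {"event": "response", "payload": "..."},
]
print(dump_logs(sample_logs))
```

Note that the logs accumulate across queries; if the class in your version exposes a reset method, you can clear them between runs, otherwise recreate the `LlamaLogger`.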