----------------------
Apologies for the confusion in my previous response. You're right, I didn't pass the service_context to the create_llama_chat_agent function. Unfortunately, the provided extracts do not contain information about a create_llama_chat_agent function that accepts a service_context parameter.
However, you can still use the LlamaDebugHandler and CallbackManager with a chat engine such as SimpleChatEngine or ReAct mode. Here's an example using SimpleChatEngine:
- First, create the LlamaDebugHandler and CallbackManager:
from llama_index.callbacks import CallbackManager, LlamaDebugHandler
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])
- Set up a ServiceContext with the callback_manager:
from llama_index import ServiceContext
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0)
service_context = ServiceContext.from_defaults(llm=llm, callback_manager=callback_manager)
- Create a SimpleChatEngine with the service_context:
from llama_index.chat_engine import SimpleChatEngine
chat_engine = SimpleChatEngine.from_defaults(service_context=service_context)
Now you can use the chat_engine to interact with the model, and the LlamaDebugHandler and CallbackManager will be used during the process.
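For example, here's a minimal sketch of a single chat turn plus event inspection. It assumes the llama_debug handler and chat_engine created above, and that your LlamaIndex version exposes CBEventType and LlamaDebugHandler.get_event_pairs:

from llama_index.callbacks import CBEventType
# Send a message; the callback manager records events for this turn,
# and print_trace_on_end=True prints a trace when the turn finishes
response = chat_engine.chat("What can you help me with?")
print(response)
# Inspect the LLM start/end event pairs captured by LlamaDebugHandler
llm_events = llama_debug.get_event_pairs(CBEventType.LLM)
print(f"Captured {len(llm_events)} LLM event pair(s)")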
For more information on using SimpleChatEngine, refer to the documentation (https://gpt-index.readthedocs.io/en/latest/examples/chat_engine/chat_engine_repl.html).
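For the ReAct mode mentioned above, a rough sketch looks like the following. It assumes you already have a documents list loaded and that your LlamaIndex version supports chat_mode="react" on index.as_chat_engine:

from llama_index import VectorStoreIndex
# Build an index with the same service_context so the callback manager applies here too
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
# Request a ReAct chat engine; it uses the LLM and callbacks from the service_context
react_chat_engine = index.as_chat_engine(chat_mode="react", verbose=True)
response = react_chat_engine.chat("Summarize the documents")
print(response)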
----------------------
Note: you can ask me a follow-up question by @ mentioning me again
:speech_balloon:
----------------------