How can I obtain the total token usage (as with llm_predictor.last_token_usage) when using an AgentExecutor created by the create_llama_chat_agent function?
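One possible approach, sketched and not confirmed against any particular version: since the AgentExecutor returned by create_llama_chat_agent is a LangChain object, LangChain's get_openai_callback context manager can tally token usage for all OpenAI calls made inside it. The agent_executor name and the query string below are assumptions for illustration.

```python
# A minimal sketch, assuming an OpenAI-backed agent; `agent_executor` is
# presumed to be the AgentExecutor returned by create_llama_chat_agent.
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    # Every OpenAI call made inside this context is counted.
    response = agent_executor.run("What is the weather like today?")

print(cb.prompt_tokens)      # tokens sent in prompts
print(cb.completion_tokens)  # tokens generated by the model
print(cb.total_tokens)       # roughly analogous to last_token_usage
```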
@kapa.ai How can I pass llama_debug = LlamaDebugHandler(print_trace_on_end=True); callback_manager = CallbackManager([llama_debug]) to the create_llama_chat_agent function?
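A hedged sketch of the setup being asked about, assuming llama_index ~0.6-era import paths: the LlamaIndex CallbackManager is normally attached to the ServiceContext used to build the index, not passed to the agent directly (create_llama_chat_agent's own callback_manager parameter, where present, expects a LangChain callback manager, which is a different object). Traces then fire whenever a tool built from the index is invoked by the agent.

```python
# Sketch assuming llama_index ~0.6.x; the LlamaIndex CallbackManager is
# wired into the ServiceContext, not into create_llama_chat_agent itself.
from llama_index import ServiceContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.callbacks import CallbackManager, LlamaDebugHandler

llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])

service_context = ServiceContext.from_defaults(callback_manager=callback_manager)
documents = SimpleDirectoryReader("./data").load_data()  # "./data" is illustrative
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
# Any query engine or tool built from this index now emits debug traces,
# including when it is wrapped as a tool for create_llama_chat_agent.
```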
Which prompt template would be best for an option-picking task: "Given these options and this message, pick the option that best fits the intent of the message"? I think the default prompt template guides the LLM into trying to answer the question instead of just picking a category.
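One way to reframe this, sketched against the legacy llama_index QuestionAnswerPrompt API (names and wiring may differ in your version; the template text, the option_picker_prompt name, and the example query are assumptions): replace the default text_qa_template with one that explicitly instructs the model to classify rather than answer.

```python
# A minimal sketch, assuming the legacy QuestionAnswerPrompt API; the template
# reframes the task as picking a category instead of answering a question.
from llama_index import QuestionAnswerPrompt

OPTION_PICKER_TMPL = (
    "The options are listed below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the options and this message: {query_str}\n"
    "Pick the single option that best fits the intent of the message. "
    "Answer with the option name only; do not try to answer the message itself."
)
option_picker_prompt = QuestionAnswerPrompt(OPTION_PICKER_TMPL)

# Hypothetical wiring; `index` is assumed to already exist.
query_engine = index.as_query_engine(text_qa_template=option_picker_prompt)
response = query_engine.query("I want to cancel my subscription")
```

Constraining the output format ("answer with the option name only") tends to matter as much as the framing, since it keeps the model from drifting back into free-form question answering.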