TennisPrestigious624
Offline, last seen 2 months ago
Joined September 25, 2024
How can I obtain the total token usage (like llm_predictor.last_token_usage) when using an AgentExecutor created from the create_llama_chat_agent function?
2 comments
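
A plausible sketch, assuming a ~0.6-era llama_index where create_llama_chat_agent lives in llama_index.langchain_helpers.agents: keep a reference to the LLMPredictor used to build the index, then read its token counters after the agent runs. The "data" directory, tool name, and query below are illustrative placeholders.

```python
from langchain.chat_models import ChatOpenAI
from llama_index import GPTVectorStoreIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader
from llama_index.langchain_helpers.agents import (
    IndexToolConfig,
    LlamaToolkit,
    create_llama_chat_agent,
)

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
# Keep a handle on the predictor so its token counters stay readable later.
llm_predictor = LLMPredictor(llm=llm)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

documents = SimpleDirectoryReader("data").load_data()  # "data" is illustrative
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

toolkit = LlamaToolkit(
    index_configs=[
        IndexToolConfig(
            query_engine=index.as_query_engine(),
            name="docs",
            description="Useful for questions about the indexed documents.",
        )
    ]
)
agent_chain = create_llama_chat_agent(toolkit, llm, verbose=True)
agent_chain.run(input="What do the documents say about pricing?")

# These counters reflect LLM calls made through the index's query engine.
print(llm_predictor.last_token_usage)
print(llm_predictor.total_tokens_used)
```

Note the predictor only counts tokens spent inside the index tools; the agent's own reasoning calls go through LangChain, so wrapping the run in langchain.callbacks.get_openai_callback() would capture those separately.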
How can I obtain logs similar to service_context.llama_logger.get_logs() when using create_llama_chat_agent (AgentExecutor)?
9 comments
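
A sketch along the same lines, assuming the LlamaLogger is attached to the ServiceContext used to build the index, so the agent's tool calls write into it; toolkit and agent construction are as in the first sketch above.

```python
from llama_index import GPTVectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.logger import LlamaLogger

llama_logger = LlamaLogger()
service_context = ServiceContext.from_defaults(llama_logger=llama_logger)

documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

# ...build the LlamaToolkit and agent from this index as in the first sketch...
# agent_chain.run(input="...")

# The agent's tool queries run through the index's service context, so the
# accumulated logs remain readable in the usual way afterwards.
print(service_context.llama_logger.get_logs())
```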
When using create_llama_chat_agent, is there a way to specify a system message?
2 comments
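
There is no explicit system-message parameter on create_llama_chat_agent itself. One assumption-laden route: the function appears to forward keyword arguments to LangChain's initialize_agent, whose agent_kwargs dict reaches the conversational agent's prompt builder, so a custom prefix can act as the system message. The persona text is illustrative; toolkit and llm are from the first sketch.

```python
SYSTEM_PREFIX = (
    "You are a helpful assistant for ACME Corp. "  # illustrative persona
    "Always answer formally and cite the source document."
)

# Assumption: **kwargs are forwarded to LangChain's initialize_agent, and the
# conversational agent accepts a custom `prefix` at the start of its prompt.
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    agent_kwargs={"prefix": SYSTEM_PREFIX},
    verbose=True,
)
```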
Can you give an example using LlamaIndex that queries a single GPTVectorStoreIndex with gpt-3.5-turbo and uses memory?
13 comments
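
A self-contained sketch, assuming a ~0.6-era llama_index and an OPENAI_API_KEY in the environment; the "data" directory and queries are placeholders. The memory_key "chat_history" is what the conversational agent's prompt expects.

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from llama_index import GPTVectorStoreIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader
from llama_index.langchain_helpers.agents import (
    IndexToolConfig,
    LlamaToolkit,
    create_llama_chat_agent,
)

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(llm_predictor=LLMPredictor(llm=llm))

documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

toolkit = LlamaToolkit(
    index_configs=[
        IndexToolConfig(
            query_engine=index.as_query_engine(),
            name="docs",
            description="Answers questions about the indexed documents.",
        )
    ]
)

# Conversation history persists across .run() calls via the buffer memory.
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(toolkit, llm, memory=memory, verbose=True)

print(agent_chain.run(input="What topics do the documents cover?"))
print(agent_chain.run(input="Summarize the first one in two sentences."))
```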
@kapa.ai How can I pass a
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])
to the create_llama_chat_agent function?
5 comments
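
The llama_index CallbackManager probably does not go to create_llama_chat_agent directly (any callback_manager parameter on that function would belong to LangChain's separate callback system). A plausible sketch instead attaches it to the ServiceContext used to build the index, so the agent's tool queries get traced by the debug handler.

```python
from llama_index import GPTVectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.callbacks import CallbackManager, LlamaDebugHandler

llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])

# The callback manager rides along on the service context, so every query the
# agent makes against this index is traced by LlamaDebugHandler.
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

# ...build the toolkit and agent from this index as in the earlier sketches...
```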
@kapa.ai How can I define a StructuredOutputParser when using the create_llama_chat_agent function?
3 comments
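
create_llama_chat_agent does not obviously expose an output-parser hook. A common workaround is to apply LangChain's StructuredOutputParser around the agent call: embed the format instructions in the input and parse the string the agent returns. Schema names are illustrative; agent_chain is from the earlier sketches.

```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="The answer to the question."),
    ResponseSchema(name="source", description="The tool or document it came from."),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

# Ask the agent to follow the format, then parse its raw string reply.
question = "What do the documents say about pricing?"
raw = agent_chain.run(input=f"{question}\n\n{parser.get_format_instructions()}")
result = parser.parse(raw)  # -> {"answer": ..., "source": ...}
```

Since the conversational agent is free-form, it may not always obey the format instructions, so parse() can raise; retrying or falling back to the raw string is worth handling.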
Which prompt template would be best for an option-picking task: "Given these options and this message, pick the option that best fits the intent of the message"?
I think the default prompt template guides the LLM into trying to answer the question instead of just picking a category.
5 comments
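
One way to steer this in a ~0.6-era llama_index is a custom text_qa_template on the query engine. The template text below is illustrative, but {context_str} and {query_str} are the placeholders the QA prompt requires; index is from the earlier sketches.

```python
from llama_index import QuestionAnswerPrompt

# Classification-style template: the options live in the retrieved context,
# and the user's message arrives as the query.
PICK_OPTION_TMPL = (
    "Here is a list of options:\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the message below, reply with ONLY the option that best fits the "
    "intent of the message. Do not answer the message itself.\n"
    "Message: {query_str}\n"
    "Option:"
)

query_engine = index.as_query_engine(
    text_qa_template=QuestionAnswerPrompt(PICK_OPTION_TMPL),
)
print(query_engine.query("I'd like to cancel my subscription."))
```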