Updated 5 months ago

Callback

At a glance

A community member is having an issue with the callback manager in their Django app: they initialize it globally, but when they try to access it from a different file, it starts its own callback manager. They also ask whether they should create a token counter instance per API request, since concurrent requests might conflict. Another community member suggests attaching the callback manager directly to the language model and embedding model and passing those into the modules that use them. The original poster then asks how to access the token counter and llama debug handler later with that approach.

Question on the callback manager. Currently, when I start my Django app, I initialize it with Settings.callback_manager = callback_manager, where callback_manager just holds a LlamaDebugHandler and a TokenCountingHandler.

  1. How can I import it correctly? When I use import Settings and try to access Settings.callback_manager in a different file, it ignores what I set up globally and starts its own callback_manager.
  2. Am I supposed to be starting an instance of token_counter per API request? If 2 API requests happen at the same time, I assume they'll conflict, so... maybe I can't really use Settings.callback_manager globally? πŸ€”
4 comments
Try attaching it directly to your llm and embedding model, and passing those into the modules that are using them

For example

Plain Text
llm = OpenAI(..., callback_manager=callback_manager)
embed_model = OpenAIEmbedding(..., callback_manager=callback_manager)

index = VectorStoreIndex(..., embed_model=embed_model)

query_engine = index.as_query_engine(..., llm=llm)
Thanks, will try that.
Actually, if I do it like that, how do I access the token counter and llama debug later?
The same way you did before? πŸ‘€