coder.ve
Joined September 30, 2024
Hello, do LlamaIndex Workflows support modularization out of the box (splitting workflow steps across multiple files), or does it currently require a more manual approach? For example, defining a function that contains all the logic and then calling it with *args and **kwargs as parameters.
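One pattern that may work is to split steps into mixin classes, one per file, and compose them in a single workflow subclass. This is a plain-Python sketch, not confirmed against the Workflows API: the module names are hypothetical, and ordinary methods stand in for `@step`-decorated ones.

```python
# steps_ingest.py (hypothetical module) -- in a real project each mixin
# would live in its own file, with methods decorated with @step.
class IngestSteps:
    def ingest(self, data: str) -> list[str]:
        # Step 1: split raw input into tokens.
        return data.split()

# steps_summarize.py (hypothetical module)
class SummarizeSteps:
    def summarize(self, tokens: list[str]) -> str:
        # Step 2: produce a trivial "summary".
        return f"{len(tokens)} tokens"

# workflow.py -- compose the step mixins into one workflow class.
class MyWorkflow(IngestSteps, SummarizeSteps):
    def run(self, data: str) -> str:
        return self.summarize(self.ingest(data))

result = MyWorkflow().run("a b c")
```

Since steps are just decorated methods on the workflow class, mixins defined in separate files should compose the same way; the manual *args/**kwargs approach remains a fallback.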
6 comments
Hey guys, is it possible to force a workflow to stop? I'm working on a PoC where I need to let the user stop the current job, which I'm planning to implement using LlamaIndex workflows. Thanks in advance.

Update:
Never mind, I just found this: https://github.com/run-llama/llama_index/issues/16232 — it looks like this is not supported at the moment.
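Pending first-class support, one common workaround is to wrap the run in an `asyncio.Task` and cancel it on user request. A sketch, assuming the workflow is awaited (e.g. `await workflow.run(...)`, replaced here by a stand-in coroutine):

```python
import asyncio

async def long_running_job() -> str:
    # Stand-in for an awaited workflow run, e.g. `await workflow.run(...)`.
    await asyncio.sleep(60)
    return "done"

async def main() -> str:
    task = asyncio.create_task(long_running_job())
    await asyncio.sleep(0.01)  # the user clicks "stop" here
    task.cancel()              # cancellation lands at the next await point
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"
    return task.result()

status = asyncio.run(main())
```

Cancellation only takes effect at an await point, and any cleanup inside the workflow steps has to handle `CancelledError` itself.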
3 comments
Hi guys, I need to log token usage in a concurrent FastAPI app. I'm using both CallbackManager and TokenCountingHandler from llama_index.core.callbacks, but setting Settings.callback_manager is causing race conditions, since Settings is global state shared across the app. I can also see some classes saying the service context is deprecated, so Settings is the way to go now. Could someone please shed some light on how I can effectively log token counts in a concurrent app?
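The usual fix for "global state races across requests" is to scope the mutable object per request instead of per process. A minimal sketch using stdlib `contextvars`: the counter here is a plain stand-in for TokenCountingHandler, and the assumed LlamaIndex-side change is to build a fresh per-request CallbackManager and pass it to the objects you call rather than mutating Settings.

```python
import asyncio
from contextvars import ContextVar

# Each asyncio task (i.e. each FastAPI request) sees its own value,
# so concurrent requests cannot clobber each other's totals.
token_count: ContextVar[int] = ContextVar("token_count", default=0)

def record_tokens(n: int) -> None:
    # Stand-in for what a token-counting callback does per LLM call.
    token_count.set(token_count.get() + n)

async def handle_request(calls: list[int]) -> int:
    for n in calls:
        record_tokens(n)
        await asyncio.sleep(0)  # yield, interleaving with other requests
    return token_count.get()

async def main() -> list[int]:
    # Two "requests" running concurrently; totals stay independent
    # because each task runs in its own copy of the context.
    return await asyncio.gather(
        handle_request([10, 20]),
        handle_request([1, 2, 3]),
    )

totals = asyncio.run(main())
```

In LlamaIndex terms: construct `CallbackManager([TokenCountingHandler()])` inside the request handler and hand it to the components for that request, so nothing global is mutated.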
3 comments