coder.ve
Joined September 30, 2024
Hi everyone,

Is it possible to instantiate a SummaryIndex without setting the LLM in the Settings global class? I’m looking for something similar to how it’s done in VectorStoreIndex, where I can pass the embed_model as an argument.

When I use the Settings class, everything works as expected, but this approach isn’t an option for me due to concurrency issues. In my app, I use dependency injection to handle LLM instances.
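Roughly, what I'm after is something like this sketch (assuming a recent llama_index where as_query_engine accepts an llm argument; the OpenAI class is just an example, any LLM integration would do):

```python
from llama_index.core import Document, SummaryIndex
from llama_index.llms.openai import OpenAI  # example; any LLM integration works

# SummaryIndex itself doesn't need an LLM at build time,
# so no Settings mutation is required here.
index = SummaryIndex.from_documents([Document(text="some text")])

# Inject the LLM per query engine instead of globally,
# mirroring how VectorStoreIndex takes embed_model.
llm = OpenAI(model="gpt-4o-mini")
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query("Summarize the document.")
```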
2 comments
Hello, do LlamaIndex Workflows support modularization out of the box (splitting workflow steps across multiple files), or does it currently require a more manual approach, e.g. defining a function with all the logic and then calling it with *args and **kwargs?
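To make it concrete, the kind of split I mean is something like this plain-Python sketch (module and function names are hypothetical; the three "files" are shown in one block for brevity):

```python
# events.py -- shared event types live in their own module
from llama_index.core.workflow import Event

class RetrieveEvent(Event):
    query: str

# retrieve_logic.py -- the step body is a plain importable function
async def do_retrieve(query: str) -> str:
    return f"results for {query}"

# flow.py -- the Workflow class only wires steps to the imported logic
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step
# from events import RetrieveEvent
# from retrieve_logic import do_retrieve

class MyFlow(Workflow):
    @step
    async def start(self, ev: StartEvent) -> RetrieveEvent:
        return RetrieveEvent(query=ev.get("query"))

    @step
    async def retrieve(self, ev: RetrieveEvent) -> StopEvent:
        return StopEvent(result=await do_retrieve(ev.query))
```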
6 comments
Hey guys, is it possible to force a workflow to stop? I'm working on a PoC where I need to give the user the ability to stop the current job, which I'm planning to implement using LlamaIndex workflows. Thanks in advance.

Update:
Never mind, I just found this: https://github.com/run-llama/llama_index/issues/16232. It looks like this isn't supported at the moment.
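As a workaround I'm considering plain asyncio cancellation around the run, something like this untested sketch (not a LlamaIndex feature per the issue above; cancelling the awaiting task stops us waiting on the result, but steps already in flight may still finish their current work):

```python
import asyncio

# Wrap the run in a coroutine so it can live inside a cancellable Task,
# regardless of whether workflow.run() returns a coroutine or an awaitable handler.
async def run_cancellable(workflow, **kwargs):
    return await workflow.run(**kwargs)

# task = asyncio.create_task(run_cancellable(my_workflow, query="..."))
# ...later, from the user's stop action:
# task.cancel()
```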
3 comments
Hi guys, I need to log token usage in a concurrent FastAPI app. I'm using both CallbackManager and TokenCountingHandler from llama_index.core.callbacks, but setting Settings.callback_manager causes race conditions, since Settings is global state shared across the app. I can also see some classes saying the service context is deprecated, so Settings is supposedly the way to go now. Could someone shed some light on how to effectively log token counts in a concurrent app?
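What I'm considering instead is attaching a fresh handler per request via dependency injection, something like this sketch (endpoint and function names are hypothetical; it assumes the LLM class accepts a callback_manager argument, which avoids touching the global Settings entirely):

```python
from fastapi import Depends, FastAPI
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
from llama_index.llms.openai import OpenAI

app = FastAPI()

def llm_with_counter():
    # Fresh handler + manager per request: nothing global, so no races.
    counter = TokenCountingHandler()
    llm = OpenAI(
        model="gpt-4o-mini",
        callback_manager=CallbackManager([counter]),
    )
    return llm, counter

@app.get("/ask")
async def ask(q: str, dep=Depends(llm_with_counter)):
    llm, counter = dep
    answer = await llm.acomplete(q)
    return {
        "answer": str(answer),
        "prompt_tokens": counter.prompt_llm_token_count,
        "completion_tokens": counter.completion_llm_token_count,
    }
```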
3 comments