
TheHeroShep (@TheHeroShep) on X

Hey y'all 👋
Been a long-time fan, so we've started trying to get some support for LlamaIndex into ComfyUI.
Got to bridge the gap between LLMs and other genAI!!

https://x.com/TheHeroShep/status/1767652590127661357?s=20
Is there a link for users that they can check out?
For Settings["llm"] we have a complication of sorts.
Settings is a singleton, but we want to be able to use a variety of models as a group, with data potentially being passed back and forth between them.
We can code around it, but we figured you may want to review that possibility in the design of llama-index.
Basically, we're going to put whichever LLM we want llama-index to use into that slot and keep track of it externally.
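For reference, a minimal sketch of that slot, assuming a recent llama-index where it's exposed as the Settings.llm attribute (written Settings["llm"] above); the Ollama model is just an example stand-in, any LLM integration works:

```python
# Sketch: llama-index reads its default LLM from the global Settings singleton.
# Anything that isn't given an LLM explicitly falls back to Settings.llm.
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama  # example integration; needs llama-index-llms-ollama

# Put whichever model the current ComfyUI node wants into the global slot,
# while keeping track of which model that is externally.
Settings.llm = Ollama(model="llama2", request_timeout=120.0)
```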
Have you guys thought about how the global 'Settings' might be handled in a node-based system, @jerryjliu0?
We're thinking:
Between ComfyUI nodes, clear the Settings["llm"] value and only assign it within a node to let llama-index do its work. This ensures ComfyUI can switch between nodes that use different models (see the sketch after this list).

Within any ComfyUI node, Settings["llm"] can be set to allow llama-index to perform tasks.

Passed "llm" values between nodes need to be able to be used to pull up whatever model is applicable, ideally without forcing models to reload unnecessarily, but also being able to unload and reload. This problem space is going to need further elaboration down the line.

Currently the "llm" value passed between nodes in ComfyUI is a string, hoping to use it as an entry into a dict of loaded models, but if the context of the same model is meant to be different in different places there could be clash.
Especially facing Async or non-directed graphs it becomes increasingly complex.
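As a sketch of that string-keyed approach (the registry, get_llm, and unload_llm names are hypothetical, and the Ollama factory is just an example):

```python
# Sketch: map the "llm" string passed between ComfyUI nodes to a loaded
# llama-index LLM, loading lazily and allowing explicit unload/reload.
from llama_index.core.llms import LLM
from llama_index.llms.ollama import Ollama  # example integration

_loaded_models: dict[str, LLM] = {}

def get_llm(name: str) -> LLM:
    """Return the model registered under name, loading it on first use."""
    if name not in _loaded_models:
        # Example factory; in practice the node that produced the "llm" string
        # would decide which integration and parameters to use.
        _loaded_models[name] = Ollama(model=name, request_timeout=120.0)
    return _loaded_models[name]

def unload_llm(name: str) -> None:
    """Drop the model so memory can be reclaimed; it reloads on the next get_llm()."""
    _loaded_models.pop(name, None)
```

If the same model needs different context in different places, the key could be widened to something like (model name, context id), at the cost of potentially loading duplicates.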
But for now we're okay; I just wanted to flag it in case it helps.