Running a multimodal model (minicpm-v) in parallel with ollama

Hello, I am trying to run a multimodal model (minicpm-v) using ollama. I want to run this model in parallel, processing the same query over multiple images at the same time. Is this possible? I know ollama has some concurrency parameters for running multiple models, but I couldn't get it to work. I tried the "Parallel Execution of Same Event Example" cookbook workflow, but it failed with this error: `Error during frame analysis: Ollama does not support async completion.`
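For context, ollama's server-side request concurrency is controlled with environment variables set before starting the server. This is a minimal sketch based on ollama's documented settings; the values chosen here are illustrative, and availability depends on your ollama version:

```shell
# Allow up to 4 requests to the same loaded model to run in parallel
# (illustrative value; tune to your GPU memory).
export OLLAMA_NUM_PARALLEL=4

# Keep only one model resident so minicpm-v isn't evicted between requests.
export OLLAMA_MAX_LOADED_MODELS=1

ollama serve
```

Note that these settings only let the server accept concurrent requests; the client still has to issue requests concurrently (e.g. async or threads) to benefit.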
Hmmm, it looks like async hasn't been implemented yet for the multimodal Ollama class.
Hmm, I see. Are there any alternative approaches I could take to speed up the process without changing the model or device?
Or are there any models with an async implementation outside of ollama?
Probably the Ollama LLM class should just be updated to include async; that wouldn't be too crazy, I think.
It looks like it's already using the official ollama client, and I know they have an async client, so it would be a straightforward PR if you want to give it a shot ❤️
Yeah, exactly: you can do `from ollama import Client, AsyncClient`.
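A minimal sketch of the fan-out pattern the thread is pointing at: send one request per image and await them all with `asyncio.gather`. The `analyze` callable is a placeholder so the sketch runs anywhere; in real use it would wrap `AsyncClient().chat(...)` from the `ollama` package, as shown (untested) in the comment. The model name and message shape are assumptions based on ollama's API, not something verified here:

```python
import asyncio


async def fan_out(query, images, analyze):
    """Run analyze(query, image) concurrently for every image."""
    return await asyncio.gather(*(analyze(query, img) for img in images))


# In real use, analyze would look something like this (hypothetical, untested;
# requires `pip install ollama` and a running ollama server):
#
#   from ollama import AsyncClient
#
#   async def analyze(query, image_path):
#       resp = await AsyncClient().chat(
#           model="minicpm-v",
#           messages=[{"role": "user", "content": query,
#                      "images": [image_path]}],
#       )
#       return resp["message"]["content"]

if __name__ == "__main__":
    # Demo with a stub so the sketch runs without an ollama server.
    async def stub(query, image):
        await asyncio.sleep(0)  # stand-in for network latency
        return f"{query}:{image}"

    results = asyncio.run(fan_out("describe", ["a.png", "b.png"], stub))
    print(results)  # ['describe:a.png', 'describe:b.png']
```

The key point is that each `analyze` call is a coroutine, so while one request waits on the server the others can be in flight, which is exactly what the sync `Client` cannot do.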
I can try, though I'm not sure if I'm capable 😅