Hi guys, I am trying to use QueryFusionRetriever with MultiModalVectorStoreIndex. I am using Azure OpenAI for the embedding model and the LLM. However, I'm getting this error -
Attachment
Screenshot_2024-04-08_at_10.54.24_AM.png
8 comments
Try adding it to Settings once, or pass the llm object to your retriever directly:

Plain Text
from llama_index.core import Settings

# Set the LLM and embedding model globally
Settings.llm = llm
Settings.embed_model = embed_model


# Or pass the LLM to the retriever directly
retriever = QueryFusionRetriever(llm=llm, ...)
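Since the thread uses Azure OpenAI for both the LLM and embeddings, a minimal sketch of wiring those into Settings might look like the following. This assumes the llama-index-llms-azure-openai and llama-index-embeddings-azure-openai integration packages are installed; every deployment name, endpoint, and key below is a placeholder, not a value from this thread.

```python
# Sketch only: deployment names, endpoint, key, and API version
# are placeholders -- substitute your own Azure resource values.
from llama_index.core import Settings
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

llm = AzureOpenAI(
    engine="my-gpt4-deployment",  # placeholder deployment name
    api_key="...",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2024-02-15-preview",
)

embed_model = AzureOpenAIEmbedding(
    deployment_name="my-embedding-deployment",  # placeholder
    api_key="...",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2024-02-15-preview",
)

# Register both globally so retrievers/indices pick them up
Settings.llm = llm
Settings.embed_model = embed_model
```

This is a configuration fragment and needs live Azure credentials to actually run.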
I'm still getting this error -
Attachment
Screenshot_2024-04-08_at_3.06.28_PM.png
Even when trying Settings.llm = azure_openai_mm_llm, I get a similar error.
@WhiteFang_Jr any update?
@WhiteFang_Jr @Logan M ?
Idk man πŸ€·β€β™‚οΈ seems like a bug (and previously a todo)

https://github.com/run-llama/llama_index/blob/2f2d5a4735dd82f8acd2f630c745758e92202be2/llama-index-core/llama_index/core/multi_modal_llms/base.py#L73

Tbh all the multimodal stuff needs a huge overhaul -- so much code duplication
Should I raise an issue?
Probably, though I can't say how quickly I can get to it. I definitely welcome a PR.