
Global Settings

from llama_index.core import Settings
Settings.embed_model = "my embedding model"

I fixed it with the snippet above. Still, is there any other way? Please let me know.
Yes, this is one way of defining a global LLM and embedding model.
You can also pass the llm and embed model in directly:

index = VectorStoreIndex.from_documents(..., llm=llm, embed_model=embed_model)
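The precedence at work here, a per-call argument overriding the global `Settings` default, can be sketched in plain Python without llama_index installed (the names `_Settings` and `build_index` below are illustrative stand-ins, not the library's own API):

```python
class _Settings:
    """Minimal stand-in for a global settings singleton like llama_index's Settings."""
    embed_model = None
    llm = None

Settings = _Settings()

def build_index(documents, embed_model=None):
    # A per-call argument wins; otherwise fall back to the global default.
    model = embed_model if embed_model is not None else Settings.embed_model
    return f"index built with {model}"

Settings.embed_model = "global-embedder"
print(build_index(["doc"]))                                 # uses the global default
print(build_index(["doc"], embed_model="local-embedder"))   # per-call override wins
```

This is why setting `Settings.embed_model` once is enough for most code paths, while passing `embed_model=` to an individual index lets you deviate for that index only.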