I have a general question about how it works. When I started using LlamaIndex a few months ago, we had to pass a ServiceContext to VectorStoreIndex.from_documents. I would pass the same OpenAI model (let's say gpt-3.5-turbo) there as I would when querying.

My question is: is that model the same thing as the embedding model? The default embed model I'm seeing appears to be a different kind of model from gpt-3.5-turbo, gpt-4o, etc., so what do I need to know in terms of compatibility? Say I want to leverage gpt-4o: can I keep the default embed model and only pass gpt-4o into the query engine?

TIA!
With recent versions of LlamaIndex we've deprecated/removed ServiceContext.

You can 1) not specify the LLM or embedding model at all, 2) set the LLM/embedding models globally, or 3) pass the embedding model/LLM directly into the relevant modules.

So in your example, for instance, you can pass gpt-4o into the query engine.

e.g. check out these LLM customization docs: https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom/#example-changing-the-underlying-llm
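Roughly something like this (a minimal sketch of options 2 and 3; the model names, the "text-embedding-3-small" choice, and the ./data path are just placeholders, not something specific to your setup):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# Option 2: set the LLM and embedding model globally
# (this replaces the old ServiceContext)
Settings.llm = OpenAI(model="gpt-4o")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)  # embeds with Settings.embed_model

# Option 3: pass the LLM directly into the module that needs it,
# e.g. the query engine; the embedding model is only used for indexing/retrieval
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-4o"))
response = query_engine.query("What is this document about?")
print(response)
```

The LLM and the embedding model are independent choices, so a gpt-4o query engine on top of the default embed model is totally fine.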
Thank you @jerryjliu0! And wow, I was just watching the course on DeepLearning.AI; I feel like I'm talking with a celebrity, haha.
One other quick question: in that course you created a new index for each source. Is this usually the preferred way, over adding all documents to a single index? If you just wanted to ask a question of the data without specifying a source (as in the tutorial example), it seemed like that wouldn't be handled as well as with a single index.
Yeah, good question. I think the multi-document agent stuff is more exploratory: there are people using it, but it's more complicated to set up. I would start by adding documents to a single index and tagging them with the right metadata. You can try metadata filtering plus getting the LLM to auto-infer the metadata filters, which I believe is in lesson 2.
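Something along these lines (just a sketch; the "source" metadata key and the document names are made up for illustration):

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# One index for everything, with each document tagged by its source
docs = [
    Document(text="...", metadata={"source": "lesson_2_notes"}),
    Document(text="...", metadata={"source": "lesson_3_notes"}),
]
index = VectorStoreIndex.from_documents(docs)

# Ask across all sources at once
print(index.as_query_engine().query("Summarize the key ideas."))

# Or restrict retrieval to one source with a metadata filter
filters = MetadataFilters(filters=[ExactMatchFilter(key="source", value="lesson_2_notes")])
print(index.as_query_engine(filters=filters).query("What does this source cover?"))
```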

There's also a way to do multi-document agents where all the docs still live in the same index and you dynamically construct tools around a document subset through metadata filtering, but we don't have an active example of that yet.
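If you wanted to sketch it yourself, it could look roughly like this (an untested sketch, not an official pattern: the "source" key, the tool names, and the choice of OpenAIAgent are all assumptions):

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# All docs live in a single index, tagged with a "source" field as above
docs = [
    Document(text="...", metadata={"source": "lesson_2_notes"}),
    Document(text="...", metadata={"source": "lesson_3_notes"}),
]
index = VectorStoreIndex.from_documents(docs)


def tool_for_source(index, source_name):
    """Build a tool whose query engine only sees chunks from one document."""
    filters = MetadataFilters(filters=[ExactMatchFilter(key="source", value=source_name)])
    return QueryEngineTool(
        query_engine=index.as_query_engine(filters=filters),
        metadata=ToolMetadata(
            name=f"query_{source_name}",
            description=f"Answers questions about {source_name}",
        ),
    )


# Each tool is just a filtered view over the same index
tools = [tool_for_source(index, name) for name in ["lesson_2_notes", "lesson_3_notes"]]
agent = OpenAIAgent.from_tools(tools, verbose=True)
print(agent.chat("Compare what the two sources say about indexing."))
```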
Thanks a lot!