Hi everyone, I am confused about how to find out which embedding model and which LLM are being used when doing this:

index1 = VectorStoreIndex.from_documents(
    documents1, storage_context=storage_context1
)
query_engine1 = index1.as_query_engine()
If you didn't change the Settings object and didn't pass in an embed model or LLM, then it uses gpt-3.5-turbo for the LLM and text-embedding-ada-002 for the embedding model.
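One way to check directly is to inspect the resolved models on the Settings object; a minimal sketch, assuming a recent llama_index version where Settings exposes llm and embed_model:

Plain Text
from llama_index.core import Settings

# These resolve to the defaults above if nothing was configured explicitly
print(Settings.llm.metadata.model_name)   # e.g. "gpt-3.5-turbo"
print(Settings.embed_model.model_name)    # e.g. "text-embedding-ada-002"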
Where should I pass gpt-4 if I want to use it?
Either change the global settings:

Plain Text
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# Set the global default LLM; indexes and query engines built afterwards use it
Settings.llm = OpenAI(model="gpt-4")  # add other kwargs (temperature, etc.) as needed
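
Since the original question was about the embedding model as well, it can be set globally the same way; a minimal sketch, assuming the llama-index-embeddings-openai package is installed:

Plain Text
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding

# Global default embedding model, used for both indexing and querying
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")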


Or pass it into the query engine directly:

Plain Text
from llama_index.llms.openai import OpenAI

# Override the LLM for this query engine only
index.as_query_engine(llm=OpenAI(model="gpt-4"))
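An LLM passed this way takes precedence over Settings.llm for that query engine only; other indexes and engines keep using the global default.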
Thank you so much Logan!