Updated 6 months ago

Hello, I get this error when using an index created with text-embedding-3-large and gpt-4o as the LLM:
shapes (1536,) and (3072,) not aligned: 1536 (dim 0) != 3072 (dim 0)
How do I fix this?
12 comments
It means there is a mismatch between the embeddings stored in the index and the newly created query embeddings.
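The same error can be reproduced outside llama-index with plain NumPy (an illustrative sketch, not the library's internal code): similarity scoring takes a dot product between a stored index vector and the new query vector, which fails when their dimensions differ, e.g. 1536-dim stored vectors versus 3072-dim text-embedding-3-large query vectors.

```python
import numpy as np

# Stored index embeddings: 1536 dims (e.g. text-embedding-ada-002 / 3-small);
# new query embedding: 3072 dims (text-embedding-3-large).
stored = np.zeros(1536)
query = np.zeros(3072)

try:
    np.dot(stored, query)  # similarity scoring fails on mismatched dims
except ValueError as e:
    print(e)  # shapes (1536,) and (3072,) not aligned: 1536 (dim 0) != 3072 (dim 0)
```

The fix is to query with the same embedding model the index was built with, not to change the LLM.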

I would suggest you add the following at the top:
Plain Text
from llama_index.core import Settings

Settings.llm = your_llm_instance
Settings.embed_model = your_embed_model_instance

# then proceed further
But I create the embeddings in a different repository than where my apps run
It actually seems like I cannot use gpt-4o; I was passing model_name instead of model before:
[Attachment: image.png]
Which of these models do you recommend, and which is compatible with text-embedding-3-large?
You need to update your llama-index OpenAI package: pip install -U llama-index-llms-openai
Once done, just pass gpt-4o as the model name. No need to add a model version name.
Alright thanks, going to try that
Hmm I still get the shapes error. This is how I set up the model in my app:
storage_context = StorageContext.from_defaults(persist_dir=f"{product_code}_llama")
index = load_index_from_storage(storage_context)
llm = OpenAI(temperature=temperature, model=GPT_MODEL, max_tokens=num_outputs)
service_context = ServiceContext.from_defaults(llm=llm)
engine = index.as_chat_engine(
    chat_mode="context",
    verbose=True,
    service_context=service_context,
    temperature=temperature,
    system_prompt=prompt,
)
No need to add a service_context; just define Settings.

If you want to go with service_context, add the embedding model in there as well.
This is for Settings.
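Putting the pieces above together, a minimal Settings-based sketch might look like this. It assumes the index was built with text-embedding-3-large (3072-dim vectors) and reuses the asker's variables (product_code, temperature, num_outputs, prompt), which are defined elsewhere in their app:

```python
from llama_index.core import Settings, StorageContext, load_index_from_storage
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# Must match the embedding model used when the index was built,
# otherwise you get the "shapes not aligned" error.
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-large")
Settings.llm = OpenAI(model="gpt-4o", temperature=temperature, max_tokens=num_outputs)

storage_context = StorageContext.from_defaults(persist_dir=f"{product_code}_llama")
index = load_index_from_storage(storage_context)

# No service_context needed; the engine picks up the global Settings.
engine = index.as_chat_engine(
    chat_mode="context",
    verbose=True,
    system_prompt=prompt,
)
```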
Thank you very much, putting everything in the settings worked great!
The only problem I have now is that I want to create multiple engines, and for some of them I want a lower max_tokens or a different temperature. How can I tackle this?
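One approach (a hedged sketch, not confirmed in this thread): keep the shared embed model in Settings, but build a separate OpenAI LLM instance per engine and pass it directly to as_chat_engine via its llm parameter, which recent llama-index versions accept. The engine names and per-engine values below are hypothetical, and index/prompt come from the setup shown earlier:

```python
from llama_index.llms.openai import OpenAI

# Hypothetical per-engine LLM configurations; Settings.embed_model
# still supplies the (shared) embedding model for retrieval.
concise_llm = OpenAI(model="gpt-4o", temperature=0.0, max_tokens=256)
creative_llm = OpenAI(model="gpt-4o", temperature=0.9, max_tokens=1024)

# Each engine gets its own llm instead of relying on the global Settings.llm.
concise_engine = index.as_chat_engine(
    chat_mode="context", llm=concise_llm, system_prompt=prompt,
)
creative_engine = index.as_chat_engine(
    chat_mode="context", llm=creative_llm, system_prompt=prompt,
)
```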