Hey everyone, I'm working on a RAG project. Relevant system info: running a local LLM (Llama 2 13B-chat, GGUF), Python 3.11.5, RTX 3050 laptop, on Windows 11.

I'm feeding the LLM textual information stored in a CSV and I'm able to persist the indexes locally.
However, I'm getting an error when loading those indexes.

error: ValueError: shapes (384,) and (768,) not aligned: 384 (dim 0) != 768 (dim 0)
10 comments
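For context on the numbers in the error: 384 and 768 are embedding-vector dimensions. The index on disk was built with one embedding model, but a model with a different output dimension is being used when the index is loaded, so the stored vectors and the query vectors don't line up. A quick way to check what dimension a given embed_model produces (the model name below is just an example, not taken from the post):

from llama_index.embeddings import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
# Embed a dummy string and print the vector length (384 for this MiniLM model)
print(len(embed_model.get_text_embedding("test")))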
Try setting the service_context globally once and then run!

Plain Text
from llama_index import ServiceContext, set_global_service_context

# Define the service context; llm and embed_model must be the same
# models that were used when the index was built and persisted
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model
)

# Register it globally so every index load/query picks it up by default
set_global_service_context(service_context)
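For reference, a minimal sketch of how llm and embed_model could be built for this kind of setup (legacy llama_index API; the GGUF path and the embedding model name are assumptions, not from the original post). The key point is that embed_model must be the same model that produced the persisted index:

from llama_index.llms import LlamaCPP
from llama_index.embeddings import HuggingFaceEmbedding

# Local Llama 2 13B-chat in GGUF format via llama-cpp-python (hypothetical path)
llm = LlamaCPP(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf")

# 384-dimensional embedding model (assumed); if a different model is used here
# than at build time, you get exactly this kind of shape-mismatch error
embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")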
tried, no luck, same error
The filename is given by the user.
Right now, for testing, after the index is generated by get_query_engine(), I'm stopping everything and running get_query_engine_from_cache() to load that index.
Try setting the service context here also.
new_index = load_index_from_storage(storage_context, service_context=service_context)
We did set the service_context globally, so it should not be required tbh, but give it a try. I'll try to run your code at my end once.
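For completeness, a minimal sketch of the whole load path with both suggestions applied (the persist_dir value is a placeholder, since in the original code the location comes from the user-supplied filename):

from llama_index import (
    ServiceContext,
    StorageContext,
    load_index_from_storage,
    set_global_service_context,
)

# llm and embed_model as constructed earlier; embed_model must match the build-time model
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
set_global_service_context(service_context)

# Rebuild the storage context from the persisted directory (placeholder path)
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# Pass the service_context explicitly here as well, as suggested above
new_index = load_index_from_storage(storage_context, service_context=service_context)
query_engine = new_index.as_query_engine()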
thanks a lot @WhiteFang_Jr !!! love u bro