Vector Store

I am currently using this type of vector storage:
index = GPTVectorStoreIndex.from_documents(
    documents, service_context=service_context
)
index.storage_context.persist(persist_dir=index_name)

But when I upload files of about 40 MB, indexing takes a very long time and a response takes up to 2-3 minutes. Can Weaviate solve this problem?
2 comments
Yeah it will solve this problem πŸ’ͺ
How do I integrate Weaviate here? My brain is already boiling. The functions described in the LlamaIndex documentation just don't work, and the advice from kappa is very bad. I don't understand how to save embeddings. I want to keep the logic the same, so that only the vectors live in Weaviate:


import os

from llama_index import (
    GPTVectorStoreIndex,
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)
from llama_index.llms import OpenAI

# Build the LLM, passing max_tokens only when the business unit sets one
if business_unit.max_tokens:
    llm = OpenAI(model=business_unit.gpt_model, temperature=temperature,
                 max_tokens=business_unit.max_tokens)
else:
    llm = OpenAI(model=business_unit.gpt_model, temperature=temperature)

service_context = ServiceContext.from_defaults(
    llm=llm,
    system_prompt=business_unit.system_prompt,
    chunk_size=business_unit.chunk_size if business_unit.chunk_size else None,
    chunk_overlap=business_unit.chunk_overlap if business_unit.chunk_overlap else None,
)

# Reload a previously persisted index, or build and persist a new one
if os.path.exists(index_name):
    index = load_index_from_storage(
        StorageContext.from_defaults(persist_dir=index_name),
        service_context=service_context,
    )
else:
    documents = SimpleDirectoryReader(documents_folder).load_data()
    index = GPTVectorStoreIndex.from_documents(
        documents, service_context=service_context
    )
    index.storage_context.persist(persist_dir=index_name)

query_engine = index.as_query_engine(
    similarity_top_k=business_unit.similarity_top_k if business_unit.similarity_top_k else 1
)
response = query_engine.query(query_text)
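For reference, a minimal sketch of the Weaviate wiring, keeping the rest of the logic above unchanged. It assumes the same legacy LlamaIndex API as the snippet, a weaviate-client v3 connection, and reuses the imports and variables defined above (service_context, documents_folder); the URL, the BusinessUnitDocs class name, and the index_exists flag are placeholders:

import weaviate
from llama_index import GPTVectorStoreIndex, StorageContext
from llama_index.vector_stores import WeaviateVectorStore

# Connect to a running Weaviate instance (placeholder URL)
client = weaviate.Client("http://localhost:8080")

# Vectors are stored in this Weaviate class instead of in local JSON files
# ("BusinessUnitDocs" is a placeholder; Weaviate class names must be capitalized)
vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name="BusinessUnitDocs"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

if not index_exists:  # placeholder flag replacing the os.path.exists() check
    # First run: embed the documents and write the vectors into Weaviate
    documents = SimpleDirectoryReader(documents_folder).load_data()
    index = GPTVectorStoreIndex.from_documents(
        documents, storage_context=storage_context, service_context=service_context
    )
else:
    # Later runs: reconnect to the vectors already stored in Weaviate
    index = GPTVectorStoreIndex.from_vector_store(
        vector_store, service_context=service_context
    )

The key change is that persist() and load_index_from_storage() drop out: Weaviate holds the embeddings, so later runs reconnect with from_vector_store instead of reloading JSON from disk. The query_engine code at the end stays exactly as it is.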