Find answers from the community

MrMJ
Hi, when I use a storage_context with Supabase, 'index.docstore.docs.values()' comes back empty. When I try it without the storage_context, it works and outputs the nodes.

Plain Text
from llama_index import Document, ServiceContext, StorageContext, VectorStoreIndex
from llama_index.embeddings import OpenAIEmbedding
from llama_index.node_parser import SimpleNodeParser
from llama_index.vector_stores import SupabaseVectorStore

# Chunk documents into 250-token nodes and embed them with OpenAI embeddings.
node_parser = SimpleNodeParser.from_defaults(chunk_size=250)
embed_model = OpenAIEmbedding()
vector_store = SupabaseVectorStore(postgres_connection_string=DB_CONNECTION, collection_name='index_main', dimension=1536)
service_context = ServiceContext.from_defaults(embed_model=embed_model, node_parser=node_parser)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
document = Document(text="I like to eat apples. ")

index = VectorStoreIndex.from_documents([document], service_context=service_context, storage_context=storage_context)

print(index.docstore.docs.values())
# output => dict_values([])
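
For what it's worth, when an external vector store like Supabase holds the node text, LlamaIndex skips writing the nodes to the local docstore, so 'index.docstore.docs' stays empty by design. A minimal sketch, assuming the 'store_nodes_override' flag on VectorStoreIndex is available in your release, that also keeps copies in the docstore:

Plain Text
# Assumption: store_nodes_override=True keeps node copies in the local docstore
# even though the vector store already persists the text.
index = VectorStoreIndex.from_documents(
    [document],
    service_context=service_context,
    storage_context=storage_context,
    store_nodes_override=True,
)
print(list(index.docstore.docs.values()))  # nodes should now show up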
3 comments
MrMJ · Index

Hi, is there a way to merge two predefined indexes into one new index?
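
In case it helps, one approach (a sketch, not an official merge API) is to pull the nodes out of both docstores and build a fresh index from them; index_a and index_b stand in for the two existing indexes:

Plain Text
# Sketch: collect nodes from two existing indexes and rebuild a combined one.
# Assumes both indexes keep their nodes in the default in-memory docstore.
nodes = list(index_a.docstore.docs.values()) + list(index_b.docstore.docs.values())
merged_index = VectorStoreIndex(nodes, service_context=service_context)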
3 comments
Hi, I'm testing the Supabase vector store, but I have an issue with deleting nodes.

Plain Text
vector_store = SupabaseVectorStore(postgres_connection_string=DB_CONNECTION, collection_name='test', dimension=1536)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
index.delete_ref_doc(doc_id)

It deletes only 10 rows of vectors, not all of the rows associated with doc_id.
I'm using llama_index version 0.8.68.
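
A crude stopgap, assuming each call only removes a limited batch of matching rows, is to repeat the delete until the rows for that document are gone (purely a workaround sketch, not the intended usage):

Plain Text
# Workaround sketch: call delete repeatedly, since each call appears to
# remove only a limited batch of vectors for the given ref doc id.
for _ in range(20):  # hypothetical upper bound on retries
    index.delete_ref_doc(doc_id)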
3 comments
Hi! Is there a way to use 'query_engine_tools' and custom tools (e.g. 'add', 'multiply') simultaneously with the OpenAI agent? For example, how can I add my custom tools to the code below?

Plain Text
context_agent = ContextRetrieverOpenAIAgent.from_tools_and_retriever(
    query_engine_tools,
    context_index.as_retriever(similarity_top_k=1),
    verbose=True,
)

Thanks!
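
In case it's useful, the custom functions can apparently be wrapped as FunctionTool objects and passed in the same tools list as the query engine tools (a sketch; the 'add'/'multiply' helpers are just examples):

Plain Text
from llama_index.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Wrap the plain functions as tools and pass them alongside the query engine tools.
custom_tools = [FunctionTool.from_defaults(fn=add), FunctionTool.from_defaults(fn=multiply)]

context_agent = ContextRetrieverOpenAIAgent.from_tools_and_retriever(
    query_engine_tools + custom_tools,
    context_index.as_retriever(similarity_top_k=1),
    verbose=True,
)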
2 comments
MrMJ · Access

Hi! OpenAI just announced a newer version of GPT-4 ('gpt-4-1106-preview') that we can now use through the OpenAI API. Can we get quick access to it through llama-index?
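
In case it helps, once the installed openai and llama-index versions recognize the model name, it appears you can pass it straight to the OpenAI LLM wrapper (a sketch, assuming a recent enough release):

Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# Assumes a llama-index release whose OpenAI wrapper accepts this model name.
llm = OpenAI(model="gpt-4-1106-preview", temperature=0)
service_context = ServiceContext.from_defaults(llm=llm)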
14 comments
Hi, is there a way to efficiently save and load the 'storage_context' below?

storage_context = StorageContext.from_defaults()
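
For reference, the usual pattern appears to be persisting the storage context to a directory and re-creating it from that directory later (a sketch using persist, from_defaults, and load_index_from_storage):

Plain Text
from llama_index import StorageContext, load_index_from_storage

storage_context = StorageContext.from_defaults()
# ... build an index that populates this storage context ...

# Save everything (docstore, index store, vector store) to disk.
storage_context.persist(persist_dir="./storage")

# Later: rebuild the storage context from the same directory and reload the index.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)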
1 comment
Hi, I'm trying to save an additional index into the same persist directory, but it seems like the pre-existing index is replaced with the new index. Here's my code:

index.set_index_id("id1")
index.storage_context.persist(persist_dir="./test_storage")

index.set_index_id("id2")
index.storage_context.persist(persist_dir="./test_storage")


I can only load the second index ("id2"), but not the first index ("id1"); the first one is deleted. How can I save multiple indexes? Thanks!
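
It may be that each index carries its own storage context, so the second persist overwrites the first one's files. A sketch that builds both indexes against one shared StorageContext, persists once, and loads each back by index_id (docs1 and docs2 are placeholder document lists):

Plain Text
from llama_index import StorageContext, VectorStoreIndex, load_index_from_storage

# Build both indexes against the same storage context so they share one store.
storage_context = StorageContext.from_defaults()
index1 = VectorStoreIndex.from_documents(docs1, storage_context=storage_context)
index1.set_index_id("id1")
index2 = VectorStoreIndex.from_documents(docs2, storage_context=storage_context)
index2.set_index_id("id2")

storage_context.persist(persist_dir="./test_storage")

# Reload: the directory now holds both indexes, addressable by index_id.
storage_context = StorageContext.from_defaults(persist_dir="./test_storage")
index1 = load_index_from_storage(storage_context, index_id="id1")
index2 = load_index_from_storage(storage_context, index_id="id2")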
9 comments
MrMJ · Doc ids

Hi! I'm trying to use VectorIndexRetriever to retrieve text from the index. My index has several doc_ids, but when I pass specific doc_ids, no matter which ones I try, it doesn't filter and still retrieves from the entire index.
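
For context, one alternative to the doc_ids argument is restricting retrieval with metadata filters on the source document id (a sketch; the filter key, the placeholder id, and the example query are assumptions and may vary by vector store):

Plain Text
from llama_index.retrievers import VectorIndexRetriever
from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters

# Restrict retrieval to nodes whose metadata marks them as coming from one document.
filters = MetadataFilters(filters=[ExactMatchFilter(key="doc_id", value="my-doc-id")])
retriever = VectorIndexRetriever(index=index, similarity_top_k=3, filters=filters)
nodes = retriever.retrieve("What does this document say?")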
4 comments
Hi, happy Black Friday! Quick question about using SupabaseVectorStore for indexing: I've noticed that when I retrieve nodes, the node that appears to be the most relevant consistently has the lowest score. Does this mean that in SupabaseVectorStore, lower scores indicate higher relevance? Just trying to understand the scoring system better. Thanks!
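
If it helps, pgvector-backed stores typically return a distance rather than a similarity (for cosine, distance = 1 - similarity), so a smaller number can indeed mean a closer match; whether the score gets converted back to a similarity may depend on the store. A quick way to inspect what is actually returned:

Plain Text
# Print each retrieved node's score to see whether it behaves like a
# distance (lower = closer) or a similarity (higher = closer).
retriever = index.as_retriever(similarity_top_k=5)
for node_with_score in retriever.retrieve("apples"):
    print(node_with_score.score, node_with_score.node.get_content()[:60])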
4 comments
@kapa.ai How can I delete documents from the vector store index? I used 'index.delete_ref_doc()', but it didn't remove anything.
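
In case it's relevant, the id passed to delete_ref_doc has to be the ref_doc_id of the originally ingested document, and delete_from_docstore=True is needed to also purge the docstore (a sketch; 'my-ref-doc-id' is a placeholder):

Plain Text
# Inspect which ref_doc_ids the index actually knows about.
print(index.ref_doc_info)

# Delete by one of those ids, removing it from the docstore as well.
index.delete_ref_doc("my-ref-doc-id", delete_from_docstore=True)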
2 comments
Hi, I am currently configuring Elasticsearch (from llama_index.vector_stores import ElasticsearchStore) to function as my local database. During this process, I encountered an issue with how the embedding model is used when saving and loading the index locally.

To elaborate, I define a specific open-source embedding model when saving the index locally. However, upon loading this index, the system does not seem to retain the initially defined open-source embedding model. Instead, it automatically defaults to the OpenAI embedding model. This causes an issue because the length (and content) of the vectors differ between the saved index and my query.

Could you provide any insights or guidance on how to resolve this issue?
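
In case it's relevant, the embedding model isn't stored with the index, so it has to be supplied again at load time through the service context; a sketch, assuming a HuggingFace embedding model and a local Elasticsearch URL (both placeholders):

Plain Text
from llama_index import ServiceContext, StorageContext, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.vector_stores import ElasticsearchStore

# The same embedding model must be passed both when building and when loading;
# otherwise llama-index falls back to the default OpenAI embeddings.
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(embed_model=embed_model)

vector_store = ElasticsearchStore(index_name="my_index", es_url="http://localhost:9200")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the index with the open-source embeddings.
index = VectorStoreIndex.from_documents(documents, service_context=service_context, storage_context=storage_context)

# Later: re-create the index from the vector store, passing the same service context.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)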
4 comments