Nodes

Hello!
After updating LlamaIndex, I've run into a problem: get_nodes_from_documents no longer returns NodeWithEmbedding objects, just TextNodes. How can I create a VectorStoreIndex without them? Thanks

Plain Text
node_parser = SimpleNodeParser.from_defaults(chunk_size=chunk_size, chunk_overlap=20)
content_nodes = node_parser.get_nodes_from_documents([document])
index = VectorStoreIndex(content_nodes, storage_context=storage_context, service_context=service_context)
get_nodes_from_documents has always returned TextNode objects.

The code you have there should work fine, no?
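For context, recent versions of VectorStoreIndex compute embeddings themselves for nodes that arrive without them, which is why plain TextNodes are enough. A toy sketch of that idea (ToyTextNode, toy_embed, and ToyVectorStoreIndex are illustrative stand-ins, not LlamaIndex's actual classes):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ToyTextNode:
    # Stand-in for a TextNode: text plus an optional embedding.
    text: str
    embedding: Optional[List[float]] = None


def toy_embed(text: str) -> List[float]:
    # Placeholder "embedding model": just the text length as a 1-d vector.
    return [float(len(text))]


@dataclass
class ToyVectorStoreIndex:
    # Stand-in for a vector index that embeds nodes on insert if needed.
    nodes: List[ToyTextNode] = field(default_factory=list)

    def insert_nodes(self, nodes: List[ToyTextNode]) -> None:
        for node in nodes:
            if node.embedding is None:
                # The index fills in missing embeddings itself,
                # so callers can pass plain text nodes.
                node.embedding = toy_embed(node.text)
            self.nodes.append(node)


index = ToyVectorStoreIndex()
index.insert_nodes([ToyTextNode("hello"), ToyTextNode("world!")])
```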
It's not working because inside the store file, this is the code:
Plain Text
for result in embedding_results:
    # NOTE: keep text in metadata dict since there's no special field in
    #       Supabase Vector.
    metadata_dict = node_to_metadata_dict(
        result.node, remove_text=False, flat_metadata=self.flat_metadata
    )

    data.append((result.id, result.embedding, metadata_dict))
    ids.append(result.id)

and now the result is a TextNode, which doesn't have a .node attribute. (https://github.com/cbmchat/llama_index/blob/949bbb3e4d7bd6ff8a52ef0e5b393de2a34c1baf/vector_stores/supabase.py)
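For anyone maintaining a similar fork: the breaking change is that the vector store's add() now receives the nodes directly, with the id and embedding stored on the node itself, rather than NodeWithEmbedding wrappers exposing result.node / result.id / result.embedding. A minimal sketch of how the loop above could be adapted (NewStyleNode and build_upsert_records are illustrative stand-ins, and the metadata dict is a placeholder for node_to_metadata_dict):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class NewStyleNode:
    # Stand-in for the newer TextNode: id and embedding live on the
    # node itself instead of on a NodeWithEmbedding wrapper.
    node_id: str
    text: str
    embedding: List[float]


def build_upsert_records(nodes: List[NewStyleNode]) -> Tuple[list, list]:
    # Updated version of the forked loop: read id and embedding
    # straight off each node rather than result.node / result.id.
    data, ids = [], []
    for node in nodes:
        metadata_dict: Dict[str, str] = {"text": node.text}  # placeholder
        data.append((node.node_id, node.embedding, metadata_dict))
        ids.append(node.node_id)
    return data, ids


data, ids = build_upsert_records([NewStyleNode("n1", "hello", [0.1, 0.2])])
```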
Ahh I see you already updated the method
This is the old, forked one. So is mine
Yes, it should be fine in newer versions of LlamaIndex 👌
Unfortunately, I have to use my own version because the default one doesn't work for me. I have to make this call with an additional skip_adapter parameter, or else it won't work when the list of nodes is big.
Just curious, what's this issue you had to fix yourself?
Do you mean here, with this skip_adapter parameter?
I found that when the list of nodes is big, the process hangs there and never comes back. It only works if I use
Plain Text
 self._collection.upsert(records=data, skip_adapter=True)
it's working. I have no idea why.
There's no way to debug it, because the process stops somewhere inside the binaries.
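For what it's worth, one common workaround when an upsert hangs on large inputs is to send the records in fixed-size batches instead of one giant call. A generic sketch, not tied to the actual client API (upsert_in_batches, the batch size, and the upsert callable are all placeholders):

```python
from typing import Callable, List, Sequence, Tuple


def upsert_in_batches(
    records: Sequence[Tuple],
    upsert: Callable[[List[Tuple]], None],
    batch_size: int = 100,
) -> int:
    # Split one big upsert into several smaller ones; many client
    # libraries behave better with bounded request sizes.
    total = 0
    for start in range(0, len(records), batch_size):
        batch = list(records[start : start + batch_size])
        upsert(batch)
        total += len(batch)
    return total


# Usage with a stand-in upsert function that just collects batches:
seen: list = []
count = upsert_in_batches(
    [(i, [0.0], {}) for i in range(250)], seen.append, batch_size=100
)
```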
Huh, well that's fun haha
Well, I wouldn't call it fun, really. It was very hard to find and fix 😦 and I felt a lot of frustration
And now it's not good anyway, because with every change in LlamaIndex, boom: nothing works.
It should be as simple as updating your fork and adding your fix back
Yeah, this is exactly what I did. The problem is it took some time to remember what exactly I changed, since it was 5 months ago. But apparently that's not your problem 😆