ndaugreal

Hi, I am trying to run this notebook:

https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/multi_doc_auto_retrieval/multi_doc_auto_retrieval.ipynb

It gives me a TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' when I try to replace the Weaviate vector store with the local vector store.

What I did was replace this:
Plain Text
vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name=class_name
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name=doc_class_name
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

doc_index = VectorStoreIndex.from_documents(
    docs, storage_context=storage_context
)


With the following:

Plain Text
index = VectorStoreIndex.from_documents(new_docs)

doc_index = VectorStoreIndex.from_documents(docs)
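
For comparison, a like-for-like local version of that setup would keep the explicit vector store and storage context, just swapping Weaviate for the default in-memory SimpleVectorStore. This is a minimal sketch rather than code from the notebook; new_docs and docs are the variables from the cells above, and it assumes an embedding model is already configured (e.g. an OpenAI key):

Plain Text
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.core.vector_stores import SimpleVectorStore

# chunk-level index backed by an in-memory store
vector_store = SimpleVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    new_docs, storage_context=storage_context
)

# document-level index backed by a separate in-memory store
doc_vector_store = SimpleVectorStore()
doc_storage_context = StorageContext.from_defaults(
    vector_store=doc_vector_store
)
doc_index = VectorStoreIndex.from_documents(
    docs, storage_context=doc_storage_context
)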
4 comments
ndaugreal · Nodes

Hi, I was trying to run this notebook: https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/multi_doc_auto_retrieval/multi_doc_auto_retrieval.ipynb. It used to work, but with the latest version I get:

Plain Text
TypeError: Object of type VectorIndexRetriever is not JSON serializable

During handling of the above exception, another exception occurred:
...
ValueError: IndexNode obj is not serializable: <llama_index.core.indices.vector_store.retrievers.retriever.VectorIndexRetriever object at 0x16b443550>


for this cell:

Plain Text
# Since "index_nodes" are concise summaries, we can directly feed them as objects into VectorStoreIndex
index = VectorStoreIndex(
    objects=index_nodes, storage_context=storage_context_auto
)


Is this a bug?
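
For reference on what is being serialized here: each entry in index_nodes wraps a live retriever in the node's obj field, roughly as in the sketch below (doc_summary, doc_id, and doc_retriever are placeholder names, not the notebook's). Anything that tries to JSON-dump such a node, for example a docstore persisting it, would hit exactly this error, since a VectorIndexRetriever is an arbitrary Python object:

Plain Text
from llama_index.core.schema import IndexNode

# an IndexNode pairing a concise text summary with a live object;
# `obj` holds an arbitrary Python value and is not JSON-serializable
index_node = IndexNode(
    text=doc_summary,      # placeholder: the document's summary text
    index_id=doc_id,       # placeholder: id linking back to the document
    obj=doc_retriever,     # placeholder: a per-document VectorIndexRetriever
)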
2 comments
Hi, if we have a list of md files in a directory, and we want to retrieve a selected few of those files (based on a query) and supply those files as context for a prompt, what is the best way to achieve this in LlamaIndex? (If we do a normal simple_directory_reader.load_data and VectorStoreIndex.from_documents, we retrieve Node chunks, not the entire file. Do we have to manually put each file into a Node, or are there better ways?) Thanks
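
One route that needs no custom Node handling, sketched below under a few assumptions ("data" as the directory name, top-5 retrieval, and files small enough to fit the prompt): retrieve chunks as usual, then map each hit back to its source file through the file_path metadata that SimpleDirectoryReader attaches, and load those files whole as context.

Plain Text
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# one Document per .md file; file_path is recorded in each node's metadata
docs = SimpleDirectoryReader("data", required_exts=[".md"]).load_data()
index = VectorStoreIndex.from_documents(docs)
retriever = index.as_retriever(similarity_top_k=5)

# retrieve chunks, then deduplicate back to whole source files
hits = retriever.retrieve("my query")
file_paths = {hit.node.metadata["file_path"] for hit in hits}

# supply the selected files, in full, as prompt context
context = "\n\n".join(open(path).read() for path in file_paths)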
12 comments