From this llmsherpa example (https://github.com/nlmatics/llmsherpa), I am trying to run this snippet:
from llama_index.core import Document
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex(embed_model=embed_model)
for chunk in doc.chunks():
    index.insert(Document(text=chunk.to_context_text(), extra_info={}))
query_engine = index.as_query_engine()
# Let's run one query
response = query_engine.query("Tell me about Europe.")
print(response)
and I get this error:
ValueError Traceback (most recent call last)
Cell In[18], line 4
      1 from llama_index.core import Document
      2 from llama_index.core import VectorStoreIndex
----> 4 index = VectorStoreIndex(embed_model=embed_model)
      5 for chunk in doc.chunks():
      6     index.insert(Document(text=chunk.to_context_text(), extra_info={}))
...
---> 59 raise ValueError("One of nodes, objects, or index_struct must be provided.")
60 if index_struct is not None and nodes is not None:
61 raise ValueError("Only one of nodes or index_struct can be provided.")
ValueError: One of nodes, objects, or index_struct must be provided.
Is this something that has changed in the newest llama-index version? Any ideas on how I can fix it?