Find answers from the community

Updated last month

1) Updated SimpleNodeParser to SentenceSplitter. Is that okay?
2) Reader/Tool not found: SimpleCSVReader - what do I need to do to fix this?
3) Ran without installing these libraries and got errors so I guess I need to install them separately?
8 comments
yea, SimpleNodeParser was technically removed, but aliased to SentenceSplitter

SimpleCSVReader does not exist in the codebase (you must be coming from an ancient version? lol). Try from llama_index.readers.file import CSVReader maybe?

Yes, the cli-upgrade tool will print the libraries you should install
Thanks Logan! I was using download_loader() with SimpleCSVReader and ImageReader. Does that mean I can't use download_loader() anymore and I need to replace them with 0.10 imports (and maybe need to install integrations)?
download loader should still work (Although I think SimpleCSVReader is no longer a thing?)

The preferred thing to do is install and import them. A bit safer than installing things on the fly anyways 😉

https://llamahub.ai has all the integrations/packages/imports
in this case
pip install llama-index-readers-file (included in pip install llama-index though)

from llama_index.readers.file import ImageReader, CSVReader
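For context, here is a rough sketch of using CSVReader with that 0.10-style import. The file path and sample rows are made up for illustration, and the llama-index call is guarded so the sketch still runs if the reader package isn't installed yet:

```python
import csv
import tempfile
from pathlib import Path

# Made-up sample CSV just for this sketch.
tmp = Path(tempfile.mkdtemp()) / "example.csv"
with tmp.open("w", newline="") as f:
    csv.writer(f).writerows([["name", "role"], ["Ada", "engineer"]])

# 0.10-style import; guarded in case llama-index-readers-file
# isn't installed yet (pip install llama-index-readers-file).
try:
    from llama_index.readers.file import CSVReader

    docs = CSVReader().load_data(file=tmp)  # list of Document objects
except ImportError:
    docs = None
```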
Sounds good! Thanks a lot, @Logan M boss! πŸ˜„
@kapa.ai What's the fastest index to generate, for a given set of documents and chunk size?
Let me check the knowledge base for you. One moment! :books:
----------------------
Hmm, I don't know enough to give you a confident answer yet. The speed of generating an index can depend on various factors such as the size and complexity of the documents, the chunk size, the specific hardware and software environment, and the specific implementation of the indexing algorithm.

However, the LlamaIndex documentation provides some guidance on how to create and optimize indexes. For example, you can customize the chunk size and chunk overlap when creating an index, which can potentially affect the speed of index generation. Here is an example of how to create a VectorStoreIndex with a custom chunk size:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core import Settings

documents = SimpleDirectoryReader("./data").load_data()

# Global defaults used when splitting documents into chunks
Settings.chunk_size = 512
Settings.chunk_overlap = 50

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(similarity_top_k=4)

(Source: LlamaIndex Documentation)

For more detailed information on optimizing LlamaIndex, you may want to refer to the LlamaIndex documentation.
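For intuition, chunk_size and chunk_overlap in the example above amount to fixed-size windowing with overlap. A minimal stdlib sketch of that idea (not LlamaIndex's actual SentenceSplitter, which also respects sentence boundaries and real tokenizers; "size" here is just a word count):

```python
def chunk_words(text, chunk_size=512, chunk_overlap=50):
    """Split text into overlapping windows of words.

    chunk_overlap must be smaller than chunk_size, otherwise the
    window would never advance.
    """
    words = text.split()
    step = chunk_size - chunk_overlap  # how far each window advances
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already covers the end of the text
    return chunks
```

With chunk_size=4 and chunk_overlap=2, the text "a b c d e f g" yields three windows, each sharing two words with the previous one.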
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------