Is there a limit on the number of documents and the amount of text we can feed LlamaIndex? If I give llama_index the whole text of Victor Hugo's Les Misérables (513,000 words), will it be able to digest it properly?
If you just want to do general QA, your best bet is a vector index. Set `chunk_size_limit` to ~1024 in the service_context and prompt_helper.
Then at query time you can do something like `index.query("my query", similarity_top_k=3, response_mode="compact")`; that should work well.
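Putting that together, here's a minimal sketch assuming the pre-0.6 llama_index API this answer uses (where indexes expose `.query()` directly); the `data/` folder name and the query string are placeholders:

```python
from llama_index import GPTSimpleVectorIndex, ServiceContext, SimpleDirectoryReader

# Load the raw text (e.g. a .txt of Les Misérables dropped into ./data)
documents = SimpleDirectoryReader("data").load_data()

# Cap each chunk at ~1024 tokens; from_defaults propagates this
# to the prompt helper as well
service_context = ServiceContext.from_defaults(chunk_size_limit=1024)

# Build the vector index: each ~1024-token chunk gets its own embedding
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)

# Retrieve the 3 most similar chunks and answer from them
response = index.query(
    "Who is Jean Valjean?", similarity_top_k=3, response_mode="compact"
)
print(response)

# Persisting writes every embedding vector to disk, so this file gets large
index.save_to_disk("index.json")
```

`compact` packs as many retrieved chunks as fit into each LLM call, so you pay for fewer completions than with the default one-call-per-chunk refine mode.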
One last consideration: this will make the index.json quite large, since every embedding vector is held in memory and also saved to disk (each vector is 1536 floats per node with OpenAI's default ada-002 embeddings). You might want to consider a vector store integration like Weaviate or Qdrant, especially if your computer only has like 8GB of RAM lol. If your computer is a powerhouse, then it's totally fine.
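If you do go the external vector store route, here's a rough sketch with Qdrant, assuming a llama_index version around 0.5 where `GPTQdrantIndex` is available (the host, port, and collection name are placeholders, and the exact constructor kwargs may differ between releases):

```python
import qdrant_client
from llama_index import GPTQdrantIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()

# Embeddings live in Qdrant instead of in-process memory / index.json
client = qdrant_client.QdrantClient(host="localhost", port=6333)
index = GPTQdrantIndex.from_documents(
    documents, client=client, collection_name="les_miserables"
)

response = index.query(
    "Who is Jean Valjean?", similarity_top_k=3, response_mode="compact"
)
print(response)
```

If I recall correctly, the same versions shipped an analogous Weaviate integration (`GPTWeaviateIndex`) if you prefer that store.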