I am trying to use LlamaParse in an IngestionPipeline, but I can't get the pipeline to return both Nodes and IndexNodes. How can I do that? Thanks!
Hi! I am on the latest LlamaIndex version, and when running one of the examples, https://docs.llamaindex.ai/en/stable/examples/retrievers/composable_retrievers.html, I get an error: ValueError: IndexNode obj is not serializable: <llama_index.core.indices.vector_store.retrievers.retriever.VectorIndexRetriever object at ... . My vector_index is loaded from a WeaviateVectorStore. Can you please help? Thanks!
Ha! It seems kapa.ai is really helpful. The problem seems to come from the fact that Settings.context_window is initially set to 4096 (maybe because of GPT-3.5). After setting it manually, the issue is gone. Thanks @kapa.ai ! 🙂