The community member is asking whether, when following the LlamaIndex and LlamaParse guides, their data is accessible only to OpenAI and Anthropic, or whether it is also shared with LlamaIndex/LlamaParse. The comments suggest that the data is shared with LlamaParse/LlamaIndex as well, even though one of the guides does not use the Llama API directly. There is a discussion of the benefits of LlamaParse versus the other guide, and of how the LlamaParse index works, with the index nodes containing both the table summary and the table content. The community member also asks about getting more in-depth support for complex use cases, and the response suggests asking on Discord or reading the source code.
Please accelerate the TOS drafting if possible. I have a client (a law firm) that needs to analyze a set of documents containing confidential data, and I'd like to review the TOS before proceeding.
The nodes list basically contains raw text (TextNodes without a parent), table definitions (IndexNodes), and table content (TextNodes with an IndexNode as parent). When calling `base_nodes, objects = node_parser.get_nodes_and_objects(nodes)`, we effectively discard the table content, since we keep only the TextNodes without parents (base_nodes) and the IndexNodes (objects). So how is the query engine able to answer questions about the table content when it does not seem to have access to it?
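The partition described above can be sketched with stand-in classes. This is a hypothetical illustration, not the real LlamaIndex types: the `parent` and `content` attributes here are invented names standing in for the parent-node relationship and the reference an IndexNode keeps to its underlying table. The sketch shows that dropping the table-content TextNodes from the list does not necessarily make the content unreachable, because the IndexNode can still hold a reference to it:

```python
from dataclasses import dataclass, field
from typing import Optional

# Stand-in node types (hypothetical, NOT the actual LlamaIndex classes)
@dataclass
class TextNode:
    text: str
    parent: Optional["IndexNode"] = None  # set for table-content nodes

@dataclass
class IndexNode:
    summary: str                            # table summary, used for retrieval
    content: Optional[TextNode] = None      # reference to the table content

def get_nodes_and_objects(nodes):
    """Partition as described in the question: keep parentless TextNodes
    (base_nodes) and IndexNodes (objects); drop TextNodes whose parent
    is an IndexNode (the table content)."""
    base_nodes = [n for n in nodes if isinstance(n, TextNode) and n.parent is None]
    objects = [n for n in nodes if isinstance(n, IndexNode)]
    return base_nodes, objects

# Build the three kinds of nodes the question describes
table_text = TextNode("| a | b |\n| 1 | 2 |")
table_ref = IndexNode(summary="Table relating a to b", content=table_text)
table_text.parent = table_ref
nodes = [TextNode("Intro paragraph"), table_ref, table_text]

base_nodes, objects = get_nodes_and_objects(nodes)
# The table-content TextNode is gone from base_nodes, but the IndexNode
# in objects still points at it through its reference.
```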
I don't understand why the index built as VectorStoreIndex(nodes=base_nodes + objects) is called recursive, while the one built with VectorStoreIndex.from_documents(documents) is called raw (hence not recursive). They are both of the same type, so what makes the first one recursive?
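One way to picture the distinction is that "recursive" describes what happens at query time rather than the index class itself. The following is a minimal hypothetical sketch, not the library's implementation: in the "recursive" case a retrieved reference node is resolved to the content it points at, while in the flat ("raw") case the retrieved node is final:

```python
# Hypothetical sketch of flat ("raw") vs. recursive retrieval.

class TextNode:
    def __init__(self, text):
        self.text = text

class IndexNode(TextNode):
    """A node whose text (e.g. a table summary) is matched during
    retrieval, but which points at richer underlying content."""
    def __init__(self, text, target):
        super().__init__(text)
        self.target = target  # node the reference resolves to

def retrieve_flat(nodes, keyword):
    # "Raw" index: matched nodes are returned as-is
    return [n.text for n in nodes if keyword in n.text]

def retrieve_recursive(nodes, keyword):
    # "Recursive" index: a matched IndexNode is followed to its target
    results = []
    for n in nodes:
        if keyword in n.text:
            results.append(n.target.text if isinstance(n, IndexNode) else n.text)
    return results

table = TextNode("| year | revenue |\n| 2023 | 10M |")
nodes = [TextNode("Some prose about revenue trends"),
         IndexNode("Summary: revenue by year", target=table)]

flat = retrieve_flat(nodes, "revenue")          # returns the summary text
rec = retrieve_recursive(nodes, "revenue")      # returns the full table
```

Under this reading, both objects really are plain VectorStoreIndex instances; the recursive behavior comes from the IndexNodes among the stored nodes, which trigger a second resolution step when they are retrieved.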
Amazing, thanks @Logan M. Given the fast evolution of the library and the variety of integrations and use cases it addresses, navigating LlamaIndex is not easy despite all the example notebooks, even for a seasoned developer. Is there any way to pay for more in-depth support to address complex use cases and make sure we make the right choices?