- Customizing Storage: LlamaIndex supports a swappable storage layer that lets you customize where ingested documents (i.e., `Node` objects), embedding vectors, and index metadata are stored. Persisting these to disk or an external store means an index can be reloaded later without re-ingesting and re-embedding your documents, which can speed up loading.
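As a minimal sketch of that storage layer, assuming a pre-0.10 `llama_index` install (newer releases import from `llama_index.core` instead) and an illustrative `./storage` directory, you could persist an index to disk and reload it later:

```python
from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

# Build the index once, then persist its document store, vector store,
# and index store to ./storage (directory name is illustrative)
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")

# Later: rebuild the index from disk without re-ingesting or re-embedding
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```

The same `StorageContext` hooks are where you would plug in a custom document store or an external vector database instead of the on-disk defaults.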
Here's an example of how you can use LlamaIndex to query your data:
```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load every document found in the ./data directory
documents = SimpleDirectoryReader('data').load_data()

# Build an in-memory vector index and query it
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the documents.")
print(response)
```
Remember to replace `'data'` with the path to your documents and `"Summarize the documents."` with your query.