Ingesting and Querying Large Document Collections with LlamaIndex

At a glance

The community member is looking for a way to pass a large collection of documents to LlamaIndex so they can ask their large language models (LLMs) questions about them. The comments suggest starting with the LlamaIndex documentation, but the community member runs into issues installing and using the library, and shares a sample Python script that fails with an import error. The community members discuss potential solutions, such as switching to a different Python version and correcting the imports, but there is no explicitly marked answer.

I'm trying to find a way to pass a large number of documents to LlamaIndex, so I can ask my LLMs to answer questions about them
Yes, you are right
hey! thanks for creating this thread
I've installed it following the instructions, but I'm getting an error so far:
Plain Text
`
(llama_env) meaning@Ottis llamaindex_project % pip list | grep llama-index
llama-index                             0.12.2
llama-index-agent-openai                0.4.0
llama-index-cli                         0.4.0
llama-index-core                        0.12.2
llama-index-embeddings-openai           0.3.1
llama-index-indices-managed-llama-cloud 0.6.3
llama-index-legacy                      0.9.48.post4
llama-index-llms-openai                 0.3.2
llama-index-multi-modal-llms-openai     0.3.0
llama-index-program-openai              0.3.1
llama-index-question-gen-openai         0.3.0
llama-index-readers-file                0.4.0
llama-index-readers-llama-parse         0.4.0
(llama_env) meaning@Ottis llamaindex_project % nano query_documents.py
(llama_env) meaning@Ottis llamaindex_project % python query_documents.py
Traceback (most recent call last):
  File "/Users/meaning/llamaindex_project/query_documents.py", line 2, in <module>
    from llama_index import SimpleDirectoryReader, VectorStoreIndex
ImportError: cannot import name 'SimpleDirectoryReader' from 'llama_index' (unknown location)
`
I've created a short Python script:
Plain Text
`
import requests
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama

# Configure Ollama as the backend LLM
ollama = Ollama(
    model="llama3:8b-instruct-q6_K",  # Specify the model
    base_url="http://127.0.0.1:11434"  # Local URL for Ollama
)

# Load the documents
documents_path = "/Users/meaning/Downloads/PES"
documents = SimpleDirectoryReader(documents_path).load_data()

# Create an index with Ollama
index = VectorStoreIndex.from_documents(documents, llm=ollama)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("What are the key points mentioned in my documents?")
print(f"Ollama response: {response}")
`
but the output is:
Plain Text
`
(llama_env) meaning@Ottis llamaindex_project % python query_documents.py
Traceback (most recent call last):
  File "/Users/meaning/llamaindex_project/query_documents.py", line 2, in <module>
    from llama_index import SimpleDirectoryReader, VectorStoreIndex
ImportError: cannot import name 'SimpleDirectoryReader' from 'llama_index' (unknown location)
`
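The "unknown location" in that traceback is the clue: in llama-index 0.10 and later, the top-level llama_index package is only a namespace package, and classes like SimpleDirectoryReader moved into llama_index.core. A quick probe, assuming the same environment as above, shows where the symbol actually lives:
Python
`
import importlib

# The top-level llama_index is a bare namespace package in 0.10+,
# which is why the traceback reports "unknown location"; the reader
# class is exported from llama_index.core instead.
for name in ("llama_index", "llama_index.core"):
    module = importlib.import_module(name)
    print(name, "has SimpleDirectoryReader:", hasattr(module, "SimpleDirectoryReader"))
`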
is there no project that actually lets me install LlamaIndex inside a container, get a nice web UI to link my files, and connect it to my running Ollama?
that would be fantastic
ok, it looks like it didn't like Python 3.13... it works better with 3.11
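If the interpreter version is part of the problem, a small guard at the top of the script can fail fast instead of surfacing as a confusing error later; the 3.13 cutoff below reflects the observation in this thread, not an official compatibility statement:
Python
`
import sys

# This thread reports trouble under Python 3.13 and success with 3.11,
# so refuse to run on anything newer than 3.12.
if sys.version_info >= (3, 13):
    raise SystemExit(
        f"Python {sys.version.split()[0]} detected; this setup was only verified on 3.11"
    )
`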
The imports are not correct:
Python
`
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
`
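Putting the two fixes together, here is a minimal sketch of the corrected script. It assumes the llama-index-llms-ollama and llama-index-embeddings-ollama packages are installed (neither shows up in the pip list above) and that an embedding model such as nomic-embed-text has been pulled into Ollama; the embedding model name and the timeout are assumptions, not details from the thread:
Python
`
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# Route both LLM calls and embeddings through the local Ollama server.
Settings.llm = Ollama(
    model="llama3:8b-instruct-q6_K",
    base_url="http://127.0.0.1:11434",
    request_timeout=120.0,  # local models can take a while to respond
)
Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",  # assumed; pull it first with `ollama pull nomic-embed-text`
    base_url="http://127.0.0.1:11434",
)

# Load the documents and build the index
documents = SimpleDirectoryReader("/Users/meaning/Downloads/PES").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("What are the key points mentioned in my documents?")
print(f"Ollama response: {response}")
`
Setting Settings.embed_model matters here: without it, VectorStoreIndex.from_documents falls back to OpenAI embeddings and would ask for an OPENAI_API_KEY even though the LLM itself is local.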