
Updated 9 months ago

TypeError

People, I am following the llama-index documentation to the word, copy/pasting the code:
Plain Text
documents = SimpleDirectoryReader('Knowlege').load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine('what are these documents about?')
print(response)

but I am getting this error: TypeError: 'RetrieverQueryEngine' object is not callable
How do I fix this error?
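(Editor's note: the error means the engine object defines no `__call__`, so it has to be invoked through its `.query()` method instead of like a function. A minimal stand-in class, not the real llama-index engine, illustrating the difference:)

```python
# Minimal stand-in (NOT the real llama-index class) showing why calling the
# engine object directly raises TypeError while .query() works.
class FakeQueryEngine:
    def query(self, text):
        return f"answer to: {text}"

engine = FakeQueryEngine()

try:
    engine("what are these documents about?")  # mimics query_engine(...)
except TypeError as err:
    print(type(err).__name__)  # prints "TypeError"

print(engine.query("what are these documents about?"))
```

So in the snippet above, `query_engine('what are these documents about?')` should read `query_engine.query('what are these documents about?')`.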
14 comments
Can you share the doc link that you are following?

what version are you using?
Where did you get this link 😅
This is a very old version. Like wayyy old

Use this: https://docs.llamaindex.ai/en/stable/index.html
This is the latest one
Make sure you spin up a new env and install llama-index
i got it from a tutorial that is just a few months old
ok let me see
ok, I have reached this error and I cannot find a solution for it in the docs:

Plain Text
from llama_index import LLMPredictor
How do I import it correctly?
You don't have to use it. This has been deprecated as well 😅

You can directly use llm object.
Are you using openai?
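(Editor's note: a hedged sketch of what "use the llm object directly" looks like, assuming the llama-index >= 0.10 module layout where the global `Settings` object replaces the deprecated `LLMPredictor`/`ServiceContext` plumbing; it is guarded so it degrades gracefully if the package or API key is absent:)

```python
# Sketch, module paths assumed for llama-index >= 0.10.
try:
    from llama_index.core import Settings
    from llama_index.llms.openai import OpenAI

    # Model name is illustrative; use whichever OpenAI model you prefer.
    Settings.llm = OpenAI(model="gpt-3.5-turbo")
    configured = True
except Exception:  # ImportError if llama-index is not installed, etc.
    configured = False

print("llm configured via Settings:", configured)
```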
Follow this tutorial. This is the latest and official documentation
Yes. I am confused: how can I use it directly? Basically I have 5 documents and I want the LLM to read them and emulate their content. For example, 5 documents about James Bond, and I want the chatbot to talk in the style of James Bond. Am I implementing the correct technique here?
This is my code so far. It is broken, and I am not sure how to tie the LLM to the context:
Plain Text
def train(directory):
    documents = SimpleDirectoryReader(directory).load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist()

documents = SimpleDirectoryReader('Knowlege').load_data()

service_context = ServiceContext.from_defaults(llm_predictor=OpenAI())
storage_context = StorageContext.from_defaults(persist_dir='./storage')
index = load_index_from_storage(storage_context, service_context=service_context)
query_engine = index.as_query_engine()

response = query_engine.query('Write me a dialogue like the style from the documents I have presented you?')
print(response)
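(Editor's note: a sketch of the persist-then-reload pattern the code above is reaching for, under the assumption of the llama-index >= 0.10 `llama_index.core` module layout; the imports sit inside the function so the pattern reads top to bottom even without the package installed, and `'Knowlege'` / `'./storage'` mirror the paths used above:)

```python
import os

PERSIST_DIR = "./storage"  # mirrors the persist_dir used above

def get_index():
    # Module paths assume llama-index >= 0.10 (llama_index.core layout).
    from llama_index.core import (
        VectorStoreIndex,
        SimpleDirectoryReader,
        StorageContext,
        load_index_from_storage,
    )

    if not os.path.exists(PERSIST_DIR):
        # First run: build the index from the documents and persist it.
        documents = SimpleDirectoryReader("Knowlege").load_data()
        index = VectorStoreIndex.from_documents(documents)
        index.storage_context.persist(persist_dir=PERSIST_DIR)
    else:
        # Later runs: reload the persisted index instead of rebuilding it.
        storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
        index = load_index_from_storage(storage_context)
    return index
```

With this, `get_index().as_query_engine().query(...)` replaces the manual `ServiceContext` wiring entirely.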
If you check the 5 lines of code in the starter tutorial here: https://docs.llamaindex.ai/en/stable/getting_started/starter_example.html#starter-tutorial


It can get you started with OpenAI

Just install it: pip install llama-index
Plain Text
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
nice, excellent!