shri

Top k

I am just getting started with RAG and LlamaIndex and trying to learn it.

I have 3 text files containing the text "Product is chair", "Product is table", and "Product is ball", respectively.

Even with a fairly simple LlamaIndex setup, if I use the prompt "List all products", I only get back 2 of the 3 products.

My code is as follows:

Python
import os
from llama_index import (ServiceContext, SimpleDirectoryReader, VectorStoreIndex,
                         PromptTemplate, StorageContext, load_index_from_storage)
from llama_index.llms import OpenAI
from llama_index.query_engine import RetrieverQueryEngine

llm = OpenAI(model="gpt-4-0125-preview")
service_context = ServiceContext.from_defaults(llm=llm)
documents = SimpleDirectoryReader(dir).load_data()
if not os.path.exists(PERSIST_DIR):
    index = VectorStoreIndex.from_documents(documents, service_context=service_context)
    index.storage_context.persist(persist_dir=PERSIST_DIR)
else:
    # load the persisted index (this branch was missing from the original snippet)
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context, service_context=service_context)
retriever = index.as_retriever()  # not shown in the original post; assumed
text_qa_template_str = (
    "Context information is"
    " below.\n---------------------\n{context_str}\n---------------------\nUsing"
    " both the context information and also using your own knowledge, answer"
    " the question: {query_str}\n")
text_qa_template = PromptTemplate(text_qa_template_str)
query_engine = RetrieverQueryEngine.from_args(
    retriever, response_mode="compact_accumulate",
    text_qa_template=text_qa_template, similarity_top_k=10)


What am I doing wrong?
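
A likely explanation, for readers who hit the same ceiling: in LlamaIndex, similarity_top_k is a setting of the retriever itself, so it has to be set when the retriever is built; passing it to RetrieverQueryEngine.from_args() does not reconfigure an already-constructed retriever, and a default vector retriever returns only 2 nodes. A minimal sketch of that adjustment, reusing the index and template from the snippet above:

Python
# similarity_top_k belongs to the retriever; the default of 2 would explain
# seeing only 2 of the 3 products.
retriever = index.as_retriever(similarity_top_k=10)
query_engine = RetrieverQueryEngine.from_args(
    retriever, response_mode="compact_accumulate", text_qa_template=text_qa_template)
print(query_engine.query("List all products"))

With all three one-line documents retrieved, the compact_accumulate synthesizer should see every product.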
Hey folks... I have a question about the expected behavior of LlamaIndex when the answer to a query is NOT part of the provided context. I have a file with a bunch of product information (pricing, description, etc.), and I use LlamaIndex + OpenAI to ask it questions and get back very accurate results.

Sometimes I want to ask questions whose answers are not in the provided files/data, even though OpenAI certainly has the information. But I don't get those answers; I get back a response saying the information is NOT part of the provided context.

Is there a way to get around this? Right now I don't do any prompt engineering; I simply send the user-provided question as-is to the model.
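
One approach, for what it's worth: LlamaIndex's default QA prompt instructs the model to answer from the context only, so the query engine will refuse when the answer isn't retrieved. Overriding text_qa_template with a prompt that explicitly allows falling back to the model's own knowledge (as in the first post above) usually changes that behavior. A minimal sketch, assuming an index is already built; the prompt wording and the example question are illustrative:

Python
from llama_index import PromptTemplate

# Hypothetical prompt that permits falling back to the model's own knowledge
# when the retrieved context does not contain the answer.
qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n{context_str}\n---------------------\n"
    "Answer the question using the context if it is relevant; otherwise, "
    "answer from your own knowledge.\nQuestion: {query_str}\n"
)
query_engine = index.as_query_engine(text_qa_template=qa_template)
print(query_engine.query("How does this product compare to its competitors?"))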