
Hello guys i have a pretty basic code

Hello guys, I have some pretty basic code, but something doesn't work.
My goal is to find a person's role. This information is in one of 20 documents.
If I build the index from only the single document that contains the information, my question is answered correctly.
As soon as I have two or more documents in the index, my response is something like "There's no information about John Doe in the context".

this is my code:

# import modules
from llama_index import VectorStoreIndex, SimpleDirectoryReader
import openai
openai.api_key = 'XYZ'

# build index
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# ask question
question = "Who is John Doe and what's his job at the company?"
gpt_response = query_engine.query(question).response.replace(". ", ".\n")
print(gpt_response)
2 comments
For this type of query, retrieval based on embeddings alone likely won't work well.

You can increase the top k (the default is 2): index.as_query_engine(similarity_top_k=3)
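In context, that suggestion looks like the sketch below. This assumes the same legacy llama_index API the question uses; the 'data' directory, the top_k value, and the question are examples, not prescriptions.

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
import openai

openai.api_key = 'XYZ'  # placeholder, as in the question

documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieve more chunks per query, so the one document that actually
# mentions John Doe is more likely to land in the LLM's context.
query_engine = index.as_query_engine(similarity_top_k=5)

response = query_engine.query("Who is John Doe and what's his job at the company?")
print(response.response)
```

With 20 documents and a default of 2 retrieved chunks, a chunk from the right document can easily miss the cut, which matches the symptom described above.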

Or you can use a keyword index.
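A keyword index matches on terms like the name "John Doe" rather than on embedding similarity. A minimal sketch, assuming the legacy llama_index API from the question (SimpleKeywordTableIndex was the keyword index class in those versions):

```python
from llama_index import SimpleKeywordTableIndex, SimpleDirectoryReader
import openai

openai.api_key = 'XYZ'  # placeholder, as in the question

documents = SimpleDirectoryReader('data').load_data()

# Builds a table mapping extracted keywords to the chunks that contain
# them; queries are answered by keyword lookup instead of embeddings.
keyword_index = SimpleKeywordTableIndex.from_documents(documents)
query_engine = keyword_index.as_query_engine()

response = query_engine.query("Who is John Doe and what's his job at the company?")
print(response.response)
```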

Or you can use a custom retriever that combines keyword and vector indexes:
https://gpt-index.readthedocs.io/en/latest/examples/query_engine/CustomRetrievers.html
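A rough sketch of that combination, loosely following the linked notebook; exact import paths vary across llama_index versions, so treat this as an outline rather than copy-paste code:

```python
from llama_index import VectorStoreIndex, SimpleKeywordTableIndex, SimpleDirectoryReader
from llama_index.retrievers import (
    BaseRetriever,
    VectorIndexRetriever,
    KeywordTableSimpleRetriever,
)
from llama_index.query_engine import RetrieverQueryEngine


class CustomRetriever(BaseRetriever):
    """Combine vector and keyword retrieval results."""

    def __init__(self, vector_retriever, keyword_retriever, mode="OR"):
        self._vector_retriever = vector_retriever
        self._keyword_retriever = keyword_retriever
        self._mode = mode  # "OR": union of results, "AND": intersection

    def _retrieve(self, query_bundle):
        vector_nodes = self._vector_retriever.retrieve(query_bundle)
        keyword_nodes = self._keyword_retriever.retrieve(query_bundle)

        vector_ids = {n.node.node_id for n in vector_nodes}
        keyword_ids = {n.node.node_id for n in keyword_nodes}
        combined = {n.node.node_id: n for n in vector_nodes + keyword_nodes}

        ids = (vector_ids & keyword_ids) if self._mode == "AND" \
              else (vector_ids | keyword_ids)
        return [combined[i] for i in ids]


documents = SimpleDirectoryReader('data').load_data()
vector_index = VectorStoreIndex.from_documents(documents)
keyword_index = SimpleKeywordTableIndex.from_documents(documents)

retriever = CustomRetriever(
    VectorIndexRetriever(index=vector_index, similarity_top_k=3),
    KeywordTableSimpleRetriever(index=keyword_index),
)
query_engine = RetrieverQueryEngine.from_args(retriever)
print(query_engine.query("Who is John Doe and what's his job at the company?"))
```

"OR" mode means a chunk is kept if either retriever finds it, so an exact keyword hit on "John Doe" survives even when the embedding similarity ranks it low.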