
Several sources

Hey guys!

I have used LangChain and LlamaIndex as indexes and noticed one major difference: LangChain gives me answers based on several sources (not only one document) and returns those sources to me. LlamaIndex always gives me a response based on one single document. Is it possible to do the same as LangChain does? I mean, answers based on several sources?
Yea of course!

By default, if you are using GPTSimpleVectorIndex, it only uses the single closest matching text chunk to create an answer.

You can increase this, try something like
index.query(..., similarity_top_k=3, response_mode="compact")
If you aren't using a vector index, let me know! Lol
Then how do I implement creating a response based on several documents? @Logan M
Just use the example I gave above, increase the similarity top k 👍
By default it uses 1? @Logan M
Yup! Try setting it to 3 maybe, and see what happens πŸ‘
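To make the `similarity_top_k` idea concrete, here is a toy sketch of what the index is doing under the hood: score every stored chunk against the query embedding and keep the k closest ones instead of only the single best match. This is plain Python with made-up embeddings, not the actual LlamaIndex internals.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up embeddings standing in for the index's stored text chunks.
chunks = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.7, 0.3, 0.1],
    "doc_c": [0.0, 0.1, 0.9],
}
query = [1.0, 0.2, 0.0]

def top_k(query, chunks, k):
    """Return the names of the k chunks most similar to the query."""
    ranked = sorted(chunks, key=lambda name: cosine(query, chunks[name]),
                    reverse=True)
    return ranked[:k]

print(top_k(query, chunks, 1))  # default behavior: one closest chunk
print(top_k(query, chunks, 3))  # similarity_top_k=3: three chunks feed the answer
```

With k=1 only `doc_a` is sent to the LLM; with k=3 all three chunks are, which is why raising `similarity_top_k` produces answers drawn from several sources.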
oh, super helpful! Love your explanations:)
Do you mind if I ask one more question? :)

Because I could not find it in docs.

How does it actually work with metadata?

For example, I have a lot of metadata as columns.

When I use the index, does it consider my metadata? @Logan M

Can I ask questions based on metadata or something like that?
It should be considering the metadata, yes.

I'm not sure how you are loading your data, but you can always inspect what it looks like before creating the index

print(documents[0].get_text()) should print the text from the first document, assuming you have a list of document objects from a data loader πŸ‘

Llama Index will read everything in there
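To show what that inspection step looks like, here is a toy stand-in for the document objects a data loader returns (the field names `text` and `extra_info` are assumptions for illustration; check what your actual loader produces):

```python
# Toy stand-in for a data loader's document objects: a bit of text
# plus a dict of metadata loaded alongside it.
class Document:
    def __init__(self, text, extra_info=None):
        self.text = text
        self.extra_info = extra_info or {}

    def get_text(self):
        return self.text

# Pretend this list came back from a data loader.
documents = [
    Document("Quarterly revenue grew 12%.",
             extra_info={"source": "report.pdf", "page": 3}),
]

# Inspect what will actually be indexed before building the index.
print(documents[0].get_text())
print(documents[0].extra_info)
```

Printing both fields before building the index is a quick way to confirm that the metadata columns you care about actually made it into the document objects.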
oh! Thank you so much:) got it:)