
Hammad
Joined September 25, 2024
Hammad

Hi,
I am getting an error with metrics = ["hit_rate", "mrr", "precision", "recall", "ap", "ndcg"]: it says that precision, recall, ap and ndcg are invalid metric names. I have upgraded my LlamaIndex package to 0.11.14. Can you highlight the cause of this error?
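For reference, here is what two of these metric names are meant to compute, sketched in plain Python. This is an illustrative sketch only, not the LlamaIndex implementation; the function names and the toy document ids are made up:

```python
def hit_rate(retrieved_ids, expected_ids):
    """1.0 if any expected document appears in the retrieved list, else 0.0."""
    return 1.0 if any(doc_id in expected_ids for doc_id in retrieved_ids) else 0.0

def mrr(retrieved_ids, expected_ids):
    """Reciprocal rank of the first relevant result for one query (0.0 if none)."""
    for rank, doc_id in enumerate(retrieved_ids, start=1):
        if doc_id in expected_ids:
            return 1.0 / rank
    return 0.0

retrieved = ["d3", "d1", "d7"]   # ids returned by the retriever, best first
expected = {"d1"}                # ids of the ground-truth relevant documents
print(hit_rate(retrieved, expected))  # 1.0
print(mrr(retrieved, expected))       # 0.5 (first hit at rank 2)
```

precision, recall, ap and ndcg follow the same pattern (they compare the retrieved list against the expected set), which is why they only need the evaluation dataset and not the LLM.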
12 comments
Hammad

Hi,
I need to evaluate my RAG pipeline. I used TruLens, but it gives me the import error "No module named 'trulens_eval'". I have also tried downgrading trulens, but that did not work. Please suggest another way to evaluate my RAG model. Furthermore, I am using Gemini as the LLM and tried LlamaIndex's response evaluation, but it is not providing satisfactory answers.
The package versions are:
llama-index = 0.9.8
llama-index-core = 0.10.30
llama-index-embeddings-gemini = 0.1.6
llama-index-llms-gemini = 0.1.7
llama-index-multi-modal-llms-gemini = 0.1.5
llama-index-readers-file = 0.1.19
llama-index-vector-stores-qdrant = 0.2.1
llama-parse = 0.4.1
trulens = 1.0.6
trulens-core = 1.0.6
trulens-dashboard = 1.0.6
trulens-eval = 0.19.1
trulens-feedback = 1.0.6

I am open to suggestions. I can also provide my complete code if required.
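Until the trulens import is resolved, one package-free sanity check is a token-overlap groundedness score. This is only a crude sketch (the function name is made up), not a substitute for TruLens or the LlamaIndex evaluators, but it runs with no extra dependencies:

```python
def groundedness_proxy(answer: str, context: str) -> float:
    """Fraction of answer tokens that also occur in the retrieved context.
    A crude proxy for 'is the answer grounded in the context?' -- it ignores
    word order and meaning, so treat it only as a smoke test."""
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return 0.0
    context_tokens = set(context.lower().split())
    return len(answer_tokens & context_tokens) / len(answer_tokens)

context = "PSL 2024 total spends were 10 billion rupees"
print(groundedness_proxy("spends were 10 billion rupees", context))   # 1.0
print(groundedness_proxy("spends were 99 trillion dollars", context)) # 0.4
```

A low score on answers that should be fully supported by the retrieved context is a quick signal that the pipeline (retrieval or generation) is drifting, before bringing a heavier evaluation framework back in.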
3 comments
Hammad

Hi,

Can anyone explain the concept behind the retriever in LlamaIndex? As per my knowledge, the retriever retrieves information from the documents by matching them against the query, and this matching is done by cosine similarity. Please highlight whether I am right or wrong. The RAG concept was first given in the research paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"; in that paper, 2 formulas are used for retrieval, which I have attached as pictures. I want to know which formula LlamaIndex (retriever = index.as_retriever(similarity_top_k=3)) works on.
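For reference, the cosine similarity being discussed is just the normalized dot product of the query embedding and a node embedding. A minimal sketch with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (|a| * |b|)"""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_vec = [1.0, 0.0, 1.0]   # embedding of the query (toy values)
node_vec = [0.5, 0.5, 0.5]    # embedding of one document chunk (toy values)
print(round(cosine_similarity(query_vec, node_vec), 4))  # 0.8165
```

similarity_top_k=3 then just keeps the 3 nodes whose embeddings score highest against the query under this measure.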
4 comments
Hammad

Hi,
I have an issue with my RAG code: it is not retrieving information from the document, even though the document is embedded. What could be the cause of this issue? I have tried many things, like changing the chunk size and changing the retriever's top_k. The code I am running is as follows:

# Imports assume llama-index 0.10.x; adjust the paths for other versions
from llama_index.core import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.query_pipeline import InputComponent, QueryPipeline
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.embeddings.langchain import LangchainEmbedding
from llama_index.llms.gemini import Gemini
from langchain_community.embeddings import HuggingFaceEmbeddings

pdfdocuments = SimpleDirectoryReader(r"C:\Users\Shaikh.Hammad\MS-Thesis\Data").load_data()
llm = Gemini()
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)

# Build the service context once and reuse it for both components
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
summarizer = TreeSummarize(service_context=service_context)
index = VectorStoreIndex.from_documents(pdfdocuments, service_context=service_context)
retriever = index.as_retriever(similarity_top_k=5)

p = QueryPipeline(verbose=True)
p.add_modules(
    {
        "input": InputComponent(),
        "retriever": retriever,
        "summarizer": summarizer,
    }
)
p.add_link("input", "retriever")
p.add_link("input", "summarizer", dest_key="query_str")
p.add_link("retriever", "summarizer", dest_key="nodes")
output = p.run(input="What is the PSL 2024 spends")

Output response: The provided context does not mention anything about PSL 2024 spends, so I cannot answer this question from the provided context.

There is a document named "PSL 2024 Analysis", but the model is using "PSL 2023 Analysis", which contains no information about 2024. Kindly help me understand why the model is not using the 2024 document. Does it have to do with the embeddings?
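One way to narrow this down is to print the retrieved nodes with their scores (for example, looping over retriever.retrieve("What is the PSL 2024 spends") and printing each node's metadata and score). The suspected failure mode can be sketched with made-up scores: if every chunk of the 2023 document happens to embed closer to the query than the 2024 chunk, the 2024 chunk never survives the top-k cut, so the summarizer never sees it. The scores below are hypothetical, for illustration only:

```python
# (document name, similarity score) pairs, as a retriever might rank them
nodes = [
    ("PSL 2023 Analysis", 0.81),
    ("PSL 2023 Analysis", 0.78),
    ("PSL 2024 Analysis", 0.74),
]

def top_k(scored_nodes, k):
    """Keep the k highest-scoring nodes, as a similarity retriever does."""
    return sorted(scored_nodes, key=lambda n: n[1], reverse=True)[:k]

print([name for name, _ in top_k(nodes, 2)])  # only 2023 chunks survive
print([name for name, _ in top_k(nodes, 3)])  # the 2024 chunk is now included
```

If the printed scores show this pattern in the real pipeline, raising similarity_top_k or restricting retrieval to the 2024 document (e.g. via metadata) would be the things to try.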
15 comments