
Hi all,
I am evaluating a RAG pipeline using Ragas and running into issues with the Ragas integration in LlamaIndex. I am using Qdrant as the vector DB.
import time

from datasets import Dataset
from llama_index.core import Settings
from ragas.integrations.llama_index import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall

Settings.chunk_size = chunk_size
Settings.chunk_overlap = chunk_overlap
Settings.embed_model = embed_model
Settings.llm = llm

query_engine = vector_index.as_query_engine()

# Prepare the dataset
dataset = Dataset.from_dict(ds_dict)

# Define metrics
metrics = [
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
]

# Evaluate using Ragas
start_time = time.time()
result = evaluate(
    query_engine=query_engine,
    metrics=metrics,
    dataset=dataset,
    llm=llm,
    embeddings=embed_model,
    raise_exceptions=False,
)

The same exception is thrown with LangChain as well.
9 comments
Please stand by @ashishabraham22 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Based on the information provided in the knowledge sources, it seems you are trying to evaluate a RAG pipeline using the ragas library in conjunction with LlamaIndex. However, the exact error message or exception you're encountering isn't specified, which makes it difficult to provide a precise solution.

However, I can provide some general advice based on the information available:

  1. Check your LlamaIndex version and updates: There have been several updates and fixes to LlamaIndex, including fixes to the RAG fusion pipeline and the Qdrant integration. Make sure you're using the latest version of LlamaIndex so these fixes are included.
  2. Check your code for correct usage of LlamaIndex and Ragas: The knowledge sources provide examples of how to use LlamaIndex and Ragas together for evaluation. Compare your code with those examples to ensure you're using the libraries correctly.
  3. Check your LLM and embedding model settings: The LLM and embedding model must be set in Settings before creating the query engine. Make sure you're doing this correctly in your code.
If you're still encountering issues after checking these points, it would be helpful to have more specific information about the exception you're encountering.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
Hi @kapa.ai, this is the error message:

ERROR:ragas.executor:Runner in Executor raised an exception
Traceback (most recent call last):
    response = await self._aclient.search(
AttributeError: 'NoneType' object has no attribute 'search'
Let me check the knowledge base for you. One moment! :books:
----------------------
The error message you're encountering indicates that the search method is being called on a NoneType object. This typically means that the object you're trying to call the method on hasn't been properly initialized.

In the context of your code, it seems like the self._aclient object is None. This object is likely responsible for handling the search queries in your vector index or database.
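As a minimal illustration of this failure mode (a toy sketch, not Qdrant's actual implementation), calling a method through an attribute that was never initialized raises exactly this AttributeError:

```python
import asyncio


class VectorStoreSketch:
    """Toy stand-in for a vector store that optionally takes an async client."""

    def __init__(self, aclient=None):
        # If no async client is supplied, the attribute silently stays None.
        self._aclient = aclient

    async def asearch(self, query):
        # With self._aclient == None, the attribute lookup below raises:
        # AttributeError: 'NoneType' object has no attribute 'search'
        return await self._aclient.search(query)


try:
    asyncio.run(VectorStoreSketch().asearch("hello"))
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'search'
```

Note that the synchronous path can work fine while the async path fails, which is why the error may only surface when an async caller (such as Ragas' executor) drives the query engine.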

Here are a few things you could check:

  1. Initialization of the query engine: Ensure that the query engine (or the client object) is correctly initialized. For instance, with LlamaIndex you might use vector_index.as_query_engine() to create the query engine; make sure vector_index is not None and is properly set up.
  2. Connection to the database: If self._aclient is a client object for your database (such as Qdrant), make sure the connection is properly established. This could involve checking your connection parameters, ensuring the database server is running, etc.
  3. Compatibility of the database with Ragas and LlamaIndex: Ensure that Qdrant is compatible with both Ragas and LlamaIndex. You might need to check the documentation or reach out to the maintainers of these libraries for this information.
Unfortunately, without more specific information about your code and setup, it's difficult to provide a more precise solution. If you continue to encounter issues, I would recommend reaching out to the maintainers of ragas and LlamaIndex for more specific assistance.
----------------------
Looks like you didn't pass the async client to Qdrant:
QdrantVectorStore(..., client=QdrantClient(...), aclient=AsyncQdrantClient(...))