When using OpenAI GPT-4 Turbo as the LLM in LlamaIndex, if an embedding model is not specified, which embedding model is used by default?
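
To my knowledge, when no embedding model is configured, LlamaIndex falls back to OpenAI's text-embedding-ada-002 (it needs an OpenAI API key for that). Below is a minimal sketch of pinning both models explicitly instead of relying on the default, assuming LlamaIndex 0.10+ with the llama-index-llms-openai and llama-index-embeddings-openai packages installed.

Plain Text
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# Set the LLM and embedding model explicitly rather than relying on defaults.
Settings.llm = OpenAI(model="gpt-4-turbo-preview")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")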


Plain Text
from llama_index.core import VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

chunk_sizes = [128, 256, 512]
nodes_list = []
vector_indices = []

for chunk_size in chunk_sizes:
    print(f"Chunk Size: {chunk_size}")
    splitter = SentenceSplitter(chunk_size=chunk_size, chunk_overlap=chunk_size // 2)

    # `docs` is assumed to be a list of Documents loaded elsewhere.
    nodes = splitter.get_nodes_from_documents(docs)

    # Tag each node with its chunk size, but keep that metadata out of the
    # embedding input and the LLM prompt.
    for node in nodes:
        node.metadata["chunk_size"] = chunk_size
        node.excluded_embed_metadata_keys = ["chunk_size"]
        node.excluded_llm_metadata_keys = ["chunk_size"]

    # Collect the nodes and build one index per chunk size.
    nodes_list.append(nodes)
    vector_index = VectorStoreIndex(nodes)
    vector_indices.append(vector_index)
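
As a possible follow-up (a sketch only; `question` below is a placeholder query), each per-chunk-size index can be wrapped in a query engine and asked the same question, which makes it easy to compare how chunk size affects the answers:

Plain Text
question = "What does the document say about X?"  # placeholder query

for chunk_size, index in zip(chunk_sizes, vector_indices):
    # One query engine per chunk size, retrieving the top-2 nodes.
    query_engine = index.as_query_engine(similarity_top_k=2)
    response = query_engine.query(question)
    print(f"--- chunk_size={chunk_size} ---")
    print(response)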
8 comments
Can you help with this one too?

Is it possible to compare and evaluate the performance of different retrievers?

For example, let's say I implemented an ensemble retriever and a BM25 retriever. Can the performance of those two be compared?

What I'm curious about is whether it is possible to compare and evaluate retrievers I have implemented.
In my opinion, you can compare them in two ways:
  • Compare the retrieved nodes against the query and check their relevancy, or, if you have a set of nodes that should have been returned, compare the retrieved nodes against that ground truth.
  • Use GPT-4 to check whether the retrieved nodes actually match what the query asked for (see the sketch below).
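
One hedged sketch of that second option, assuming LlamaIndex 0.10+ and that ContextRelevancyEvaluator is available in your version; `my_retriever` and the question text are placeholders:

Plain Text
from llama_index.core.evaluation import ContextRelevancyEvaluator
from llama_index.llms.openai import OpenAI

# Retrieve nodes for a sample question with your own retriever (placeholder name).
question = "What does the document say about X?"
retrieved_nodes = my_retriever.retrieve(question)

# Ask GPT-4 to judge whether the retrieved context is relevant to the query.
evaluator = ContextRelevancyEvaluator(llm=OpenAI(model="gpt-4"))
result = evaluator.evaluate(
    query=question,
    contexts=[node.get_content() for node in retrieved_nodes],
)
print(result.score, result.feedback)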
Is it true that LlamaIndex does not yet have a module for comparing and evaluating implemented retrievers?

Something like the response comparison evaluation, but for retrieval.
Not entirely sure, but yeah, it looks like comparing retrievers is not implemented yet.
For single retrieval evaluation, you can check this out: https://docs.llamaindex.ai/en/stable/examples/evaluation/retrieval/retriever_eval/
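
Building on that notebook, here is a hedged sketch of scoring two retrievers on the same labelled set. `bm25_retriever`, `ensemble_retriever`, and `qa_pairs` (a dict mapping each question to the node ids that should be retrieved) are placeholders you would build yourself:

Plain Text
from llama_index.core.evaluation import RetrieverEvaluator

retrievers = {"bm25": bm25_retriever, "ensemble": ensemble_retriever}

for name, retriever in retrievers.items():
    evaluator = RetrieverEvaluator.from_metric_names(
        ["hit_rate", "mrr"], retriever=retriever
    )
    # Score every labelled question with this retriever.
    results = [
        evaluator.evaluate(query=query, expected_ids=expected_ids)
        for query, expected_ids in qa_pairs.items()
    ]
    # Average each metric over the labelled questions.
    averages = {
        metric: sum(r.metric_vals_dict[metric] for r in results) / len(results)
        for metric in ("hit_rate", "mrr")
    }
    print(name, averages)

Whichever retriever gets the higher hit rate and MRR on the same questions is, by this measure, retrieving better.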
What is a single retriever? Can an ensemble retriever be evaluated this way too?