Restodecoca
Joined January 24, 2025
Hi, can someone help? When I try to use a node postprocessor with QueryFusionRetriever, I don't get the citation template that create-llama --pro provides. If I use just the index_retriever in the chat engine, I get the citations correctly, but with QueryFusionRetriever I don't:
Plain Text
    node_postprocessors = []
    if citation_prompt:
        node_postprocessors = [NodeCitationProcessor()]
        system_prompt = f"{system_prompt}\n{citation_prompt}"

    index_config = IndexConfig(callback_manager=callback_manager, **(params or {}))
    index = get_index(index_config)
    if index is None:
        raise HTTPException(
            status_code=500,
            detail="StorageContext is empty - call 'poetry run generate' to generate the storage first",
        )
    if top_k != 0 and kwargs.get("similarity_top_k") is None:
        kwargs["similarity_top_k"] = top_k
    index_retriever = index.as_retriever(**kwargs)
    bm25_dir = os.getenv("BM25_PATH", os.path.join(STORAGE_DIR, "bm25"))
    if os.path.exists(bm25_dir):
        bm25_retriever = BM25Retriever.from_persist_dir(bm25_dir)
        bm25_retriever.similarity_top_k = top_k
        bm25_retriever.language = "portuguese"
    else:
        raise HTTPException(
            status_code=500,
            detail="BM25Retriever is empty - call 'poetry run generate' to generate the storage first",
        )
    retriever = QueryFusionRetriever(
        [index_retriever, bm25_retriever],
        similarity_top_k=top_k,
        mode="reciprocal_rerank",
        num_queries=1,
        use_async=True,
        verbose=True,
        callback_manager=callback_manager,
    )
    return CondensePlusContextChatEngine(
        llm=llm,
        memory=memory,
        system_prompt=system_prompt,
        context_prompt=context_prompt,
        retriever=retriever,
        node_postprocessors=node_postprocessors,  # type: ignore
        callback_manager=callback_manager,
    )
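For what it's worth, the mode="reciprocal_rerank" used above can be sketched in plain Python. This is a conceptual sketch of reciprocal rank fusion, not the library's actual implementation; the node IDs and the smoothing constant k=60 are illustrative assumptions:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked lists of node IDs into one ranking.

    Each node's fused score is the sum over lists of 1 / (k + rank),
    with 1-based ranks; nodes ranked highly by several retrievers
    rise to the top of the fused list.
    """
    scores = {}
    for ranked in ranked_lists:
        for rank, node_id in enumerate(ranked, start=1):
            scores[node_id] = scores.get(node_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: the dense and BM25 rankings disagree, but "doc_b" is near
# the top of both lists, so it wins the fused ranking.
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],   # dense/vector ranking
    ["doc_b", "doc_d", "doc_a"],   # BM25 ranking
])
# fused[0] == "doc_b"
```

The point of the k constant is to flatten the score curve so that a single first-place vote can't dominate broad agreement across retrievers.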
9 comments
Restodecoca
Semantic

Hi, can someone tell me if SemanticSplitterNodeParser works like this:
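The rough idea behind SemanticSplitterNodeParser can be sketched in plain Python: embed adjacent sentences, compare neighbours, and start a new chunk where similarity drops. This is a conceptual sketch under assumptions; the toy embed() function and the fixed threshold are illustrative, while the real parser uses percentiles over embedding distances:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

def semantic_split(sentences, embed, threshold=0.5):
    """Group consecutive sentences; break where neighbour similarity dips."""
    chunks = [[sentences[0]]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(cur)) < threshold:
            chunks.append([cur])      # topic shift: start a new chunk
        else:
            chunks[-1].append(cur)    # same topic: extend current chunk
    return [" ".join(c) for c in chunks]

# Toy embedding: bag-of-words counts over a tiny fixed vocabulary.
VOCAB = ["cat", "dog", "stock", "market"]
def embed(text):
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

chunks = semantic_split(
    ["the cat sat", "the cat slept", "the stock market fell"],
    embed,
)
# The two cat sentences land in one chunk; the finance sentence starts a new one.
```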
4 comments
z
Hello guys, I'm having some problems indexing with Chroma. When I try to index more than 52 nodes (chunk_size=1024, chunk_overlap=128), the program just stops after finishing the embeddings. I'm using the example from https://docs.llamaindex.ai/en/stable/examples/retrievers/bm25_retriever/#hybrid-retriever-with-bm25-chroma
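One thing worth trying when a large insert seems to hang is writing the nodes in small batches rather than one big call. A minimal sketch of the batching logic, assuming a hypothetical add_batch callback standing in for the real store's insert method (the batch size of 20 is also illustrative):

```python
def insert_in_batches(nodes, add_batch, batch_size=20):
    """Call add_batch(chunk) for successive slices of nodes.

    Returns the total number of nodes handed to add_batch, so the
    caller can confirm nothing was silently dropped.
    """
    inserted = 0
    for start in range(0, len(nodes), batch_size):
        chunk = nodes[start:start + batch_size]
        add_batch(chunk)              # one small write per batch
        inserted += len(chunk)
    return inserted

# Usage with a plain list standing in for a real vector store:
store = []
count = insert_in_batches(list(range(52)), store.extend, batch_size=20)
# 52 nodes go in as batches of 20, 20, and 12.
```

Batching also makes it easier to see where a stall happens, since each batch completes (or fails) on its own.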
2 comments