Hello. Can I use node postprocessors like this?
Plain Text
from llama_index.core import Settings
from llama_index.core.postprocessor import (
    LongContextReorder,
    SentenceEmbeddingOptimizer,
    SimilarityPostprocessor,
)

# BertEmbeddings is a custom embedding class defined elsewhere
Settings.embed_model = BertEmbeddings()
# <....>
ce = index.as_chat_engine(
    chat_mode="condense_plus_context",
    memory=memory,
    context_prompt=(
        "An arxiv paper/document has been attached, take the help of its context to respond."
        "Here are the relevant contexts of the paper:\n"
        "{context_str}"
        "\nUse the previous chat history, or the context above, to interact and respond."
    ),
    node_postprocessors=[
        SimilarityPostprocessor(similarity_cutoff=0.5),  # drop nodes scoring below the cutoff
        LongContextReorder(),  # move the most relevant nodes toward the edges of the context
        SentenceEmbeddingOptimizer(percentile_cutoff=0.5),  # prune low-relevance sentences
    ],
)

edit: It seems to be retrying requests to some /embeddings endpoint, which I don't understand.

Plain Text
2024-04-10 11:25:11 - Retrying request to /embeddings in 0.974049 seconds
2024-04-10 11:25:12 - Retrying request to /embeddings in 1.699883 seconds
2024-04-10 11:25:14 - Retrying request to /embeddings in 3.662126 seconds
2024-04-10 11:25:18 - Retrying request to /embeddings in 6.485818 seconds
2024-04-10 11:25:25 - Retrying request to /embeddings in 7.854721 seconds
2024-04-10 11:25:33 - Retrying request to /embeddings in 7.680301 seconds
2024-04-10 11:25:41 - Retrying request to /embeddings in 6.903584 seconds
2024-04-10 11:25:48 - Retrying request to /embeddings in 7.442128 seconds
14 comments
Yes, you can use postprocessors like this.
If you have not set Settings.embed_model = embed_model, you'll have to pass the embedding model to SentenceEmbeddingOptimizer.
Check it here: https://docs.llamaindex.ai/en/stable/module_guides/querying/node_postprocessors/node_postprocessors/?h=sentenceembeddingoptimizer#sentenceembeddingoptimizer
I think the embeddings calls are being generated from there.
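Something like this should work; a minimal sketch, assuming BertEmbeddings is the custom embedding class from the snippet above:
Plain Text
from llama_index.core.postprocessor import SentenceEmbeddingOptimizer

# Sketch: pass the embedding model explicitly instead of relying on Settings.
# BertEmbeddings is the custom embedding class from the original snippet.
optimizer = SentenceEmbeddingOptimizer(
    embed_model=BertEmbeddings(),
    percentile_cutoff=0.5,
)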
I am using my own Bert embeddings here, which I have set up like this, as shared before:
Plain Text
Settings.embed_model = BertEmbeddings()

The preprocessing went fine, but it seems to hang at the statement that has the node_postprocessors argument.
I will try passing BertEmbeddings() to the SentenceEmbeddingOptimizer to see if it works.
Nevermind, I figured it out. Just passing the object to the SentenceEmbeddingOptimizer was enough. No service context was needed. (The docs say it is deprecated.)
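For reference, roughly what the working call looks like now (a sketch; the context_prompt is unchanged from the snippet above and omitted here):
Plain Text
ce = index.as_chat_engine(
    chat_mode="condense_plus_context",
    memory=memory,
    node_postprocessors=[
        SimilarityPostprocessor(similarity_cutoff=0.5),
        LongContextReorder(),
        # the fix: give the optimizer its own embedding model
        SentenceEmbeddingOptimizer(
            embed_model=BertEmbeddings(),
            percentile_cutoff=0.5,
        ),
    ],
)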
Yeah that's what I said πŸ˜…, you needed to pass the embedding model
Yeah, I was confused because I was already setting Settings.embed_model = BertEmbeddings() at the beginning.
Anyway, it's working awesome now!!!
Attachment: GK0IoYpbAAANSJx.png
Is this hosted somewhere, or is the code open-source?
It is hosted on my PC. 🤣 I wrote it because HuggingFace Chat doesn't work. I really like node postprocessors. Just look! Same model with higher precision:
Attachment: GK0Iu7cbcAA_-BS.png
This looks great! The answer isn't entirely correct, but the sources and web search implementations look neat!
Yes. But when I am in a hurry, I do not like looking at the sources. (Also, I use llama_index for documents as well now. Earlier it was my own 1000-line implementation.)