Akarshan Biswas
Joined September 25, 2024
Hello. Can I use node postprocessors like this?
Settings.embed_model = BertEmbeddings()
# <....>
ce = index.as_chat_engine(
    chat_mode="condense_plus_context",
    memory=memory,
    context_prompt=(
        "An arxiv paper/document has been attached; use its context to respond.\n"
        "Here are the relevant contexts of the paper:\n"
        "{context_str}"
        "\nUse the previous chat history, or the context above, to interact and respond."
    ),
    node_postprocessors=[
        SimilarityPostprocessor(similarity_cutoff=0.5),
        LongContextReorder(),
        SentenceEmbeddingOptimizer(percentile_cutoff=0.5),
    ],
)
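For what it's worth, the similarity cutoff in that postprocessor list behaves like a plain score filter over the retrieved nodes. A minimal standalone sketch of that idea (plain Python, not the llama_index implementation; `ScoredNode` is a made-up stand-in for llama_index's `NodeWithScore`):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScoredNode:
    # Hypothetical stand-in for llama_index's NodeWithScore.
    text: str
    score: Optional[float]

def similarity_cutoff_filter(nodes: List[ScoredNode], cutoff: float) -> List[ScoredNode]:
    """Keep only nodes whose retrieval score meets the cutoff,
    mirroring what a similarity_cutoff=0.5 postprocessor does."""
    return [n for n in nodes if n.score is not None and n.score >= cutoff]

nodes = [
    ScoredNode("relevant passage", 0.82),
    ScoredNode("borderline passage", 0.50),
    ScoredNode("off-topic passage", 0.31),
]
kept = similarity_cutoff_filter(nodes, cutoff=0.5)
print([n.text for n in kept])  # the 0.31 node is dropped
```

Note the two other postprocessors in the list do more than filter (reordering and sentence-level pruning), so this sketch only covers the cutoff step.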

edit: It seems to be making repeated requests to some /embeddings endpoint, which I don't understand.

2024-04-10 11:25:11 - Retrying request to /embeddings in 0.974049 seconds
2024-04-10 11:25:12 - Retrying request to /embeddings in 1.699883 seconds
2024-04-10 11:25:14 - Retrying request to /embeddings in 3.662126 seconds
2024-04-10 11:25:18 - Retrying request to /embeddings in 6.485818 seconds
2024-04-10 11:25:25 - Retrying request to /embeddings in 7.854721 seconds
2024-04-10 11:25:33 - Retrying request to /embeddings in 7.680301 seconds
2024-04-10 11:25:41 - Retrying request to /embeddings in 6.903584 seconds
2024-04-10 11:25:48 - Retrying request to /embeddings in 7.442128 seconds
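The growing, then roughly capped, delays in that log look like an HTTP client's exponential backoff with jitter. A rough standalone sketch of that retry-delay pattern (the base delay, cap, and jitter scheme here are illustrative assumptions, not the actual client's settings):

```python
import random

def backoff_delays(retries: int, base: float = 0.5, cap: float = 8.0,
                   rng: random.Random = random.Random(0)) -> list:
    """Capped exponential backoff with jitter: delays grow roughly as
    base * 2**attempt, then hover near the cap, as in the log above."""
    delays = []
    for attempt in range(retries):
        exp = min(cap, base * (2 ** attempt))
        # Jitter: pick uniformly between half the backoff and the full backoff.
        delays.append(rng.uniform(exp / 2, exp))
    return delays

for d in backoff_delays(8):
    print(f"Retrying request to /embeddings in {d:.6f} seconds")
```

The jitter is why the later delays in the log wobble around a ceiling instead of doubling forever.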
14 comments
Very interesting that the query_engine prompts the model like this. Is it possible to customize this?
5 comments
Is it possible to use GPT4All embeddings in llama_index?
2 comments