
Would it be possible to create a condense chat engine with a custom retriever? My idea would be to set up the query engine as a retriever query engine and then specify the custom retriever as the retriever for that engine.
for sure.

You can set up the query engine from scratch with whatever retriever you want, and then you can construct the condense chat engine with that query engine.

Setting the retriever
https://gpt-index.readthedocs.io/en/stable/end_to_end_tutorials/usage_pattern.html#configuring-response-synthesis

Creating the chat engine
Plain Text
from llama_index.chat_engine import CondenseQuestionChatEngine

chat_engine = CondenseQuestionChatEngine.from_defaults(query_engine, ...)
like this sort of:
Plain Text
condense_engine = CondenseQuestionChatEngine.from_defaults(
    service_context=service_context,
    query_engine=RetrieverQueryEngine(
        retriever=FAISSRetriever(
            index=vector_store_index,
            embed_model=self.embed_model,
        )
    ),
)
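To make the composition clearer, here is a toy, self-contained sketch of the pattern being discussed: a chat engine condenses the conversation into a standalone question, then delegates to a query engine that wraps a pluggable retriever. All class names below are illustrative stand-ins, not the real llama_index API; in practice you would subclass llama_index's retriever/engine classes instead.

```python
class KeywordRetriever:
    """Stand-in for a custom retriever: returns docs containing the query."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query):
        return [d for d in self.docs if query.lower() in d.lower()]


class RetrieverQueryEngine:
    """Wraps any object exposing a .retrieve(query) method."""
    def __init__(self, retriever):
        self.retriever = retriever

    def query(self, question):
        hits = self.retriever.retrieve(question)
        return hits[0] if hits else "No match found."


class CondenseQuestionChatEngine:
    """Condenses chat history + new message into one question, then queries."""
    def __init__(self, query_engine):
        self.query_engine = query_engine
        self.history = []

    def chat(self, message):
        # Real implementations use an LLM to condense the history;
        # this toy version just treats the latest message as the question.
        condensed = message
        answer = self.query_engine.query(condensed)
        self.history.append((message, answer))
        return answer


docs = ["Paris is the capital of France.", "Berlin is the capital of Germany."]
engine = CondenseQuestionChatEngine(RetrieverQueryEngine(KeywordRetriever(docs)))
print(engine.chat("paris"))  # → Paris is the capital of France.
```

The point is the layering: the chat engine never talks to the retriever directly, so swapping in a custom retriever only requires handing a different retriever to the query engine.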
yea that works, I think!
Awesome thank you so much!