Find answers from the community

Aerk
Hey, I could use some help understanding the following:
  • I want to use hybrid search with a Weaviate DB in a chat engine
  • I see how I can do that with a query engine via `index.as_query_engine(vector_store_query_mode="hybrid")`, but I want to use a chat engine
  • I'm using the CONDENSE_PLUS_CONTEXT chat mode, and I see in the source that it does:
Plain Text
return CondensePlusContextChatEngine.from_defaults(
    retriever=self.as_retriever(**kwargs),
    llm=llm,
    **kwargs,
)


Would the following work?
`index.as_chat_engine(vector_store_query_mode="hybrid")`?
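i.e., I'm assuming the extra kwargs get forwarded through to as_retriever, so it would expand to roughly the following (the similarity_top_k value is just illustrative):
Plain Text
from llama_index.core.chat_engine import CondensePlusContextChatEngine

# index and llm are the ones already set up elsewhere in my app.
# Assuming as_chat_engine forwards kwargs to as_retriever, the call above
# should be roughly equivalent to building the engine by hand:
chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=index.as_retriever(
        vector_store_query_mode="hybrid",
        similarity_top_k=2,  # illustrative, not required
    ),
    llm=llm,
)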

What if I want to do something more complicated, like using this auto-merging retriever as the retriever? https://docs.llamaindex.ai/en/stable/examples/retrievers/auto_merging_retriever.html
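If so, my guess (happy to be corrected) is that I could skip as_chat_engine entirely and pass the custom retriever straight into the chat engine, something like this, where storage_context is assumed to hold the full node hierarchy from the auto-merging example:
Plain Text
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.retrievers import AutoMergingRetriever

# Wrap the index's vector retriever; storage_context is assumed to contain
# the parent/child node hierarchy built per the auto-merging retriever docs.
base_retriever = index.as_retriever(similarity_top_k=6)
retriever = AutoMergingRetriever(base_retriever, storage_context, verbose=True)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=retriever,
    llm=llm,
)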
5 comments
Hi! I'm trying to figure out how to get the reference (text string and doc name) for each answer in a simple RAG app I'm building over local PDFs. I'm using the framework from create-llama (with LlamaParse) + Weaviate. I'm looking at the following line in chat.py:
Plain Text
response = await chat_engine.astream_chat(lastMessage.content, messages)


How could I do this? Do I need to provide a system prompt and hope the LLM echoes its sources back, or can I configure something so the response includes the context string (plus sources) it worked with?
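From poking around, it looks like the streamed response object may already carry the retrieved nodes as source_nodes; if that's right, something like this might be enough (assuming the loader populates a file_name metadata key):
Plain Text
response = await chat_engine.astream_chat(lastMessage.content, messages)

# If source_nodes is populated, each entry has the chunk text plus whatever
# metadata the loader attached (file name, page number, etc.).
for node_with_score in response.source_nodes:
    print(node_with_score.node.metadata.get("file_name"))  # assumed metadata key
    print(node_with_score.node.get_content()[:200])        # start of the chunk text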
6 comments