
In my case I want to use async for the SummaryIndex and the VectorStoreIndex, used in the RouterQueryEngine, which is used in the CondenseQuestionChatEngine
use_async is just a parameter that we pass in; under the hood it uses async to speed some things up, but the top-level call is still synchronous, so you don't have to use await
Ok, which is best in my use case, according to you?
If you don't want to deal with await and async functions, use_async is fine

But if you had a server or something where top-level functions need to be async, then using stuff like aquery() and achat() is the way to go
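The distinction can be sketched without LlamaIndex at all. The names below (`_fetch_summary`, `query`, etc.) are illustrative stand-ins, not library API; the point is just the pattern of a synchronous top-level function driving concurrent async work internally, which is roughly what use_async gives you:

```python
import asyncio

async def _fetch_summary(doc: str) -> str:
    # Stand-in for an async LLM call; asyncio.sleep imitates network I/O.
    await asyncio.sleep(0.01)
    return f"summary of {doc}"

async def _fetch_all(docs: list[str]) -> list[str]:
    # The speed-up: the per-document calls run concurrently, not one by one.
    return await asyncio.gather(*(_fetch_summary(d) for d in docs))

def query(docs: list[str]) -> list[str]:
    # Synchronous top level, like query() with use_async=True:
    # callers never have to write await.
    return asyncio.run(_fetch_all(docs))

print(query(["a.txt", "b.txt"]))
```

If your own entry point were already an async function (e.g. a web handler), you would instead await the async variant directly rather than wrapping it like this, which is the aquery()/achat() situation described above.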
Ok, in my case only the LlamaIndex functions need this
Finally, btw, how can I change/modify CondenseQuestionChatEngine, or what are the alternatives?
I want it to condense the question just by remaking it, without taking care of the chat context; maybe with only the documents in context, but not the chat
You might find CondensePlusContextChatEngine a nice alternative
Although it works a little differently, I can explain
CondenseQuestionChatEngine always takes the chat history and latest user message, formulates a query, and runs the query engine, returning the query engine response directly.

CondensePlusContextChatEngine will take the chat history and latest user message, formulate a query, but only RETRIEVE relevant nodes (i.e. it uses a retriever, not a query engine). It inserts the retrieved context into the system prompt, and then lets the LLM use either the retrieved context or the chat history to answer. This is usually a much more natural conversation feeling.
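The two flows described above can be sketched with stub functions standing in for the LLM, retriever, and query engine (every name here is illustrative, not actual LlamaIndex API):

```python
def condense(chat_history: list[str], user_msg: str) -> str:
    # Stub: rewrite the latest message into a standalone query
    # using the chat history.
    return f"standalone: {user_msg}"

def run_query_engine(query: str) -> str:
    # Stub for a full query engine: retrieve AND synthesize an answer.
    return f"answer({query})"

def retrieve(query: str) -> list[str]:
    # Stub for a retriever: return relevant node text only, no synthesis.
    return [f"node about {query}"]

def llm_chat(system_prompt: str, chat_history: list[str], user_msg: str) -> str:
    # Stub chat LLM call: sees the system prompt AND the chat history.
    return f"chat answer using [{system_prompt}]"

def condense_question_chat(chat_history: list[str], user_msg: str) -> str:
    # CondenseQuestionChatEngine flow: the query engine's response IS the
    # reply, so the chat history is not visible while answering.
    query = condense(chat_history, user_msg)
    return run_query_engine(query)

def condense_plus_context_chat(chat_history: list[str], user_msg: str) -> str:
    # CondensePlusContextChatEngine flow: condense only to retrieve; the LLM
    # then answers with both the retrieved context and the full chat history.
    query = condense(chat_history, user_msg)
    context = "\n".join(retrieve(query))
    system_prompt = f"Here are the relevant documents:\n{context}"
    return llm_chat(system_prompt, chat_history, user_msg)
```

The key structural difference is visible in the last two functions: the first returns the query engine output directly, while the second keeps the LLM in the loop with both the retrieved nodes and the conversation.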
I remember you are using a router query engine -- you might have to convert that to a router retriever to use this chat engine
Yes I found out about it with context chat engine
But I need to pass a retriever or a context prompt as args; for a summary purpose that's not what we want, or will it still work?
Yeah I see, that's what I think I've understood by reading it
Yea the context_prompt is just the template for the system prompt.

By default, it looks like this

Plain Text
DEFAULT_CONTEXT_PROMPT_TEMPLATE = """
  The following is a friendly conversation between a user and an AI assistant.
  The assistant is talkative and provides lots of specific details from its context.
  If the assistant does not know the answer to a question, it truthfully says it
  does not know.

  Here are the relevant documents for the context:

  {context_str}

  Instruction: Based on the above documents, provide a detailed answer for the user question below.
  Answer "don't know" if not present in the document.
  """
context_str is where the retriever puts its retrieved context at each user message
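Concretely, at each turn the engine just fills that template slot with the text of the retrieved nodes. A plain str.format illustration (the node texts below are made up; the real prompt class does the substitution for you):

```python
# Trimmed version of the default template shown above.
CONTEXT_PROMPT_TEMPLATE = """\
Here are the relevant documents for the context:

{context_str}

Instruction: Based on the above documents, provide a detailed answer for the user question below.
"""

# Pretend these are the node texts the retriever returned for the latest message.
retrieved_nodes = ["Doc 1: sales grew 10% in Q3.", "Doc 2: churn fell to 2%."]

system_prompt = CONTEXT_PROMPT_TEMPLATE.format(
    context_str="\n\n".join(retrieved_nodes)
)
print(system_prompt)
```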
Ok, so, when I use the summary index (as the retriever here), will it work in the same way?
Or maybe there is a better practice for a summary purpose in this use case?