

Hey everyone, does anyone have tips on integrating the SubQuestionQueryEngine with a query planning agent, or alternatively, a way to skip generating sub-questions at times, all within a chat context? i.e., the user asks a question, the system may or may not use the SubQuestionQueryEngine, and the question/answer pair is then stored in chat memory.
You could create a custom query engine that just calls the LLM directly when there are no sub-questions to generate.
Thanks, will give that a try.
Here's an example of a custom query engine:

Python
from llama_index.llms import OpenAI
from llama_index.query_engine import CustomQueryEngine

class LLMQueryEngine(CustomQueryEngine):
    """A query engine that skips retrieval and sends the query straight to the LLM."""

    llm: OpenAI

    def custom_query(self, query_str: str):
        # Call the LLM directly; no sub-question generation involved.
        response = self.llm.complete(query_str)
        return str(response)

llm_query_engine = LLMQueryEngine(llm=OpenAI())
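
To tie this into the chat flow from the question, here's a rough, untested sketch of one way to wire it up: the LLMQueryEngine and a SubQuestionQueryEngine sit behind a RouterQueryEngine, whose LLM selector lets simple queries skip sub-question generation, and the router is wrapped in a chat engine so each question/answer pair lands in chat memory. Note that doc_query_engine is a placeholder for whatever index query engine you already have, and the exact from_defaults signatures can vary between llama_index versions.

Python
from llama_index.chat_engine import CondenseQuestionChatEngine
from llama_index.llms import OpenAI
from llama_index.query_engine import RouterQueryEngine, SubQuestionQueryEngine
from llama_index.tools import QueryEngineTool

# Placeholder: doc_query_engine is your existing index's query engine.
doc_tool = QueryEngineTool.from_defaults(
    query_engine=doc_query_engine,
    description="Useful for questions about the indexed documents.",
)

# Handles complex questions by decomposing them into sub-questions.
sub_question_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=[doc_tool],
)

# The router's selector picks one engine per query, so simple questions
# go straight to the LLM instead of producing sub-questions.
router = RouterQueryEngine.from_defaults(
    query_engine_tools=[
        QueryEngineTool.from_defaults(
            query_engine=sub_question_engine,
            description="Useful for complex, multi-part questions over the documents.",
        ),
        QueryEngineTool.from_defaults(
            query_engine=LLMQueryEngine(llm=OpenAI()),
            description="Useful for simple questions or general conversation.",
        ),
    ],
)

# The chat engine stores question/answer pairs in its memory buffer.
chat_engine = CondenseQuestionChatEngine.from_defaults(query_engine=router)
print(chat_engine.chat("What does the report say about Q3 revenue?"))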