The community member is using a query engine to fetch data from Qdrant and generate responses. They found that user input is sometimes poor, so they try to enhance it by asking the language model to provide a list of semantic keywords. The community member wonders how to use these semantic keywords as search parameters while still using the user's input as the query, or if there is a way to prepare the context themselves and then do the language model response in a second step.
In the comments, another community member suggests checking the chat_engine with "condense + context" mode, which condenses the conversation history into an updated question before querying the index. However, the original poster says that this did not work for them; they now talk to the language model directly, which got it working.
Hello, I'm using a query engine to fetch data from Qdrant and then generate a response. I found that the user input is sometimes very poor, so I try to enhance it by asking the LLM for a list of semantic keywords for the user's input. That works well as such... Now I wonder: how can I use the semantic keywords as search parameters while still using the user's input as the query? Or is there a way for me to prepare the context myself and then do the LLM response in a second step?
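The second idea the poster describes (prepare the context yourself, then call the LLM) can be sketched as a plain two-step flow. This is only an illustration, not the poster's actual code: `search_qdrant` and the in-memory `corpus` are hypothetical stand-ins for a real Qdrant search (where the keywords might become a payload filter or an expanded query), and the final prompt would be sent to whatever LLM client is in use.

```python
# Hypothetical two-step flow: retrieval is separated from response
# generation, so keywords can drive the search while the original
# user input stays in the prompt.

def search_qdrant(query: str, keywords: list[str]) -> list[str]:
    # Stand-in for a real Qdrant search: a real implementation would
    # embed the query and use the keywords as a filter or expansion.
    corpus = {
        "billing": "Invoices are generated on the first of each month.",
        "refund": "Refunds are processed within 5 business days.",
    }
    return [text for kw, text in corpus.items() if kw in keywords]

def build_prompt(user_input: str, context: list[str]) -> str:
    # Step 2: assemble the retrieved context and the *original*
    # user input into a single prompt for the LLM.
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {user_input}"
    )

# Step 1: the LLM has already produced semantic keywords for the
# (possibly poor) user input; use them to drive retrieval.
user_input = "money back how long"
keywords = ["refund", "payment"]

context = search_qdrant(user_input, keywords)
prompt = build_prompt(user_input, context)
print(prompt)
```

The point of the split is that step 1 is free to use any retrieval strategy (keywords, filters, reranking) without the query engine deciding for you, and step 2 still answers the user's literal question.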