
Updated 6 months ago

Hello, I'm using a query engine to fetch

At a glance

The community member is using a query engine to fetch data from Qdrant and generate responses. They found that user input is sometimes poor, so they try to enhance it by asking the language model to provide a list of semantic keywords. The community member wonders how to use these semantic keywords as search parameters while still using the user's input as the query, or if there is a way to prepare the context themselves and then do the language model response in a second step.

In the comments, another community member suggests checking the "chat_engine with condense + context mode" feature, which helps in preparing the updated question based on the user conversation and then asking the index. However, the original poster says that this did not work for them, and they now talk directly to the language model, which helped them get it working.

Hello, I'm using a query engine to fetch data from Qdrant and then generate a response. I found that the user input is sometimes very poor, so I try to enhance it by asking the LLM for a list of semantic keywords for the user's input. That works well as such. Now I wonder: how can I use the semantic keywords as search parameters while still using the user's input as the query? Or is there a way for me to prepare the context myself and then do the LLM response in a second step?
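The two-step flow asked about here can be sketched without any particular library: retrieve context yourself using the LLM-generated keywords, then send the original user input plus that context to the LLM. Everything below is an illustrative stand-in, not actual Qdrant or LlamaIndex code: the document list, the keyword-overlap scorer (standing in for a real vector/filtered search), and the hard-coded keyword list (standing in for an LLM call) are all hypothetical.

```python
# Sketch of a two-step RAG flow: (1) retrieve context using
# LLM-generated keywords, (2) answer the ORIGINAL user input
# against that context. All stores and keyword lists below are
# toy stand-ins for Qdrant and a real LLM keyword-extraction call.

DOCS = [
    "Qdrant supports payload filtering alongside vector search.",
    "A query engine combines a retriever with a response synthesizer.",
    "Keyword extraction can improve recall for short user queries.",
]

def retrieve_by_keywords(keywords, docs, top_k=2):
    """Toy retriever: rank docs by how many keywords they contain.
    In a real setup this would be a Qdrant search, possibly with
    the keywords applied as payload filters."""
    scored = [(sum(kw.lower() in d.lower() for kw in keywords), d)
              for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def build_prompt(user_input, context_docs):
    """Second step: pose the original user input as the question,
    with the keyword-retrieved documents as context."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {user_input}"
    )

keywords = ["keyword", "recall"]  # pretend the LLM produced these
user_input = "why are my search results bad for short queries?"
context = retrieve_by_keywords(keywords, DOCS)
prompt = build_prompt(user_input, context)
print(prompt)  # this prompt would then go to the LLM for the answer
```

The point of the split is that the retrieval query (the keywords) and the question the LLM answers (the raw user input) no longer have to be the same string.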
2 comments
Check the chat_engine with condense + context mode.
It helps by first preparing an updated question based on the user conversation and then querying the index.

https://docs.llamaindex.ai/en/stable/examples/chat_engine/chat_engine_condense_plus_context/
It did not do the trick; I now talk directly to the LLM, and that way I got it working, but thank you πŸ™‚