Hi, thank you for all your help in the past days. Unfortunately I have another, more generic question on retrievers. I have very long prompts and I want to weight a specific sentence more heavily than the rest for retrieval, e.g. the embeddings of that sentence count double or something similar. How would I go about doing something like this?
Hmm, that's not really possible.

What I think you could do is separate the query string from the embedding strings

Plain Text
from llama_index import QueryBundle

# query_str is what the LLM reads; custom_embedding_strs is what gets embedded.
resp = query_engine.query(
    QueryBundle(query_str="...", custom_embedding_strs=["...", ...])
)
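This also gives a way at the original weighting question: the vector retriever aggregates the embeddings of those strings (mean aggregation is the default in LlamaIndex's embedding models), so repeating the important sentence makes it count more in the aggregate. A minimal sketch, where full_prompt and key_sentence are hypothetical placeholder names:

Plain Text
from llama_index import QueryBundle

# key_sentence appears twice, so it carries double weight
# in the mean-aggregated query embedding (assuming mean aggregation).
bundle = QueryBundle(
    query_str=full_prompt,
    custom_embedding_strs=[full_prompt, key_sentence, key_sentence],
)
resp = query_engine.query(bundle)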
Hmmm, still bad performance from the retriever. I am using a hybrid of vector similarity and BM25, to no avail. It just does not want to retrieve the right document, even though it is obvious to a human that this document is needed to answer the question. However, my prompt has multiple questions and incorporates the entire history of a patient. Any idea about a retriever that works for this use case?
Maybe you need a step to take the prompt and have the LLM rewrite it into a query (or multiple queries), and retrieve with that?
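A minimal sketch of that idea, assuming llm is any LlamaIndex LLM and retriever is the existing retriever; the rewrite prompt wording here is made up for illustration:

Plain Text
# Condense a long clinical prompt into one focused search query,
# then retrieve with the rewritten query instead of the raw prompt.
rewrite_tmpl = (
    "Rewrite the following prompt as a single, focused search query "
    "that captures its main information need:\n\n{prompt}"
)
search_query = llm.complete(rewrite_tmpl.format(prompt=long_prompt)).text
nodes = retriever.retrieve(search_query)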
@dev_advocate @Logan M @WhiteFang_Jr Hi, hope you are doing well. I am on track to finding out why my retrievers perform so badly. I printed out all the retrieved nodes and actually found the one I am looking for. It wasn't ranked badly either, with a solid 0.89.
HOWEVER, some other nodes in my retrieval have scores in the tens, like 13.7 or 18.3.
Way too high. I thought the scores were normalized between 0 and 1.

I am still using the hybrid retriever. Might that be a problem?
Plain Text
class HybridRetriever(BaseRetriever):
    """Naive hybrid retriever: union of BM25 and vector results, deduplicated."""

    def __init__(self, vector_retriever, bm25_retriever):
        self.vector_retriever = vector_retriever
        self.bm25_retriever = bm25_retriever
        super().__init__()

    def _retrieve(self, query, **kwargs):
        bm25_nodes = self.bm25_retriever.retrieve(query, **kwargs)
        vector_nodes = self.vector_retriever.retrieve(query, **kwargs)
        # Merge both result lists, keeping each node only once.
        # Note: the raw BM25 and cosine-similarity scores are on
        # different scales, so the merged scores are not comparable.
        all_nodes = []
        node_ids = set()
        for n in bm25_nodes + vector_nodes:
            if n.node.node_id not in node_ids:
                all_nodes.append(n)
                node_ids.add(n.node.node_id)
        return all_nodes
Correct, the BM25 scores are ranked/scored differently.

Typically you'll want to apply some ranking algorithm (reciprocal rank fusion for example)

At the very least, you can normalize the scores from BM25
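For example, a minimal reciprocal rank fusion sketch over the two lists from the HybridRetriever above; it scores nodes purely by rank, so the incomparable raw BM25 and cosine scores drop out (k=60 is the conventional constant):

Plain Text
def reciprocal_rank_fusion(bm25_nodes, vector_nodes, k=60):
    # Each node scores sum(1 / (k + rank)) over the lists it appears in.
    fused = {}
    for nodes in (bm25_nodes, vector_nodes):
        for rank, n in enumerate(nodes, start=1):
            entry = fused.setdefault(n.node.node_id, [0.0, n])
            entry[0] += 1.0 / (k + rank)
    ranked = sorted(fused.values(), key=lambda e: e[0], reverse=True)
    for score, n in ranked:
        n.score = score  # replace the raw score with the fused one
    return [n for _, n in ranked]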
found this:
Plain Text
vector_retriever = index.as_retriever(similarity_top_k=10)
bm25_retriever = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=10)

vary_question_tmpl = """\
You are an AI assistant. Your goal is to generate up to {num_queries} \
variations of a given question about the breast carcinoma guideline. \
Be creative!

For example:
Base question: Should a genetic test be done, given the circumstances?
Question variations:
When should a patient be genetically examined under these conditions?
Is a genetic laboratory test necessary here?
Should Ms. X undergo genetic testing to better assess the risk?

Now let's try it!
Base question: {query}
Question variations:
"""

hybrid_retriever = QueryFusionRetriever(
    [vector_retriever, bm25_retriever],
    llm=llm,
    similarity_top_k=10,
    num_queries=4,
    mode="reciprocal_rerank",
    use_async=True,
    verbose=True,
    query_gen_prompt=vary_question_tmpl,
)
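With mode="reciprocal_rerank", the fused rank-based scores replace the incomparable raw BM25 and cosine values, which is also why the scores above 1 disappear. A hedged usage sketch, assuming the standard RetrieverQueryEngine wrapper with default response synthesis:

Plain Text
from llama_index.query_engine import RetrieverQueryEngine

query_engine = RetrieverQueryEngine.from_args(hybrid_retriever)
resp = query_engine.query("...")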
Solved all my problems for now
Yup, that does the job 🙂