Hello,

Is there a way to just get the context from the query? For example, I want to extract the context without having to make a call to the LLM.
I'm trying to use a vector store index alongside reranking.
7 comments
Hi, yes: you can create a retriever that will fetch the related docs from your given source of documents.

Plain Text
retriever = index.as_retriever()
nodes = retriever.retrieve("Who is Paul Graham?")

# inspect each retrieved node
for node in nodes:
    print(node)
@WhiteFang_Jr Is it possible to then pass the nodes to a query engine?

I am looking for a way to return both the query response and the metadata of the nodes used in the context.
With the above changes, you can interact with the LLM directly if you only want to use the fetched nodes to get an answer.

Plain Text
retriever = index.as_retriever()
nodes = retriever.retrieve("Who is Paul Graham?")

# concatenate the node text and collect the metadata of every node
text = ""
metadata = []
for node in nodes:
    text += node.text
    metadata.append(node.extra_info)  # metadata info for this node

response = llm.complete(text + "\n\nYour question")


Let me know if I got your query right 😅
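The pattern above can be sketched self-contained, with a plain dataclass standing in for a retrieved LlamaIndex node so no index or LLM is needed; `SimpleNode` and `answer_with_sources` are illustrative names, not library APIs:

```python
from dataclasses import dataclass, field

@dataclass
class SimpleNode:
    # minimal stand-in for a retrieved LlamaIndex node
    text: str
    extra_info: dict = field(default_factory=dict)

def answer_with_sources(nodes, question):
    """Build an LLM prompt from retrieved nodes and return it
    together with the metadata of every node used."""
    context = "\n".join(node.text for node in nodes)
    metadata = [node.extra_info for node in nodes]  # one entry per node
    prompt = context + "\n\n" + question
    return prompt, metadata

nodes = [
    SimpleNode("Paul Graham co-founded Y Combinator.", {"source": "essay.txt"}),
    SimpleNode("He wrote many essays on startups.", {"source": "bio.txt"}),
]
prompt, metadata = answer_with_sources(nodes, "Who is Paul Graham?")
print(metadata)  # [{'source': 'essay.txt'}, {'source': 'bio.txt'}]
```

In a real setup you would pass `prompt` to `llm.complete(...)` and return `metadata` alongside the response.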
awesome that worked - thanks @WhiteFang_Jr
Thanks @WhiteFang_Jr, do you know if it's possible to use a sentence transformer reranker in addition?
Yeah, I think you can:
https://docs.llamaindex.ai/en/stable/examples/node_postprocessor/LLMReranker-Lyft-10k.html

But the reranker will only work on the retrieved nodes.

Also, for a sentence-transformer reranker, you can pass it off as an LLM and define the service_context for the LLMReranker.
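To show where a reranker slots in, here is a minimal sketch: the retrieved nodes are rescored against the query and only the top candidates are kept before prompting the LLM. The word-overlap `score` below is a toy stand-in for a real sentence-transformer cross-encoder, and `Node` and `rerank` are illustrative names, not library APIs:

```python
class Node:
    # minimal stand-in for a retrieved node
    def __init__(self, text):
        self.text = text

def rerank(query, nodes, top_n=2):
    """Toy reranker: score each retrieved node by word overlap with
    the query and keep the top_n best. A real setup would replace
    score() with a sentence-transformer cross-encoder prediction."""
    q_words = set(w.strip(".,?!") for w in query.lower().split())

    def score(node):
        n_words = set(w.strip(".,?!") for w in node.text.lower().split())
        return len(q_words & n_words)

    return sorted(nodes, key=score, reverse=True)[:top_n]

retrieved = [
    Node("Cooking pasta at home"),
    Node("Paul Graham founded Y Combinator"),
    Node("Who is Paul Graham and what did he write"),
]
best = rerank("Who is Paul Graham?", retrieved)
print([n.text for n in best][0])  # Who is Paul Graham and what did he write
```

Note this reranking happens strictly between retrieval and the LLM call, which matches the point above that a reranker only acts on the retrieved nodes.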
I'm not sure I understand how to pass the sentence transformer reranker to the service context; when I do, I get an error message saying it has no attribute `metadata`.