
Hi, I have a fairly simple use case - my dataset is small enough to always fit into a query, so I'm always preparing a single manual TextNode and creating a new index with that:

Plain Text
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode

# a single manual node containing the whole dataset
nodes = [TextNode(text=input)]
index = VectorStoreIndex(nodes=nodes, show_progress=True)
query_engine = index.as_query_engine()
prompt = get_prompt_from_promptlayer(name)
result = await query_engine.aquery(prompt)

I noticed that llama_index is still creating embeddings for my query - why would it do that? How can I tell it to just use the nodes I've provided? Also, is there any direct benefit to using llama_index for such a use case?
Don't use the vector index in this case, just use the SummaryIndex -- it will always fetch all nodes (in this case, the single node) without embeddings
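For reference, a minimal sketch of that SummaryIndex setup, mirroring the snippet from the question (input and prompt are assumed to be defined as there):

Plain Text
from llama_index.core import SummaryIndex
from llama_index.core.schema import TextNode

# same single manual node as in the question; no embeddings are computed
nodes = [TextNode(text=input)]
index = SummaryIndex(nodes)
query_engine = index.as_query_engine()
result = await query_engine.aquery(prompt)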
The benefit here is mostly the storage and response synthesis code
tbh if you wanted, you could just use a response synthesizer directly
Thanks, I was looking for that
Plain Text
from llama_index.core import get_response_synthesizer

# llm is any configured LlamaIndex LLM; query_str is the question to answer
synth = get_response_synthesizer(response_mode="compact", llm=llm)

response = synth.get_response(query_str, ["text1", "text2", ...])
probably the easiest setup tbh
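If you want to keep the async style from the original snippet, the synthesizer also has an async counterpart; a small sketch assuming the same synth and inputs as above:

Plain Text
# async variant, matching the aquery call in the question
response = await synth.aget_response(query_str, ["text1", "text2"])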