Hi, I have a fairly simple use case: my dataset is small enough to always fit into a single query, so each time I prepare one TextNode by hand and build a fresh index from it:
```python
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode

# Wrap the whole input in a single node and index just that one node.
nodes = [TextNode(text=input)]
index = VectorStoreIndex(nodes=nodes, show_progress=True)
query_engine = index.as_query_engine()

prompt = get_prompt_from_promptlayer(name)
result = await query_engine.aquery(prompt)
```
I noticed that LlamaIndex still creates an embedding for my query. Why does it do that, and how can I tell it to just use the node I've provided? Also, is there any direct benefit to using LlamaIndex for a use case like this?