
MorugaScorpion
Joined September 25, 2024
Are the kwargs for as_query_engine() documented?
4 comments
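They don't appear to be collected on one reference page: in recent llama-index versions, the kwargs to `as_query_engine()` are forwarded to the underlying retriever and response synthesizer, so the accepted names come from those components. A minimal sketch of three commonly used ones (`build_query_engine` is just an illustrative wrapper; verify the exact kwarg set against your installed version's source):

```python
def build_query_engine(index, top_k: int = 5):
    """Illustrative wrapper: as_query_engine() forwards its kwargs to the
    retriever and response synthesizer it builds under the hood."""
    return index.as_query_engine(
        similarity_top_k=top_k,    # retriever: how many chunks to fetch
        response_mode="compact",   # synthesizer: how chunks are combined
        streaming=False,           # return one response, not a token stream
    )
```

With a real `VectorStoreIndex`, this returns a `RetrieverQueryEngine`; grepping the retriever and synthesizer constructors is the practical way to enumerate the rest of the options.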
Is it possible to get the chunk(s) that were retrieved by a query engine?

VectorStoreIndex.from_documents().as_query_engine() returns a llama_index.query_engine.retriever_query_engine.RetrieverQueryEngine, but https://gpt-index.readthedocs.io/en/latest/reference/query/query_engines/retriever_query_engine.html does not document a query() function.

This documentation is difficult to deal with
3 comments
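On the retrieval question: yes — in recent llama-index versions, the object returned by `query()` exposes the retrieved chunks as `response.source_nodes`, a list of `NodeWithScore`. A hedged sketch (`show_sources` is an illustrative helper, not library API; older releases use `node.get_text()` rather than `node.get_content()`):

```python
def show_sources(response):
    """Format each retrieved chunk with its similarity score.

    Works on any object with a .source_nodes list of items exposing
    .score and .node.get_content() (the NodeWithScore shape)."""
    lines = []
    for nws in response.source_nodes:
        lines.append(f"{nws.score:.3f}  {nws.node.get_content()[:80]}")
    return lines

# Usage (needs a configured index and LLM):
# response = index.as_query_engine().query("What is X?")
# print("\n".join(show_sources(response)))
```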
Is it possible to use a Dense Passage Retriever (i.e. separate embedding models for queries and the indexed chunks/passages)?

This would require a separate embedding model for the chunks/passages and the queries

Hypothetical document embeddings (HyDE) tries to solve the same problem, but it seems like a hack: it is unlikely to work on rare phrases, since the generative model wouldn't know the answer to the query.
4 comments
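llama-index's stock vector retriever embeds queries and chunks with the same embed model, so a DPR-style dual encoder takes a custom retriever. A library-free sketch of the core idea — `embed_passage` and `embed_query` stand in for two separately trained embedding models, and the brute-force cosine scoring is deliberately naive:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class DualEncoderRetriever:
    """Minimal DPR-style sketch: one encoder for passages, a different
    one for queries. Both are callables returning vectors of equal length."""

    def __init__(self, embed_passage, embed_query):
        self.embed_passage = embed_passage
        self.embed_query = embed_query
        self.store = []  # list of (embedding, passage) pairs

    def index(self, passages):
        for p in passages:
            self.store.append((self.embed_passage(p), p))

    def retrieve(self, query, top_k=2):
        q = self.embed_query(query)
        ranked = sorted(self.store, key=lambda e: cosine(q, e[0]), reverse=True)
        return [p for _, p in ranked[:top_k]]
```

A real version would swap in the two trained encoders and an approximate nearest-neighbor store, but the query/passage asymmetry is all in the two constructor arguments.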
For example, I would like to see what prompts are being created and used to process context chunks.
1 comment
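Recent llama-index versions expose this via `get_prompts()` on query engines, which returns a dict of the prompt templates in use — e.g. the text-QA and refine templates that wrap retrieved context chunks. `dump_prompts` below is an illustrative helper; the `.template` attribute access is hedged because the template object's shape varies by version:

```python
def dump_prompts(query_engine):
    """Print and return the prompt templates a query engine will use.

    Expects the get_prompts() mixin found on recent llama-index query
    engines: a dict mapping names like
    'response_synthesizer:text_qa_template' to template objects."""
    prompts = query_engine.get_prompts()
    for name, tmpl in prompts.items():
        print(f"--- {name} ---")
        print(getattr(tmpl, "template", tmpl))  # fall back to the object itself
    return prompts
```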