mrm8488/longformer-base-4096-finetuned-s...

The end goal is to use this model to do extraction using llama index for the convenience ... https://huggingface.co/mrm8488/longformer-base-4096-finetuned-squadv2
Probably you'd want to use a vector store index.

A summary index will send every chunk to the LLM for every query (which is geared more toward summarization than QA).
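To make the difference concrete, here is a toy sketch of the two retrieval behaviours. This is not the LlamaIndex API, just an illustration under assumed bag-of-words similarity: a summary index forwards every chunk to the LLM, while a vector store index forwards only the top-k most similar chunks.

```python
from collections import Counter
from math import sqrt


def bow(text: str) -> Counter:
    """Bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def summary_index_select(chunks, query):
    # Summary-index behaviour: every chunk goes to the LLM, regardless of the query.
    return list(chunks)


def vector_index_select(chunks, query, top_k=1):
    # Vector-store-index behaviour: only the top-k chunks most similar to the query.
    q = bow(query)
    ranked = sorted(chunks, key=lambda c: cosine(bow(c), q), reverse=True)
    return ranked[:top_k]


chunks = [
    "Longformer handles long documents with sparse attention.",
    "Paris is the capital of France.",
    "SQuAD v2 includes unanswerable questions.",
]
query = "What is the capital of France?"

print(len(summary_index_select(chunks, query)))  # all 3 chunks
print(vector_index_select(chunks, query)[0])     # only the Paris chunk
```

In a real LlamaIndex setup the embedding model replaces the bag-of-words similarity, but the routing logic is the same: summary index = everything, vector store index = nearest neighbours.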
I do want to send every chunk to the LLM. The answer should come directly from the document;
as in, an exact textual match (which this model seems to do), with the predicted start and end positions.
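For context, extractive QA models like the Longformer SQuAD v2 checkpoint produce per-token start and end logits, and the answer is the span that maximizes their sum. A minimal sketch of that span selection (plain Python, hypothetical logit values rather than real model output):

```python
def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    subject to end >= start and a bounded span length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best


# Hypothetical logits for a toy passage; a real model would emit these per token.
tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start_logits = [0.1, 0.2, 0.1, 0.3, 0.2, 4.0, 0.1]
end_logits = [0.1, 0.1, 0.2, 0.5, 0.3, 5.0, 0.4]

s, e = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # -> Paris
```

Because the span is cut directly out of the document tokens, the answer is an exact textual match, and (s, e) are the start/end positions the thread mentions.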
Then you just need the chunks associated with your answer, plus your extra metadata, so @Logan M is 99.9% correct.