mrm8488/longformer-base-4096-finetuned-s...
nbulkz
10 months ago
The end goal is to use this model to do extraction using LlamaIndex, for the convenience ...
https://huggingface.co/mrm8488/longformer-base-4096-finetuned-squadv2
4 comments
Logan M
10 months ago
Probably you'd want to use a vector store index.
A summary index will send every chunk to the LLM for every query (it's geared more towards summarizing than QA).
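To illustrate the distinction, here is a minimal sketch of what a vector store index does at query time: rank chunks by similarity and send only the top-k to the LLM, instead of all of them. The bag-of-words "embedding" and the sample chunks are toy stand-ins, not LlamaIndex's actual implementation (which uses a learned embedding model).

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real vector store uses a learned model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical document chunks for illustration.
chunks = [
    "The contract term is twelve months.",
    "Payment is due within 30 days of invoice.",
    "Either party may terminate with written notice.",
]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]  # only these top-k chunks reach the LLM
```

A summary index, by contrast, would skip the ranking step and feed all three chunks to the LLM on every query.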
nbulkz
10 months ago
I do want to send every chunk to the LLM. The answer should come directly from the document.
nbulkz
10 months ago
As in, an exact textual match (which this model seems to do), with the provided start and end positions.
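That extractive behavior boils down to slicing the answer out of the context at predicted offsets. The sketch below shows the idea with hand-picked character offsets; the real longformer-base-4096-finetuned-squadv2 model predicts token-level start/end positions from logits, which would then be mapped back to character offsets.

```python
def extract_span(context, start, end):
    # Slice the exact answer text given start/end character offsets,
    # mimicking how an extractive QA model reports its answer.
    # Offsets here are hand-picked for illustration.
    return context[start:end]

context = "The lease runs for twelve months starting January 1."
answer = extract_span(context, 19, 32)
# The answer is an exact textual match, verifiable against the source:
assert answer == context[19:32]
```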
GeoloeG
10 months ago
Then you just need the chunks associated with your answer, plus your extra metadata, so @Logan M is 99.9% correct.
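Tying the thread together: once a span is extracted from a retrieved chunk, you can return it alongside that chunk's metadata as provenance. The chunk records and field names below are hypothetical, just to show the shape of the result.

```python
# Hypothetical chunks carrying metadata (e.g. from a document loader).
chunks = [
    {"text": "Payment is due within 30 days of invoice.",
     "doc": "contract.pdf", "page": 4},
    {"text": "Either party may terminate with written notice.",
     "doc": "contract.pdf", "page": 7},
]

def answer_with_provenance(chunk_idx, start, end):
    # Pair the exact extracted span with the metadata of its source chunk.
    chunk = chunks[chunk_idx]
    return {
        "answer": chunk["text"][start:end],
        "source": {"doc": chunk["doc"], "page": chunk["page"]},
    }

result = answer_with_provenance(0, 22, 29)
```

The caller gets both the verbatim answer and enough metadata to point back at the original document location.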