bbornsztein
Offline, last seen last month
Joined September 25, 2024
Getting this error when using the GPTPineconeIndex (on a rather large set of documents):

Plain Text
ValueError: Effective chunk size is non positive after considering extra_info


Any idea what's going on?
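
My rough reading of where this comes from, with made-up numbers (this is not the library's actual splitter code, just the arithmetic implied by the message): the splitter seems to reserve room in each chunk for each document's extra_info, so very long extra_info can leave no room for the text itself. Trimming the extra_info, or bumping the chunk size, should presumably avoid it.

Python
# Made-up numbers illustrating the arithmetic behind the error
# (not the library's splitter code).
chunk_size = 512          # chunk size in tokens
extra_info_tokens = 600   # pretend token count of a very long extra_info string

effective_chunk_size = chunk_size - extra_info_tokens
if effective_chunk_size <= 0:
    raise ValueError(
        "Effective chunk size is non positive after considering extra_info"
    )
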
6 comments
Not sure how to resolve this error: "The batch size should not be larger than 2048"
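
If I'm reading it right, this comes from the OpenAI embeddings endpoint, which caps the number of inputs per request at 2048. A generic batching sketch as a possible workaround (not the library's internal fix; the helper name is mine):

Python
from typing import Iterator, List

MAX_BATCH = 2048  # per-request input limit reported by the error

def batches(texts: List[str], size: int = MAX_BATCH) -> Iterator[List[str]]:
    """Yield slices of `texts` no larger than `size`, to embed one batch at a time."""
    for start in range(0, len(texts), size):
        yield texts[start : start + size]
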
6 comments
👋 I added Zendesk API, Intercom API, and WordPress API document loaders to AgentHQ this morning. Happy to contribute them back to Llama Hub if there's interest.
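
The loaders all follow roughly the same shape; here's a stripped-down sketch of the pattern (hypothetical class, endpoint, and auth details, and I'm assuming the usual BaseReader/Document interface rather than pasting the real AgentHQ code):

Python
from typing import List

import requests
from llama_index.readers.base import BaseReader
from llama_index.readers.schema.base import Document

class ZendeskTicketReader(BaseReader):
    """Toy reader sketch: fetch tickets from the Zendesk API and wrap them as Documents."""

    def __init__(self, subdomain: str, api_token: str):
        self.base_url = f"https://{subdomain}.zendesk.com/api/v2"
        self.api_token = api_token

    def load_data(self) -> List[Document]:
        resp = requests.get(
            f"{self.base_url}/tickets.json",
            headers={"Authorization": f"Bearer {self.api_token}"},
        )
        resp.raise_for_status()
        return [
            Document(ticket["description"], extra_info={"ticket_id": ticket["id"]})
            for ticket in resp.json().get("tickets", [])
        ]
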
12 comments
I was just looking at: https://github.com/abhijithneilabraham/tableQA

Not sure about the timeseries stuff, and it's not connected/integrated with LlamaIndex, but worth a look for QA over tabular data.
7 comments
Getting this error when running a GPTSimpleVectorIndex:

Plain Text
A single term is larger than the allowed chunk size.
Term size: 511
Chunk size: 512
Effective chunk size: 476

What's going on there?
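
Best guess at the mechanics (reading the numbers, not the library code): the splitter can only break text at separators, so a single 511-token "term" with no separator in it can't fit once extra_info shrinks the usable chunk to 476 tokens. One pre-cleaning idea, in case it's a long URL or base64 blob (hypothetical helper; the cap below is a rough character count rather than tokens):

Python
import re

MAX_TERM_CHARS = 400  # rough cap, in characters; pick something under the effective chunk size

def break_long_terms(text: str, max_len: int = MAX_TERM_CHARS) -> str:
    """Insert spaces into any whitespace-free run longer than max_len so it can be split."""
    def _split(match: re.Match) -> str:
        term = match.group(0)
        return " ".join(term[i : i + max_len] for i in range(0, len(term), max_len))
    return re.sub(rf"\S{{{max_len},}}", _split, text)
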
8 comments
Right - so I'm doing this and finding the ListIndex is much faster with required_keywords (makes sense).

Python
from llama_index.indices.keyword_table.utils import extract_keywords_given_response

# extract keywords from the query, then require them so the ListIndex
# skips nodes that don't contain any of them
keywords = extract_keywords_given_response(input, start_token="")

query_result = index.query(
    input, verbose=True, llm_predictor=llm_predictor, required_keywords=keywords)


For large indexes I was finding it too slow to be usable. Wondering if extracting keywords should be a default (or at least suggested).
7 comments
Wondering if there's any way to save/load an index to S3 or some other filestore? I suppose we'd then need to download the whole index every time a query is run?
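
What I have in mind is something like this (a sketch that assumes the index's save_to_disk/load_from_disk helpers plus boto3; the bucket and key names are made up), with the obvious downside that the whole JSON gets pulled down before each query unless it's cached locally:

Python
import boto3
from llama_index import GPTSimpleVectorIndex

s3 = boto3.client("s3")
BUCKET, KEY = "my-index-bucket", "indexes/docs_index.json"  # made-up names

def upload_index(index: GPTSimpleVectorIndex) -> None:
    index.save_to_disk("/tmp/index.json")
    s3.upload_file("/tmp/index.json", BUCKET, KEY)

def download_index() -> GPTSimpleVectorIndex:
    s3.download_file(BUCKET, KEY, "/tmp/index.json")
    return GPTSimpleVectorIndex.load_from_disk("/tmp/index.json")
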
11 comments