Find answers from the community

tshu
What is an LLM predictor?
And what is an embedding?
Which OpenAI model is best for each?
3 comments
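On the embedding part of the question: an embedding maps text to a vector of numbers so that similar texts land close together, which is what makes similarity search over chunks possible. A dependency-free toy sketch of the idea (the hashing "embedding" here is purely illustrative; real systems use a trained model such as OpenAI's text-embedding-ada-002):

```python
import hashlib
import math

def toy_embed(text: str, dim: int = 8) -> list[float]:
    """Map text to a fixed-size unit vector by hashing words into buckets.
    Purely illustrative -- real embeddings come from a trained model."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two unit vectors is just their dot product."""
    return sum(x * y for x, y in zip(a, b))

# Texts with the same words map to identical vectors (similarity 1.0);
# unrelated texts generally score lower.
sim_same = cosine(toy_embed("alex loves bob"), toy_embed("bob loves alex"))
sim_diff = cosine(toy_embed("alex loves bob"), toy_embed("quantum physics"))
```

The LLM predictor, by contrast, is the component that sends assembled prompts to a completion model and returns its text, so the two are typically served by different models (an embedding model for indexing, a chat/completion model for answering).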
I have one question. When we parse large PDF files, the text gets structured into chunks and only the relevant chunks are fed to GPT. So GPT doesn't have an idea of the whole PDF and is actually working only on a chunk. But what if one chunk has some data that requires understanding of another chunk (i.e. it is related to some other chunk)?

E.g.
chunk256: Alex loves Bob.
chunk290: Bob loves only Costa.

Now if I ask "Is Alex's love two-sided?", should I expect a correct answer? And if yes, how is that happening?
3 comments
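One reason cross-chunk questions can still work: retrieval is usually top-k rather than top-1, so both related chunks can land in the same prompt when each scores well against the query, and the LLM then reasons across them together. A minimal sketch of top-k retrieval under that assumption (the word-overlap scorer below is a stand-in for real embedding similarity):

```python
def top_k_chunks(query: str, chunks: dict[str, str], k: int = 2) -> list[str]:
    """Score each chunk by word overlap with the query and keep the best k.
    Real systems score with embedding cosine similarity instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [chunk_id for chunk_id, _ in scored[:k]]

chunks = {
    "chunk256": "alex loves bob",
    "chunk290": "bob loves only costa",
    "chunk301": "the weather was sunny",
}
# With k=2, both chunks mentioning the people in the question are retrieved,
# so the LLM sees them side by side in one prompt.
retrieved = top_k_chunks("is alex's love for bob two sided", chunks, k=2)
```

If the second chunk does not score highly enough to be retrieved, the model genuinely cannot see it, so a correct answer is not guaranteed in that case.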
I have made a SimpleVectorIndex from 3 documents adding up to 1,000 pages. Querying it sometimes takes a minute. What is the best way to reduce this time?
3 comments
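Query latency in a setup like this usually comes from re-embedding work and from sending many chunks to the LLM per query; two common mitigations are embedding the chunks once up front (and persisting that index) and lowering the number of chunks sent per query. A toy sketch of the first idea, where `embed_fn` is a hypothetical stand-in for a real, slow embedding API call:

```python
CALLS = {"embed": 0}

def embed_fn(text: str) -> set[str]:
    """Hypothetical stand-in for a real (slow) embedding call; the word set
    plays the role of the embedding vector, and CALLS counts invocations."""
    CALLS["embed"] += 1
    return set(text.lower().split())

class CachedIndex:
    """Embed every chunk once at build time and reuse those embeddings,
    so each query pays for only one embedding call (the query itself)."""
    def __init__(self, chunks: list[str]):
        self.chunks = chunks
        self.chunk_embs = [embed_fn(c) for c in chunks]  # paid once, up front

    def query(self, question: str) -> str:
        q = embed_fn(question)  # the only embedding call per query
        scores = [len(q & emb) for emb in self.chunk_embs]
        return self.chunks[scores.index(max(scores))]

index = CachedIndex(["alex loves bob", "bob loves costa", "sunny weather"])
best = index.query("who does alex love")
```

With three chunks, the build costs three embedding calls and each query costs exactly one more, regardless of corpus size; the remaining per-query time is then dominated by the LLM call itself.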
@jerryjliu0 @Logan M @hwchase17 What is the difference between declaring llm_predictor while defining the index, like this:
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)
versus while querying the index:
response = index.query(question, llm_predictor=llm_predictor)
1 comment
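If that API follows the usual default-versus-override pattern, passing llm_predictor at construction sets the index's default predictor for all queries, while passing it to query overrides it for that call only. That precedence can be sketched generically (the Index class and predictor names here are toy stand-ins, not the actual LlamaIndex classes):

```python
from typing import Optional

class Index:
    """Toy stand-in for an index that accepts a predictor at build time
    (as the default) or at query time (as a per-call override)."""
    def __init__(self, default_predictor: str = "builtin-default"):
        self.default_predictor = default_predictor

    def query(self, question: str, predictor: Optional[str] = None) -> str:
        # A per-call predictor, if given, takes precedence over the default.
        used = predictor if predictor is not None else self.default_predictor
        return f"answered {question!r} with {used}"

index = Index(default_predictor="gpt-3.5")
a = index.query("q1")                      # uses the construction-time default
b = index.query("q2", predictor="gpt-4")   # per-call override wins
```

So the two declarations are not redundant: one sets the standing default, the other changes behavior for a single query.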
Any idea why I am encountering this error? It was working fine in my local environment, but it appeared while deploying the server to Render.

ImportError: cannot import name 'Memory' from 'langchain.chains.base'
1 comment
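An ImportError that appears only on deploy usually means the deployment resolved a different version of the package than the local environment (names like this have moved between langchain releases), so pinning the locally working version in requirements.txt is the usual fix. A small sketch for logging installed versions in both environments to confirm the mismatch (the package names checked are just examples):

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version of a package, or None if it is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Run this at startup locally and on the server, then diff the output.
for pkg in ("langchain", "llama-index"):
    print(f"{pkg}: {installed_version(pkg)}")
```

Once the two environments report the same versions, the import should behave the same in both.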
Can I use JavaScript with LlamaIndex?
1 comment
How are you all using GPT Index with JavaScript? Are you all spawning a process to run a Python script from Node.js?
2 comments
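For those bridging via a spawned process, one common shape is a small Python CLI that takes the question as arguments and prints a single JSON line to stdout for the Node.js parent to parse. A sketch of the Python side, where `query_index` is a hypothetical stand-in for the real index call:

```python
import json
import sys

def query_index(question: str) -> str:
    """Hypothetical stand-in for the real index query
    (e.g. loading a saved index and calling its query method)."""
    return f"stub answer for: {question}"

def main(argv: list[str]) -> str:
    question = " ".join(argv) or "empty question"
    # Emit exactly one JSON line so the Node.js parent can JSON.parse(stdout).
    payload = json.dumps({"question": question, "answer": query_index(question)})
    print(payload)
    return payload

if __name__ == "__main__":
    main(sys.argv[1:])
```

The Node.js side would then spawn this script (e.g. with child_process.spawn), collect stdout, and parse the JSON; keeping the contract to one JSON line per invocation avoids fragile stdout scraping.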
Can someone help me with how to query an index in Pinecone using Python?
1 comment
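At its core, a Pinecone query is a nearest-neighbor search over stored vectors: you send a query vector and get back the top_k closest ids. The real client call needs the pinecone package, an API key, and an index name, none of which are shown here; a dependency-free sketch of the operation the service performs:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, the ranking metric many vector indexes use."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def query(store: dict[str, list[float]], vector: list[float], top_k: int = 2) -> list[str]:
    """Return the top_k stored ids ranked by similarity to the query vector --
    the search a vector database performs server-side."""
    ranked = sorted(store, key=lambda vid: cosine(store[vid], vector), reverse=True)
    return ranked[:top_k]

# Toy "index": id -> stored embedding vector.
store = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 0.0, 1.0],
}
matches = query(store, [1.0, 0.0, 0.0], top_k=2)
```

With the real client, the query vector would come from the same embedding model used at indexing time; mixing embedding models between indexing and querying is a common source of bad results.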