
AvishWagde
Joined September 25, 2024
Ohh okay, lemme check if it's faster that way... but also, creating the embeddings took too much time for me: for a 4MB file it took 8-9 hours, man!!
1 comment
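The replies to this post aren't preserved here, but slow embedding runs like this are usually dominated by one-document-at-a-time calls to the embedding API. A minimal sketch of the usual fix in LlamaIndex, assuming a post-0.10 package layout and an OpenAI embedding model; the batch size and data path are illustrative, not from the thread:

```python
# Rough sketch, not from this thread: batching embedding calls typically cuts
# indexing time a lot. embed_batch_size=100 and "./data" are assumptions.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding

# Larger embed_batch_size means fewer round trips to the embedding API.
Settings.embed_model = OpenAIEmbedding(embed_batch_size=100)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, show_progress=True)
```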
Does this help?
I think you can directly ask your questions on their documentation chatbot...
10 comments
@kapa.ai I have a persisted vector store and I'm loading it as StorageContext. How can I get/iterate all the documents from that store?
How can I get the embedding value from a Document?
How can I query a VectorStoreIndex using raw embedding vector instead of string?
3 comments
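The kapa.ai answers to the three questions above aren't shown in this listing. As a rough sketch of how those operations commonly look in LlamaIndex (iterating the persisted docstore, computing an embedding for a node's text, and retrieving with a raw embedding vector), assuming a default simple vector store persisted to "./storage" and a post-0.10 API; version differences are possible:

```python
# Rough sketch, not from the thread. Assumes the index was persisted with
# index.storage_context.persist("./storage") using the default simple stores.
from llama_index.core import (
    QueryBundle,
    Settings,
    StorageContext,
    load_index_from_storage,
)

storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# 1) Iterate all nodes/documents held in the docstore.
for node_id, node in index.docstore.docs.items():
    print(node_id, node.get_content()[:80])

# 2) Compute the embedding for a node's text with the active embed model.
some_node = next(iter(index.docstore.docs.values()))
embedding = Settings.embed_model.get_text_embedding(some_node.get_content())

# 3) Retrieve using a raw embedding vector instead of a query string.
retriever = index.as_retriever(similarity_top_k=3)
results = retriever.retrieve(QueryBundle(query_str="", embedding=embedding))
```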
@kapa.ai any help?
2 comments
I'm getting the response "ERROR: The prompt size exceeds the context window size and cannot be processed." while querying a document with an LLM QA bot. How do I solve this?
https://stackoverflow.com/questions/76873456/error-the-prompt-size-exceeds-the-context-window-size-and-cannot-be-processed
3 comments
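The error means the retrieved context plus the prompt is longer than the model's context window. A minimal sketch of the usual mitigations, assuming a LlamaIndex pipeline like the rest of this thread; the chunk size, overlap, and top-k values are illustrative:

```python
# Rough sketch, not from the thread: keep the assembled prompt under the
# model's context window by using smaller chunks and retrieving fewer of them.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex

# Smaller chunks make each retrieved piece of context shorter.
Settings.chunk_size = 512
Settings.chunk_overlap = 50

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieve fewer chunks and let the synthesizer pack them compactly.
query_engine = index.as_query_engine(
    similarity_top_k=2,
    response_mode="compact",
)
print(query_engine.query("What does the document say about X?"))
```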