wrapdepollo
Joined September 25, 2024
Hi guys,
I'm trying to load an index I previously created using a local model (zephyr); however, when calling load_index_from_storage I get an error asking for an OpenAI API key. I saw in the storing documentation (https://docs.llamaindex.ai/en/stable/understanding/storing/storing.html#persisting-to-disk) that I need to pass my custom service_context during the load, but I'm unsure how to do it (it doesn't appear to be accepted as an argument). Any help would be appreciated, thanks!
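One pattern that reportedly works in pre-0.10 LlamaIndex versions (where ServiceContext still exists) is to pass service_context as a keyword argument to load_index_from_storage; extra kwargs are forwarded to the index constructor, so it works even though service_context doesn't show up in the function signature. A sketch under those assumptions, where `my_zephyr_llm` and `my_embed_model` are placeholders for your locally configured models and `./storage` is the persist directory:

```python
# Sketch: reloading a persisted index with a local LLM instead of OpenAI.
# Assumes a pre-0.10 LlamaIndex API (ServiceContext); `my_zephyr_llm` and
# `my_embed_model` are placeholders, not real objects defined here.
from llama_index import ServiceContext, StorageContext, load_index_from_storage

service_context = ServiceContext.from_defaults(
    llm=my_zephyr_llm,           # placeholder: your local zephyr LLM
    embed_model=my_embed_model,  # placeholder: the embed model used at build time
)
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# Extra kwargs (including service_context) are forwarded to the index
# constructor, so no OpenAI key should be requested.
index = load_index_from_storage(storage_context, service_context=service_context)
```

In 0.10+ releases ServiceContext was removed; there you would configure `Settings.llm` and `Settings.embed_model` globally before loading instead.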
3 comments
Hi guys, could somebody enlighten me about RAG context size? If models have a fixed context length, does the additional context we retrieve have to fit into that context window, or does it work differently? I guess that would affect the number and size of the chunks retrieved. Thanks!
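Yes, it all shares one window: the prompt template, the question, the retrieved chunks, and the generated answer must together fit in the model's context length, which is exactly why chunk size and top-k are tuned together. A back-of-the-envelope budget (all numbers are illustrative assumptions, not any model's spec):

```python
# Rough token budget for RAG: everything shares one context window.
# All numbers are illustrative assumptions, not real model specs.
context_window = 4096    # model's total context length (tokens)
prompt_overhead = 300    # system prompt + template + the question itself
answer_budget = 512      # tokens reserved for the generated answer

room_for_chunks = context_window - prompt_overhead - answer_budget
chunk_size = 512         # tokens per retrieved chunk

max_chunks = room_for_chunks // chunk_size  # top-k can't usefully exceed this
print(room_for_chunks, max_chunks)  # 3284 6
```

So with these toy numbers, retrieving more than 6 chunks of 512 tokens would overflow the window (frameworks typically truncate or refuse in that case).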
3 comments
Hi guys, I was just reading the Document Management page (https://docs.llamaindex.ai/en/stable/module_guides/indexing/document_management.html) and the linked notebook can't be accessed (https://github.com/run-llama/llama_index/blob/main/examples/paul_graham_essay/InsertDemo.ipynb) unless the branch is changed (for example, https://github.com/run-llama/llama_index/blob/8147-bug-streaming-on-react-chat-agent-not-working-as-expected/examples/paul_graham_essay/InsertDemo.ipynb). Just wanted to mention it in case this was not intentional, or in case anyone needs access to the notebook.
2 comments
Hi guys, I added a SimilarityPostprocessor to my query engine. It works fine, but whenever every retrieved node's similarity score falls below the threshold, the answer is "Empty Response". Changing the prompt does not seem to work; is there any option to change this behaviour? Thanks!
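"Empty Response" appears when the postprocessor filters out every node, so the synthesizer has nothing to answer from, and the prompt template never runs. One common workaround is to retrieve and filter first yourself, then substitute your own fallback message when nothing survives the cutoff. This is a pure-Python sketch of that logic (the scores, cutoff, and fallback text are illustrative, not a built-in LlamaIndex option):

```python
# Sketch: post-filter retrieved nodes by similarity and return a custom
# fallback answer when nothing survives the cutoff. Pure-Python stand-in
# for SimilarityPostprocessor behaviour; all values are illustrative.
FALLBACK = "Sorry, I couldn't find anything relevant in the documents."

def filter_by_similarity(scored_nodes, cutoff):
    """Keep only (text, score) pairs at or above the similarity cutoff."""
    return [(text, score) for text, score in scored_nodes if score >= cutoff]

def answer_or_fallback(scored_nodes, cutoff):
    kept = filter_by_similarity(scored_nodes, cutoff)
    if not kept:
        return FALLBACK  # instead of the framework's "Empty Response"
    return f"Answering from {len(kept)} chunk(s)"

nodes = [("chunk a", 0.42), ("chunk b", 0.55)]
print(answer_or_fallback(nodes, cutoff=0.7))  # both below 0.7 -> fallback
print(answer_or_fallback(nodes, cutoff=0.5))  # Answering from 1 chunk(s)
```

In LlamaIndex terms: call the retriever directly, run the postprocessor on its nodes, and only hand non-empty results to the response synthesizer.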
4 comments
Hi guys, small knowledge question: if I'm creating a vector store index from my documents with a service context, does the LLM matter? As in, I'm currently creating 2 vector indexes (I'm testing Mistral against Llama), but I've noticed that the retrieved context seems to be the same.
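That's expected: which chunks get retrieved depends only on the embedding model (and the chunking), while the LLM only sees the already-retrieved context at synthesis time. A toy illustration of why two indexes built with the same embed model retrieve identically regardless of which LLM answers later (vectors are made up):

```python
# Toy illustration: retrieval ranks chunks purely by embedding similarity;
# the LLM plays no part in which chunks are selected.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, top_k=2):
    """Return indices of the top_k most similar document vectors."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:top_k]

docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # made-up chunk embeddings
query = [1.0, 0.1]                           # made-up query embedding

# Same embeddings -> same retrieved ids, no matter which LLM answers later.
hits_for_mistral = retrieve(query, docs)
hits_for_llama = retrieve(query, docs)
print(hits_for_mistral == hits_for_llama)  # True
```

So to see the indexes diverge you'd need to swap the embed model, not the LLM; the LLM choice only changes how the retrieved context is turned into an answer.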
3 comments
Hi everyone!
Wanted to see if anyone could give me a hand with an error. I'm trying to compare 2 embedding models (so the model/embedding combination): BGE-base and a more context-specific Hugging Face model. Everything works fine when creating the vector index with BGE, but with the same params the other model crashes (RuntimeError: CUDA error: device-side assert triggered). I'm guessing it's probably a shape mismatch, but I'm not quite sure where to check. Thanks for any feedback!
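With Hugging Face models, a device-side assert is very often a token id that falls outside the model's embedding table (e.g. tokenizer and model checkpoints that don't match) or an input longer than the model's position embeddings, rather than a shape mismatch further downstream; rerunning with `CUDA_LAUNCH_BLOCKING=1` (or on CPU) usually surfaces the real stack trace. A pure-Python sketch of the sanity check (the vocab size and max length here are illustrative; with a real model compare against `model.config.vocab_size` and `model.config.max_position_embeddings`):

```python
# Sketch of the sanity check behind many "device-side assert" crashes:
# every token id must fit inside the model's embedding table, and the
# sequence must fit inside its position embeddings. Numbers illustrative.
def check_inputs(token_ids, vocab_size, max_positions):
    """Return a list of problems; empty list means the inputs look safe."""
    problems = []
    bad = [t for t in token_ids if not 0 <= t < vocab_size]
    if bad:
        problems.append(f"token ids out of range: {bad}")
    if len(token_ids) > max_positions:
        problems.append(f"sequence length {len(token_ids)} > {max_positions}")
    return problems

print(check_inputs([5, 30521], vocab_size=30522, max_positions=512))  # []
print(check_inputs([5, 30522], vocab_size=30522, max_positions=512))  # out of range
```

If the context-specific model has a smaller vocab or a shorter max length than BGE-base, the same tokenized inputs that work for BGE can trip this check.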
2 comments
Hi everyone! I've been recently diving into AI/RAG, particularly using LlamaIndex (Love the project, amazing work!) and open source models. I have 2 questions that might come out of my ignorance, but would be great if anyone could answer:
  1. Why do we use top k chunks in similarity instead of setting a similarity threshold?
  2. Is there a "recommended" maximum size/number of documents for ingestion? As in, after X amount, might the model not perform as well as expected?
Thanks!
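On question 1: both approaches exist, and they can even be combined (top-k in the retriever, then a similarity cutoff as a postprocessor). Top-k is the usual default because raw similarity scores aren't calibrated across queries or embedding models, so a fixed threshold can return everything for one query and nothing for the next. A toy comparison (all scores made up):

```python
# Toy comparison of top-k selection vs a fixed similarity threshold.
# Scores are illustrative; real cosine scores vary per query and embed model.
scores = {"a": 0.81, "b": 0.78, "c": 0.40, "d": 0.35}

def top_k(scores, k):
    """Always returns exactly k docs, regardless of score scale."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def above(scores, threshold):
    """Returns anywhere between 0 and all docs, depending on the scale."""
    return [doc for doc, s in scores.items() if s >= threshold]

print(top_k(scores, 2))    # ['a', 'b'] -- predictable context size
print(above(scores, 0.5))  # ['a', 'b'] here...
print(above(scores, 0.9))  # ...but a strict threshold can return nothing: []
```

Top-k gives a predictable context budget; a threshold expresses a quality bar but needs per-model tuning. On question 2: there's no hard document limit for ingestion itself, but answer quality hinges on whether the retriever can still surface the right chunks as the corpus grows, since the model only ever sees the top-k retrieved chunks, not the whole corpus.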
2 comments