Jatin.K
Joined September 25, 2024
I am using this notebook (https://docs.llamaindex.ai/en/stable/examples/embeddings/huggingface/) to convert an HF embed model into ONNX. The produced model works fine in the notebook, but when I download the files to my local machine and try to run it, it throws the error below: "OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted."
3 comments
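One thing worth checking (a sketch based on the linked Optimum/ONNX notebook, with a hypothetical output folder): that error usually points to missing or incomplete tokenizer/vocab files, so the whole exported directory needs to be copied to the local machine and passed as folder_name.

```python
from llama_index.embeddings.huggingface_optimum import OptimumEmbedding

# Export the HF model to ONNX once (as in the notebook); "./bge_onnx" is a
# hypothetical output folder that must be copied to the local machine in full,
# including the tokenizer/vocab files alongside the .onnx weights.
OptimumEmbedding.create_and_save_optimum_model(
    "BAAI/bge-small-en-v1.5", "./bge_onnx"
)

# On the local machine, point folder_name at that copied directory.
embed_model = OptimumEmbedding(folder_name="./bge_onnx")
print(embed_model.get_text_embedding("hello world")[:5])
```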
I have around 1,000 PDF documents (slides, scientific publications, etc.) and I want to create a summary of each document. As I understand it, I need to use SummaryIndex(https://docs.llamaindex.ai/en/stable/api_reference/indices/summary/) and not DocumentSummaryIndex(https://docs.llamaindex.ai/en/stable/examples/index_structs/doc_summary/DocSummary/). Can someone confirm? Also, any tips before I set up the pipeline?
12 comments
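For reference, a minimal sketch of the DocumentSummaryIndex route from the linked example (not a confirmation of which index fits your case): it is the one that generates and stores one summary per source document, retrievable by doc id. The folder path and default model settings are assumptions.

```python
from llama_index.core import DocumentSummaryIndex, SimpleDirectoryReader

# Load the PDFs (hypothetical folder). Note that each loaded Document (often one
# per PDF page) gets its own summary, so you may want to merge pages per file first.
documents = SimpleDirectoryReader("./pdfs").load_data()
index = DocumentSummaryIndex.from_documents(documents, show_progress=True)

# The stored summary for a given document can be fetched by its doc id.
print(index.get_document_summary(documents[0].doc_id)[:300])
```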
I am replacing the default ChatGPT LLM with an Anthropic LLM in this tutorial(https://docs.llamaindex.ai/en/stable/examples/index_structs/doc_summary/DocSummary/), but it is still asking me for an OpenAI API key. Is that for the embeddings? If so, how do I make the tutorial completely Anthropic-based?
5 comments
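Most likely the OpenAI key is being requested for the default embedding model rather than the LLM. A sketch of swapping both global defaults (the Anthropic model name and the HF embedding model below are assumptions):

```python
from llama_index.core import Settings
from llama_index.llms.anthropic import Anthropic
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Replace both global defaults so nothing falls back to OpenAI.
Settings.llm = Anthropic(model="claude-3-5-sonnet-20240620")
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```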
Is there any way to show progress/timing stats in SimpleDirectoryReader? For example, how long do I need to wait for 1,000 PDFs in a folder to be ingested?
3 comments
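A small sketch of one option, assuming a recent LlamaIndex version where load_data accepts show_progress (and optionally num_workers):

```python
from llama_index.core import SimpleDirectoryReader

reader = SimpleDirectoryReader("./pdfs")  # hypothetical folder

# show_progress displays a per-file progress bar while the PDFs are parsed;
# num_workers parallelises parsing across processes.
documents = reader.load_data(show_progress=True, num_workers=4)
```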
I am trying to replicate the faithfulness eval(https://docs.llamaindex.ai/en/stable/examples/evaluation/faithfulness_eval/) on my RAG bot, but it sometimes takes 15 minutes, 30 minutes, an hour, etc. Is there any way to debug it? Thanks.
10 comments
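One way to see where the time goes (a sketch using the callback-based debug handler; whether it explains the swings will depend on your LLM backend and how many evaluation calls are made):

```python
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# Print a trace of every LLM/embedding call with timings when a run finishes,
# which helps show whether the evaluator is stuck on slow LLM responses.
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
Settings.callback_manager = CallbackManager([llama_debug])
```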
The example (https://docs.llamaindex.ai/en/stable/examples/evaluation/faithfulness_eval/) doesn't give the snippet to download the NYC Wikipedia page. Is there another example I can borrow that snippet from? Thanks.
1 comment
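Roughly the snippet used in several other LlamaIndex examples, which pulls the page text from the Wikipedia API (the output file path is an assumption):

```python
from pathlib import Path
import requests

# Fetch the plain-text extract of the "New York City" article.
response = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "format": "json",
        "titles": "New York City",
        "prop": "extracts",
        "explaintext": True,
    },
).json()
page = next(iter(response["query"]["pages"].values()))

Path("data").mkdir(exist_ok=True)
Path("data/nyc_text.txt").write_text(page["extract"])
```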
Are any of these module-load warnings fatal, or can I just ignore them?
1 comment
I am using a locally persisted DuckDB vector store(https://docs.llamaindex.ai/en/stable/examples/vector_stores/DuckDBDemo/) for my RAG app and I am observing two strange behaviors: 1) If I add a 74 KB file to a persisted DuckDB file, the updated DuckDB file grows by 5 MB; in another instance, adding just 3 more PDFs of 5 MB each grew an existing 950 MB DuckDB file by 300 MB. 2) If I delete the newly added docs, the edited DuckDB file does not go back to its original size; in fact, it stays the same as it was before the deletion. I'm using the .add and .delete methods to add(nodes) and delete(doc_ids). Can anybody provide a hint? Thanks.
1 comment
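Not a confirmed explanation, but DuckDB generally does not shrink the database file when rows are deleted; freed blocks are reused for later inserts instead. A heavily hedged sketch of compacting by copying into a fresh file (paths are hypothetical, and this assumes a DuckDB version that supports COPY FROM DATABASE):

```python
import duckdb

# Copy the persisted store into a new file; the copy contains only live data,
# so it ends up smaller if the original has dead space left by deletes.
con = duckdb.connect("vectors_compacted.duckdb")
con.execute("ATTACH 'vectors.duckdb' AS old_db (READ_ONLY)")
con.execute("COPY FROM DATABASE old_db TO vectors_compacted")
con.close()
```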
Does anyone know about this warning in the FaithfulnessEvaluator?
3 comments
How can I get the retrieved nodes for a given query in the chat engine(https://docs.llamaindex.ai/en/stable/examples/chat_engine/chat_engine_context/)? Any hint on the steps involved is appreciated.
3 comments
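A sketch, assuming a context chat engine built as in the linked example (data folder and query are placeholders): the chat response object carries the retrieved nodes in source_nodes.

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Build a context chat engine roughly as in the linked example.
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("./data").load_data())
chat_engine = index.as_chat_engine(chat_mode="context")

response = chat_engine.chat("What did the author do growing up?")

# The nodes retrieved for this turn are attached to the response.
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])
```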
The DuckDB vector store(https://docs.llamaindex.ai/en/stable/examples/vector_stores/DuckDBDemo/) I created only lets me use .delete and not other methods like .add or .upsert.
28 comments
Jatin.K · Chat
I am building a RAG chatbot. What's the difference between the two below in this tutorial(https://docs.llamaindex.ai/en/stable/examples/cookbooks/llama3_cookbook_ollama_replicate/)? Don't they both serve the same purpose?
2 comments
Jatin.K · Tokenizer
I get an error when changing the global tokenizer to match a GGUF LLM. The Hugging Face repo has only GGUF files and no config.json.
5 comments
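One workaround sketch: since the GGUF repo ships no config.json or tokenizer files, load the tokenizer from the original (non-quantized) model repo and register it globally. The repo id below is a placeholder assumption.

```python
from llama_index.core import set_global_tokenizer
from transformers import AutoTokenizer

# The GGUF repo has no tokenizer files, so pull the tokenizer from the base
# (non-GGUF) repo the quantization was made from -- placeholder id below.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
set_global_tokenizer(tokenizer.encode)
```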
Every tutorial about HuggingFaceLLM has two parts: download the LLM and then run inference with it. What if I am building a product to be used offline and I only want to do the second part? I need to ship the LLM with the software package. How do I connect this LLM to HuggingFaceLLM?
6 comments
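A sketch of the second part on its own, assuming the model files are bundled inside the package and HuggingFaceLLM is pointed at that local directory (the path is hypothetical); transformers' from_pretrained accepts local paths, so no download step is needed.

```python
from llama_index.llms.huggingface import HuggingFaceLLM

# Point both the model and tokenizer at the directory shipped with the product,
# so nothing is fetched from the Hugging Face Hub at runtime.
local_dir = "/opt/myapp/models/llama-3-8b-instruct"  # hypothetical bundled path
llm = HuggingFaceLLM(
    model_name=local_dir,
    tokenizer_name=local_dir,
    device_map="auto",
)
print(llm.complete("Hello").text)
```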
Why can't I find any example of using Llama 3.1 without Ollama?
3 comments
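One Ollama-free route, sketched with the Hugging Face Inference API integration (the model id and access assumptions below may not match your setup; the gated Llama 3.1 weights require an approved token):

```python
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

# Requires a Hugging Face token with access to the gated Llama 3.1 weights.
llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
    token="hf_...",  # placeholder for your Hugging Face token
)
print(llm.complete("What is LlamaIndex?").text)
```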
The following runs fine on a Google Colab hosted runtime, but when I run the same notebook on a locally hosted runtime, it throws the error below: "RuntimeError: Failed to import optimum.onnxruntime.modeling_ort because of the following error (look up to see its traceback):
cannot import name 'OfflineModeIsEnabled' from 'huggingface_hub.errors' (/home/jatink/.local/lib/python3.10/site-packages/huggingface_hub/errors.py)"
2 comments
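This usually points to the local huggingface_hub being older than what the installed optimum expects (OfflineModeIsEnabled only exists in newer huggingface_hub releases); that is an assumption, but comparing versions against the working Colab runtime is a quick check, and upgrading huggingface_hub locally is the usual fix.

```python
from importlib.metadata import version

# Compare these against the versions in the working Colab runtime; a stale
# huggingface_hub under ~/.local/lib is the likely mismatch here.
for pkg in ("huggingface_hub", "optimum", "onnxruntime", "transformers"):
    print(pkg, version(pkg))
```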
Jatin.K · Cassandra
Is anyone getting the following error on this tutorial (same error on Colab GPU and local runtime)? https://docs.llamaindex.ai/en/stable/examples/vector_stores/CassandraIndexDemo/
1 comment