Hi!
Is it possible to run llama_index with only local components on a machine without internet access? I got this error on first import.
llama_index==0.7.20
If you are running without internet access, you'll need to have an LLM model and embedding model already downloaded to your machine somehow
so that's step #1 😅
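A minimal sketch of that first step, assuming llama_index 0.7.x with models pre-downloaded to local paths. The /models/... paths and model names are placeholders, and the exact imports shifted across 0.7.x releases, so check your version's docs:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext
from llama_index.llms import HuggingFaceLLM

# Both paths are placeholders: the models must already be on disk,
# downloaded ahead of time on a machine with internet access.
local_llm = HuggingFaceLLM(
    model_name="/models/my-local-llm",
    tokenizer_name="/models/my-local-llm",
)
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="/models/all-MiniLM-L6-v2")
)

# Pass this service_context to your indexes so no OpenAI defaults kick in.
service_context = ServiceContext.from_defaults(
    llm=local_llm,
    embed_model=embed_model,
)
```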
I have it. This error appears even before I create any classes from llama_index.
It appears right on import llama_index.
Because something in the __init__.py file wants to access the OpenAI API.
It appeared after I upgraded from version 0.6.12 to 0.7.20
It's because tiktoken (the tokenizer used under the hood to count tokens, etc.) tries to download its encoding files. You might have to look up how to pre-download and cache this tokenizer
But I don't want to use it. I have another tokenizer
Maybe you know how to do it? And where should I save it so llama-index can pick it up?
Yea it's pretty baked into the codebase for counting tokens during chunking and other parts of llama-index. Kind of annoying, but it's tech-debt right now
Yea I have no idea lol, I'd have to read their source code probably. A quick Google search didn't turn up much for the location
And it looks for a file in the cache dir named with a SHA-1 hash of the URL? lol weird
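A sketch of pre-caching based on that mechanism: tiktoken reads the TIKTOKEN_CACHE_DIR environment variable and stores each encoding file under the SHA-1 hex digest of its download URL. The cache path below is a placeholder; run this once on a machine with internet access, then copy the cache directory to the offline box and set the same variable there before importing llama_index:

```python
import hashlib
import os

# Must be set before tiktoken loads an encoding; placeholder path.
os.environ["TIKTOKEN_CACHE_DIR"] = "/path/to/tiktoken_cache"

import tiktoken

# Downloads the encoding files and writes them into the cache dir.
# Check which encoding your llama_index version actually requests;
# "gpt2" was the default for token counting around 0.7.x.
tiktoken.get_encoding("gpt2")

# Cache filenames are the SHA-1 hash of the source URL, e.g.:
url = "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"
print(hashlib.sha1(url.encode()).hexdigest())  # name of the cached file
```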
Oh! Thank you!!! I'll look into that 😒
I think those links should get you where you need to go! 🙏