anonymous_lambda
Joined September 25, 2024
Hello, is there a reason a Llama LLM is loaded up during index creation?

INFO:sentence_transformers.SentenceTransformer:Load pretrained SentenceTransformer: /home/ubuntu/e5-base-v2
INFO:sentence_transformers.SentenceTransformer:Use pytorch device: cpu
**
Could not load OpenAI model. Using default LlamaCPP=llama2-13b-chat. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

**
llama_model_loader: loaded meta data with 19 key-value pairs and 363 tensors from /tmp/llama_index/models/llama-2-13b-chat.Q4_0.gguf (version GGUF V2 (latest))
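For context, the warning in the log explains the behavior: llama_index resolves an LLM lazily, and when nothing is configured and no OPENAI_API_KEY is set, it falls back to downloading and loading a local LlamaCPP llama-2-13b-chat model, even if you only wanted the index built with a local embedding model. A simplified sketch of that fallback logic in plain Python (this is not the library's actual code; the return values are illustrative placeholders):

```python
import os

def resolve_llm(llm=None):
    """Illustrative stand-in for llama_index's LLM resolution.

    If an LLM is supplied, use it; otherwise try OpenAI, and fall
    back to a local LlamaCPP model when no API key is configured.
    """
    if llm is not None:
        return llm
    if os.environ.get("OPENAI_API_KEY"):
        # With a key present, an OpenAI LLM would be constructed here.
        return "OpenAI"
    # No key: this branch is what triggers loading
    # llama-2-13b-chat.Q4_0.gguf during index creation.
    return "LlamaCPP(llama2-13b-chat)"
```

So the practical fix is to either set OPENAI_API_KEY, or pass an explicit LLM (or explicitly disable the LLM, depending on your llama_index version) when building the index, so the fallback never fires.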
5 comments
Hello, is there a way to persist the index that the BM25Retriever builds to disk? Thank you.
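One common workaround (assuming BM25Retriever had no built-in persist method at the time, which matched the library): persist the nodes the retriever was built from, and rebuild the retriever on load, since the BM25 statistics are cheap to recompute. A minimal round-trip sketch in plain Python, using a simplified dict shape as a stand-in for llama_index's serialized TextNodes:

```python
import json
from pathlib import Path

def save_nodes(nodes, path):
    """Persist the corpus the retriever was built from (simplified nodes)."""
    Path(path).write_text(json.dumps(nodes))

def load_nodes(path):
    """Read the nodes back so the retriever can be rebuilt."""
    return json.loads(Path(path).read_text())

# On load, rebuild the retriever from the restored nodes, e.g. something like:
#   retriever = BM25Retriever.from_defaults(nodes=nodes)
# (The dict shape above is an illustrative stand-in; in real code you would
# serialize/deserialize actual TextNode objects, e.g. via a persisted docstore.)
```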
2 comments
Hello, is there an updated example on how to implement a custom embedding model? Looks like there are a few new abstract methods implemented in BaseEmbedding that aren't accounted for in the example (https://gpt-index.readthedocs.io/en/latest/examples/embeddings/custom_embeddings.html). Thanks!
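Until the docs catch up, the usual shape is to implement the sync and async query/text embedding methods that BaseEmbedding declares abstract. The sketch below uses a stand-in abstract base class so it runs standalone; in real code you would subclass llama_index's BaseEmbedding instead, and the method names shown (`_get_query_embedding`, `_aget_query_embedding`, `_get_text_embedding`) are the ones the library expected around this time, so check them against your installed version:

```python
from abc import ABC, abstractmethod
from typing import List

class BaseEmbedding(ABC):
    """Stand-in for llama_index's BaseEmbedding so this sketch runs standalone."""

    @abstractmethod
    def _get_query_embedding(self, query: str) -> List[float]: ...

    @abstractmethod
    async def _aget_query_embedding(self, query: str) -> List[float]: ...

    @abstractmethod
    def _get_text_embedding(self, text: str) -> List[float]: ...

class MyEmbedding(BaseEmbedding):
    """Toy embedding: character-count features instead of a real model."""

    def _get_query_embedding(self, query: str) -> List[float]:
        return [float(len(query)), float(query.count(" "))]

    async def _aget_query_embedding(self, query: str) -> List[float]:
        # Async variant; a real model might await an HTTP call here.
        return self._get_query_embedding(query)

    def _get_text_embedding(self, text: str) -> List[float]:
        return [float(len(text)), float(text.count(" "))]
```

A subclass that skips any of the abstract methods will raise a TypeError at instantiation, which is the symptom the older custom-embeddings example produces against newer library versions.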
2 comments