Just install the integrations you want and use them. For example, I would personally use Ollama for the LLM and Hugging Face for embeddings:
```shell
pip install llama-index-embeddings-huggingface llama-index-llms-ollama
```
Install Ollama, start the server (`ollama serve`), and pull the model you want to use (`ollama pull <model>`).
Then:
```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Route all LLM calls through the local Ollama server
Settings.llm = Ollama(model="<model>", request_timeout=3000.0)
# Embed locally with a Hugging Face model (note the class name: HuggingFaceEmbedding)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5", embed_batch_size=2)
```
The rest of your code follows from there.