
How to use Ollama for Embeddings?

At a glance

The post asks "How to use Ollama for Embeddings?". The comments provide a code snippet that demonstrates how to install the llama-index-embeddings-ollama package and create an OllamaEmbedding object with the "mistral" model and a request timeout of 60 seconds. There is no explicitly marked answer in the comments.

How to use Ollama for Embeddings?
1 comment
pip install llama-index-embeddings-ollama

from llama_index.embeddings.ollama import OllamaEmbedding
# The model is passed via `model_name`; a running local Ollama server with
# the "mistral" model already pulled is assumed.
embed_model = OllamaEmbedding(model_name="mistral", request_timeout=60.0)
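Once an embedding model is set up, each text is turned into a vector of floats (in llama-index, typically via `embed_model.get_text_embedding(...)`), and two such vectors are usually compared with cosine similarity. A minimal, dependency-free sketch of that comparison — the vectors below are made-up toy values, not real Mistral embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embedding output.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]

print(round(cosine_similarity(v1, v2), 3))  # identical vectors -> 1.0
```

Values near 1.0 indicate semantically similar texts; values near 0 indicate unrelated ones.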