
At a glance

The community member asks whether they can skip OpenAI entirely and use Google's gemini-pro LLM and embedding model for everything. Another community member confirms this is possible and explains how to set up the service context with the desired LLM and embedding model, linking to documentation on supported embedding models and LLMs. The original poster thanks them for the helpful response.

Hello, I've been trying to find an answer in the docs, but I'm not very well versed in this stuff yet. Would I be able to skip OpenAI altogether and use Google's gemini-pro/embedding LLM for everything?
Yes you can -- we have many integrations for LLMs and embedding models

You mostly just need to set them in your service context:

Plain Text
from llama_index import ServiceContext, set_global_service_context

# llm and embed_model are whichever LLM/embedding integrations you've constructed
set_global_service_context(ServiceContext.from_defaults(llm=llm, embed_model=embed_model))


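For the Gemini case specifically, here's a minimal sketch, assuming the pre-0.10 llama_index package layout used in this thread, the google-generativeai client installed, and a GOOGLE_API_KEY set in your environment (both classes fall back to the library's default Gemini model names):

Plain Text
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import Gemini
from llama_index.embeddings import GeminiEmbedding

# With no arguments, these default to Google's gemini-pro chat model and
# embedding model, and read GOOGLE_API_KEY from the environment
llm = Gemini()
embed_model = GeminiEmbedding()

set_global_service_context(ServiceContext.from_defaults(llm=llm, embed_model=embed_model))
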
That covers 90% of cases, but there are a few stray components that take an llm as a kwarg directly, depending on what you are doing

Embedding models:
https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#list-of-supported-embeddings

LLMs:
https://docs.llamaindex.ai/en/stable/module_guides/models/llms/modules.html
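
To sanity-check the setup end to end, here's a quick usage sketch (assuming a local ./data folder with some documents): once the global service context is set, indexing and querying pick up your models automatically:

Plain Text
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Embeddings come from embed_model; the query response comes from llm
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What are these documents about?"))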
Perfect, thank you so much!