The community member asks whether they can skip OpenAI entirely and use Google's gemini-pro LLM and embedding model for everything. Another community member responds that yes, this is possible, and explains how to set up the service context to use the desired LLM and embedding model, including links to the documentation on supported embedding models and LLMs. The original poster then thanks them for the helpful response.
Hello, I've been trying to find an answer in the docs, but I'm not very well versed in this stuff yet. Would I be able to skip OpenAI altogether and use Google's gemini-pro/embedding LLM for everything?
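For reference, the reply described overriding the service context so that neither the LLM nor the embedding model defaults to OpenAI. Below is a minimal sketch of that idea, assuming a LlamaIndex install where `ServiceContext`, `Gemini`, and `GeminiEmbedding` are available under these import paths (package layout and parameter names vary by version, so check the linked docs for yours):

```python
# Sketch: point both the LLM and the embedding model at Google instead of OpenAI.
# Assumes LlamaIndex ~0.9-style imports and that GOOGLE_API_KEY is set in the environment.
from llama_index import ServiceContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import Gemini
from llama_index.embeddings import GeminiEmbedding

# Google-hosted LLM and embedding model (model identifiers are illustrative).
llm = Gemini(model="models/gemini-pro")
embed_model = GeminiEmbedding(model_name="models/embedding-001")

# Once both defaults are overridden here, no OpenAI key is required.
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
)

# Pass the service context when building and querying the index.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
print(index.as_query_engine().query("What is this document about?"))
```

Newer LlamaIndex releases replace `ServiceContext` with a global `Settings` object (`Settings.llm` / `Settings.embed_model`), but the idea is the same: swap both components, not just the LLM, or indexing will still try to call OpenAI for embeddings.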