The community member is trying to implement RAG (Retrieval-Augmented Generation) using the Gemini-Pro model, which has a free API. They have already implemented RAG with OpenAI, storing the embeddings in Elasticsearch, but they cannot find documentation for adding Gemini-Pro as an LLM/embedding model. The comments suggest using Gemini-Pro via Vertex AI or via the Gemini LLM class, but there is no explicitly marked answer.
I'm trying to implement RAG. I have already implemented it using OpenAI: the embeddings are stored in Elasticsearch, and OpenAI handles retrieval and output. Now I want to try the same with the Gemini-Pro model. How can I do that? I can only see customizations with paid LLM APIs. Gemini-Pro has a free API, and I am unable to find documentation for adding Gemini-Pro as the LLM model/embedding. Can you please help?
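The "Gemini LLM class" route mentioned in the comments can be sketched roughly as follows. This is a minimal, unverified sketch assuming the LlamaIndex Gemini integrations (`llama-index-llms-gemini`, `llama-index-embeddings-gemini`), the Elasticsearch vector store integration, a running Elasticsearch instance, and a `GOOGLE_API_KEY` in the environment; the index name, data directory, and Elasticsearch URL are placeholders, not values from the thread:

```python
from llama_index.core import (
    Settings,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
)
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.llms.gemini import Gemini
from llama_index.vector_stores.elasticsearch import ElasticsearchStore

# Point LlamaIndex at Gemini for both generation and embeddings
# (replaces the OpenAI defaults used in the existing setup).
# Assumes GOOGLE_API_KEY is set in the environment.
Settings.llm = Gemini(model="models/gemini-pro")
Settings.embed_model = GeminiEmbedding(model_name="models/embedding-001")

# Reuse Elasticsearch as the vector store, as in the OpenAI version.
# index_name and es_url are placeholders for your own deployment.
vector_store = ElasticsearchStore(
    index_name="my-rag-index",
    es_url="http://localhost:9200",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Ingest documents, embed them with Gemini, and store in Elasticsearch.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)

# Retrieval + generation now both go through Gemini.
query_engine = index.as_query_engine()
response = query_engine.query("What do these documents say about X?")
print(response)
```

Note that if the existing Elasticsearch index was populated with OpenAI embeddings, it would need to be re-indexed with the Gemini embedding model, since the two models produce incompatible vector spaces (different dimensionality and geometry).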