A community member is building a RAG workflow and is having trouble integrating Grok into their LLM reranking step. They are using the OpenAI SDK format, which Grok's documentation says is compatible, but they can't get it to run; the same issue occurs in the synthesize() step. The thread also discusses whether Grok can be used for indexing, and it's clarified that LLMs do not index — the embedding model does — but that anywhere a component takes an LLM parameter, any LLM, including Grok, can likely be used.
Hi all, I am working on my RAG workflow and need some guidance. I want to integrate Grok into my LLM reranking step, but it's not working for me. Grok's documentation says it's compatible with the OpenAI SDK, so I'm assuming this is the correct format, but I still can't get it to run:

    ranker = LLMRerank(
        choice_batch_size=5,
        top_n=5,
        llm=OpenAI(
            model="grok-2-latest",
            api_key=XAI_API_KEY,
            api_base="https://api.x.ai/v1",
        ),
    )

I'm also hitting the same issue when I use this LLM in the synthesize() step. Any guidance would be appreciated!
I'm using llm = OpenAI(model="gpt-3.5-turbo-1106", temperature=0.1) in my indexing pipeline to create the chunks. Is there a way I can replace this with a Grok model?
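To echo the clarification above: the LLM doesn't build the index. A text splitter creates the chunks and an embedding model embeds them, so the `llm=` argument in an indexing pipeline only matters for steps that actually call an LLM (metadata extraction, reranking, synthesis). Swapping in a Grok model wouldn't change how chunks are created; whether an xAI model can also handle embedding depends on whether the provider exposes an embeddings endpoint. For illustration, chunking itself is just deterministic slicing — a toy word-based splitter with overlap (hypothetical sizes, not LlamaIndex's actual splitter):

```python
def chunk_text(text: str, chunk_size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into overlapping word-based chunks (toy illustration).

    Each chunk holds `chunk_size` words, and consecutive chunks share
    `overlap` words so context isn't lost at chunk boundaries.
    """
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]


# No LLM involved anywhere: the embedding model would be applied to
# each string in the returned list when building the index.
chunks = chunk_text("one two three four five six seven eight", chunk_size=4, overlap=1)
```

The point is that this stage is pure string processing plus embedding calls, so the choice of LLM is irrelevant to it.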