
Integrating Grok into an LLM Reranking System

Hi all, I'm working on the RAG workflow for my model and need some guidance. I want to integrate Grok into my LLM reranking step, but I can't get it working. Grok's documentation says it's compatible with the OpenAI SDK, so I assumed this is the correct format, but it still won't run:
ranker = LLMRerank(
    choice_batch_size=5,
    top_n=5,
    llm=OpenAI(
        model="grok-2-latest",
        api_key=XAI_API_KEY,
        api_base="https://api.x.ai/v1",
    ),
)
I'm also trying to implement this in the synthesize() step and having the same issue. Any guidance would be appreciated!
8 comments
Will test this and get back to you, thank you!
Will this work with indexing too? I want to test if the model can index my chunks better
LLMs do not index (in most cases) -- that would be the embedding model
This worked, thank you!
I'm using llm = OpenAI(model="gpt-3.5-turbo-1106", temperature=0.1) in my indexing pipeline to create the chunks. Is there a way I can replace this with a Grok model?
Is this like for a metadata extractor or something?

If something takes an llm param, chances are you can put any llm into it
Assuming it's smart enough to follow instructions
Yes it’s extracting metadata to convert the content into chunks. It’s all in a pipeline.