
Kwargs

At a glance

The community member is asking about the **kwargs of the LLM.complete method and how to pass parameters on the fly, instead of defining the LLM object with those parameters up front. Another community member explains that the kwargs are typically sent directly to the underlying LLM API, so the set of accepted options is effectively endless and changes per LLM. The original community member first tried this without success, but then made it work by passing the generation_config keyword when calling LLM.complete().

Hello everyone, is there some documentation on the **kwargs of LLM.complete? Typically, I am trying to use some parameters on the fly, within the method call itself, such as LLM.complete(prompt=prompt, temperature=1.0, output_tokens=300), instead of defining the LLM object with these parameters. Is there a way to do so? I haven't been able to figure this out despite quite a bit of time on the matter.
3 comments
The kwargs are typically sent directly into the LLM API being used (i.e. maybe the OpenAI API).

It's basically endless and changes per LLM
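To illustrate the pass-through the comment above describes, here is a hypothetical sketch; the class and client names are invented for illustration, but wrapper libraries generally follow this shape:

```python
# Hypothetical sketch of the kwargs pass-through pattern: call-time
# kwargs are forwarded verbatim to the provider's API, so the accepted
# keys depend on the backend, not on the wrapper.
class FakeProviderClient:
    def generate(self, prompt: str, **request_params):
        # A real client would put request_params in the HTTP request body.
        return f"echo({prompt!r}, params={request_params})"

class SketchLLM:
    def __init__(self):
        self._client = FakeProviderClient()

    def complete(self, prompt: str, **kwargs):
        # No validation here: kwargs go straight through to the provider.
        return self._client.generate(prompt=prompt, **kwargs)

llm = SketchLLM()
print(llm.complete("Hello", temperature=1.0, max_output_tokens=300))
```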
I have tried this without success; I'll try again with more care then, thank you
Actually, after looking closely, I've been able to make it work. For the record (and for Gemini, which I am using right now), you need to specify LLM.complete(prompt, generation_config={'temperature': 1.0, 'other argument': something}). What I was missing was the generation_config keyword. Thank you for your help.
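For reference, a minimal sketch of that working pattern, assuming the LlamaIndex Gemini integration; the import path, model name, and the max_output_tokens key are assumptions, as only the generation_config call shape is confirmed in this thread:

```python
# Minimal sketch of the generation_config pattern described above.
# Assumption: the LlamaIndex Gemini integration; your import path and
# model name may differ.
from llama_index.llms.gemini import Gemini

llm = Gemini(model="models/gemini-1.5-flash")

# Per-call sampling settings go inside generation_config rather than
# as top-level kwargs when using Gemini.
response = llm.complete(
    "Write a one-line haiku about autumn.",
    generation_config={
        "temperature": 1.0,        # applies to this call only
        "max_output_tokens": 300,  # assumed key; caps generated tokens
    },
)
print(response.text)
```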