I have a weird question. So I didn't set up my API key correctly and it defaulted to using llama-13b. I was wondering if it's possible to make it so that it always uses llama-13b (the one I downloaded)?
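(For anyone else who hits this: here's a rough sketch of the idea, assuming llama-index >= 0.10 with the llama-index-llms-llama-cpp integration installed and the downloaded llama-13b sitting on disk as a GGUF file. The path and parameter values are made up, so adjust to your setup.)

```python
# Minimal sketch: pin a local llama-13b as the global default LLM,
# assuming the llama-index-llms-llama-cpp integration and a local
# GGUF file (the path below is hypothetical).
from llama_index.core import Settings
from llama_index.llms.llama_cpp import LlamaCPP

Settings.llm = LlamaCPP(
    model_path="./models/llama-13b.Q4_K_M.gguf",  # hypothetical local path
    temperature=0.1,
    max_new_tokens=256,
    context_window=2048,
)
# Everything built after this point (indexes, query engines) uses the
# local model by default, so a missing OpenAI key no longer matters.
# (If embeddings are also hitting OpenAI, Settings.embed_model would
# need a local embedding model too.)
```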
Ahh, I see! Thanks Logan, you've saved my project. Question: if I wanted to just use the default OpenAI model (since llama is taking a while to load) but also wanted to include a custom default system prompt, what would I do?
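(Here's roughly what I was picturing, assuming the llama-index-llms-openai integration; the model name and prompt text are just placeholders.)

```python
# Minimal sketch: keep the stock OpenAI model but attach a default
# system prompt, assuming the llama-index OpenAI wrapper accepts the
# system_prompt field from the base LLM class.
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

Settings.llm = OpenAI(
    model="gpt-3.5-turbo",  # placeholder for the default model
    system_prompt="You are a concise assistant for my project docs.",  # placeholder prompt
)
```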
In addition, is there any way I can make it generate longer output than it does by default? The responses always feel short to me. Maybe I should play with the temperature?
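(One sketch of the length knob, if it helps: temperature mostly changes randomness rather than length, while max_tokens caps how long a completion can be. Assuming the llama-index OpenAI wrapper; 512 is an arbitrary example value.)

```python
# Minimal sketch: raise the completion length cap instead of the
# temperature to get longer answers (assumed llama-index OpenAI wrapper).
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

Settings.llm = OpenAI(
    model="gpt-3.5-turbo",
    temperature=0.1,   # randomness knob; raise for more varied wording
    max_tokens=512,    # length knob; raise to allow longer completions
)
```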