Hey everyone! So I've been trying out different text-analysis chats on both llama.cpp and text-generation-webui. In both cases, the 7B model gives me the correct response, in the correct context, with the correct information. Using the exact same prompt with the 13B model causes it to just echo my question back at me, or to produce a sentence like "working it on it... it'll be done soon" and then stop generating. What am I doing wrong? Why is it hallucinating so much? The bigger model should be better at understanding context, right? Any help is appreciated. (Apologies for the double post.)

Hardware: MacBook Pro, M1 Pro, 16 GB memory
3 comments
Prompts are a fickle thing. Generally, you probably need to tweak the prompt for every LLM πŸ˜…
Even for different weights of the same model? Damn πŸ˜‚
yeaaa it's pretty annoying haha
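For anyone hitting the same symptom: different fine-tunes often expect different prompt templates, and feeding a 13B chat model the raw prompt that worked on a 7B instruct model can produce exactly this kind of echo or filler output. Below is a minimal sketch using llama-cpp-python, assuming locally downloaded quantized models; the file paths are hypothetical, and the Alpaca/Vicuna template strings are common examples, not taken from this thread.

```python
from llama_cpp import Llama

QUESTION = "Summarize the key points of the attached text."

# Common instruction/chat templates; using the wrong one for a given
# fine-tune often yields echoes or filler instead of a real answer.
TEMPLATES = {
    # Alpaca-style template, used by many instruct fine-tunes
    "alpaca": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{q}\n\n### Response:\n"
    ),
    # Vicuna-style template, used by many chat fine-tunes
    "vicuna": "USER: {q}\nASSISTANT:",
}

def ask(model_path: str, template: str, question: str) -> str:
    """Load a model and run one templated completion."""
    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    out = llm(template.format(q=QUESTION), max_tokens=256, temperature=0.7)
    return out["choices"][0]["text"].strip()

# Hypothetical local model files; substitute your own quantized GGUFs.
print(ask("models/7b-instruct-q4.gguf", TEMPLATES["alpaca"], QUESTION))
print(ask("models/13b-chat-q4.gguf", TEMPLATES["vicuna"], QUESTION))
```

The expected template varies per fine-tune, so checking the model card for the prompt format the 13B model was trained on is usually the quickest fix.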