The JSON is a big object, and in the explanation the Ollama model doesn't read it correctly and gives wrong values. Meta-Llama-8B does it perfectly. Could it be the quantization?
The Ollama model is quantized to 4-bit. I don't think it's possible to run the original Meta-Llama 3 model in Ollama 😦
Could also be different sampling params (temperature, etc.)
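To rule out sampling differences, you can pin the sampling parameters in an Ollama Modelfile; a temperature of 0 makes the output close to deterministic, so any remaining errors point at the quantization instead. This is a minimal sketch, assuming a locally available `llama3:8b` tag (the exact tag name may differ on your install):

```
# Modelfile — hypothetical example for deterministic JSON extraction
FROM llama3:8b
PARAMETER temperature 0
PARAMETER top_p 1
```

Then build and run it with `ollama create llama3-det -f Modelfile` and `ollama run llama3-det`. Ollama also publishes higher-precision tags for some models (e.g. fp16 variants), which can be worth trying if the 4-bit quant is the culprit; check `ollama pull` against the model's tag list on the Ollama library page.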