Has anyone experienced local Llama.cpp models omitting words?
Adrian
2 years ago
Has anyone experienced local Llama.cpp models omitting words? The models run great when I serve them with llama.cpp's ./server, but when I use them through LlamaIndex I get responses with omissions. I'm using Llama-2-13B-chat.
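For context, the setup being described usually looks something like the following in LlamaIndex. This is only a rough sketch, assuming the llama_index LlamaCPP wrapper available around that time (~0.8.x) with llama-cpp-python installed, and a hypothetical local model path; as the thread below shows, the omissions turned out to be a llama-cpp-python issue rather than a problem with this wiring.

```python
# Rough sketch, not the exact setup from the thread: assumes llama_index ~0.8.x
# with llama-cpp-python installed, and a hypothetical local model path.
from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # hypothetical path
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
    model_kwargs={"n_gpu_layers": 1},  # offload some layers to GPU if available
    verbose=True,
)

print(llm.complete("Say hello in one sentence.").text)
```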
Logan M
2 years ago
I've seen a few issues about this on the llama-cpp-python GitHub, but haven't read much more into it. Might be good to search there.
Adrian
2 years ago
Nice, thank you @Logan M, I'll check out the repo.
Adrian
2 years ago
@Logan M Genius.
https://github.com/abetlen/llama-cpp-python/pull/644
Adrian
2 years ago
Thank you, dear friend.
Logan M
2 years ago
Nice!
Logan M
2 years ago
Heads up though if you update your llama-cpp version
Logan M
2 years ago
They stopped supporting GGML files, only GGUF from now on
Logan M
2 years ago
Need to update the default file that's downloaded in llama-index, but GGUF files aren't too common yet.
Adrian
2 years ago
Yeah, I've been converting all my GGML files to GGUF. Good looking out.
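For anyone needing to do the same bulk conversion, llama.cpp ships a Python converter for old GGML files. The sketch below is only illustrative: the script's name and flags have varied across llama.cpp versions (it has appeared as convert-llama-ggmlv3-to-gguf.py and later convert-llama-ggml-to-gguf.py), and the paths here are hypothetical, so check the script's --help in your own checkout first.

```python
# Illustrative only: batch-convert local GGML files to GGUF with llama.cpp's
# converter script. The script name and flags below are assumptions and have
# varied between llama.cpp versions; verify with `python <script> --help`.
import pathlib
import subprocess

LLAMA_CPP_DIR = pathlib.Path("~/llama.cpp").expanduser()  # hypothetical checkout
MODELS_DIR = pathlib.Path("./models")                     # hypothetical model dir

for ggml_file in MODELS_DIR.glob("*.ggmlv3.*.bin"):
    gguf_file = ggml_file.with_suffix(".gguf")
    subprocess.run(
        [
            "python",
            str(LLAMA_CPP_DIR / "convert-llama-ggmlv3-to-gguf.py"),
            "--input", str(ggml_file),
            "--output", str(gguf_file),
        ],
        check=True,
    )
```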
DangFutures
2 years ago
Do you know if GGUF runs on Windows?
Logan M
2 years ago
It should! I personally haven't tried though
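Since the model loading is done by llama.cpp itself, GGUF should behave the same on Windows as on other platforms. A minimal smoke test with llama-cpp-python, assuming a hypothetical local GGUF path, would be something like:

```python
# Minimal smoke test with llama-cpp-python; the model path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is 2 + 2? A:", max_tokens=16, stop=["\n"])
print(out["choices"][0]["text"])
```

If that prints an answer, the GGUF file loads fine; the same call should work identically on a Windows build of llama-cpp-python.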