LlamaCPP
M00nshine · last year
How can I connect my llama_index to my LAN-hosted llama.cpp server API?
Logan M · last year
I have no idea what the interface is, but if it's similar to OpenAI API requests, you can use this (I guess there's no example yet, rip):
https://github.com/run-llama/llama_index/blob/95e107423664812eeece1af0f162c9dcd4bfe670/llama_index/llms/openai_like.py#L9
Or you'll have to implement a custom LLM and manually send the requests:
https://gpt-index.readthedocs.io/en/stable/module_guides/models/llms/usage_custom.html#example-using-a-custom-llm-model-advanced
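For reference, a minimal sketch of the first route, assuming the llama.cpp server exposes an OpenAI-compatible /v1 endpoint; the LAN address, port, and model name are placeholders, and the exact OpenAILike fields should be checked against the linked source file:

```python
# Hypothetical setup: point OpenAILike at a llama.cpp server on the LAN.
# The URL, key, and model name are placeholders, not values from this thread.
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    api_base="http://192.168.1.42:8080/v1",  # placeholder LAN address of the server
    api_key="unused",                        # llama.cpp does not check the key
    model="llama-2",                         # whatever model name the server reports
)

print(llm.complete("Hello over the LAN!"))
```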
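And a sketch of the second route, following the custom-LLM pattern from the linked docs page but forwarding prompts to the llama.cpp server's native /completion endpoint; the class name, LAN address, and response shape are assumptions to verify against your build:

```python
# Sketch of a custom LLM that forwards prompts to a remote llama.cpp server.
# Assumes the server's native POST /completion endpoint returning {"content": ...}.
from typing import Any

import requests

from llama_index.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.llms.base import llm_completion_callback


class LlamaCppServerLLM(CustomLLM):  # hypothetical name
    api_url: str = "http://192.168.1.42:8080"  # placeholder LAN address
    context_window: int = 3900
    num_output: int = 256

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name="llama-2-remote",
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Forward the prompt to the server and return the generated text.
        resp = requests.post(
            f"{self.api_url}/completion",
            json={"prompt": prompt, "n_predict": self.num_output},
        )
        resp.raise_for_status()
        return CompletionResponse(text=resp.json()["content"])

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Minimal non-streaming fallback: yield the full completion once.
        yield self.complete(prompt, **kwargs)
```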
M00nshine · last year
I'm trying to connect to the llama.cpp API I have running on a different machine on my LAN. I compiled the llama.cpp source with CLBlast, running 6 GPUs, and my llama.cpp instance is serving a Llama 2-based model.
M00nshine · last year
Do I just replace the "llm = LlamaCPP" line with the openai_like code?
Logan M · last year
I thiiiink so?
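If it helps, here is what that swap might look like end to end; a sketch under the same assumptions as above (placeholder URL and model name), with a local embedding model since the remote llama.cpp server only handles completions:

```python
# Sketch of the swap: the remote OpenAILike llm takes the place of LlamaCPP
# in the service context; everything downstream stays the same.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai_like import OpenAILike

# Before: llm = LlamaCPP(model_path="...")  # model loaded in-process
llm = OpenAILike(
    api_base="http://192.168.1.42:8080/v1",  # placeholder llama.cpp server address
    api_key="unused",
    model="llama-2",
)

service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model="local",  # embeddings still run here, not on the llama.cpp server
)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
print(index.as_query_engine().query("Smoke test?"))
```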