Llamacpp

At a glance

The community member is looking for an example of using the llama_index library to call a locally hosted llama.cpp server API. The replies explain that, because the LLM is self-hosted, they need to use llama_index's custom LLM abstraction and define the interaction with the server themselves; one reply links a Discord thread with more detail.

I'm looking for an example of llama_index calling a locally hosted llama.cpp API. Does anyone have an example?
4 comments
You need to use the custom LLM class to interact with a hosted LLM server.


Refer to this thread:
https://discord.com/channels/1059199217496772688/1163395083475898410/1163727902169366559
That is for connecting to a running llama.cpp server API?
Since this is your own hosted LLM, you'll need to use the custom LLM abstraction from llama_index and define the interaction yourself.
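
A minimal sketch of what that could look like, assuming llama-index 0.10+ import paths (older versions exposed `CustomLLM` under `llama_index.llms`) and the default llama.cpp server completion endpoint at `http://localhost:8080/completion`. The class name `LlamaCppServerLLM` and the field defaults are illustrative, not part of either library:

```python
# Sketch: subclass llama_index's CustomLLM to forward prompts to a
# locally running llama.cpp server over HTTP. Endpoint URL, context
# window, and response shape are assumptions based on llama.cpp's
# default server; adjust to your setup.
import requests

from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class LlamaCppServerLLM(CustomLLM):
    """Sends completion requests to a llama.cpp server's /completion endpoint."""

    base_url: str = "http://localhost:8080"  # assumed default server address
    context_window: int = 4096               # set to your model's context size
    num_output: int = 256

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name="llama.cpp-server",
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs) -> CompletionResponse:
        # llama.cpp's server returns JSON with a "content" field
        resp = requests.post(
            f"{self.base_url}/completion",
            json={"prompt": prompt, "n_predict": self.num_output},
            timeout=120,
        )
        resp.raise_for_status()
        return CompletionResponse(text=resp.json()["content"])

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs) -> CompletionResponseGen:
        # Non-streaming fallback: yield the full completion as one chunk.
        yield self.complete(prompt, **kwargs)
```

An instance can then be used like any other llama_index LLM, for example via `Settings.llm = LlamaCppServerLLM()` before building an index or query engine.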