Hello. Apologies if this is a trivial question, but I'm having trouble with it: is it possible to create a ServiceContext that can access a remote LLM? (I'm using LlamaCPP via the built-in CustomLLM implementation.)
Right now I'm working in a standalone setup where the index and the model live in the same process, but I'd like to move the LLM server to a separate PC. Is there an existing adapter for this, or do I need to write one myself? A rough sketch of what I have in mind is below.
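
For concreteness, this is roughly the adapter I'm imagining: a `CustomLLM` subclass that forwards each prompt over HTTP to a llama.cpp server on the other machine. The server address, the `/completion` endpoint, and the JSON field names (`prompt`, `n_predict`, `content`) are assumptions based on llama.cpp's built-in HTTP server and may not match every version, so treat this as a sketch rather than working code:

```python
import requests

from llama_index import ServiceContext
from llama_index.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.llms.base import llm_completion_callback


class RemoteLlamaCppLLM(CustomLLM):
    # Hypothetical address of the remote llama.cpp server.
    base_url: str = "http://192.168.1.50:8080"
    context_window: int = 4096
    num_output: int = 256

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name="remote-llama-cpp",
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs) -> CompletionResponse:
        # POST the prompt to the remote server and return its completion.
        # The payload/response shapes are assumptions about llama.cpp's
        # /completion endpoint; adjust them to the actual server API.
        resp = requests.post(
            f"{self.base_url}/completion",
            json={"prompt": prompt, "n_predict": self.num_output},
            timeout=120,
        )
        resp.raise_for_status()
        return CompletionResponse(text=resp.json()["content"])

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs) -> CompletionResponseGen:
        # Non-streaming fallback: yield the full completion as one chunk.
        yield self.complete(prompt, **kwargs)


# Plug the remote adapter into a ServiceContext as usual.
service_context = ServiceContext.from_defaults(llm=RemoteLlamaCppLLM())
```

If something like this is already the intended pattern (or there's a ready-made remote/HTTP LLM class I've missed), a pointer would be much appreciated.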