
Updated 4 months ago

Hi

I'm struggling to connect LlamaIndex to a private LLM that I can reach via HTTPS. Any advice on how to do that?
22 comments
How have you set up your LLM? Can you share the code if possible?
It's an external Llama instance. I have an HTTPS URL and a token for it. What's more, the endpoint doesn't have a valid SSL certificate.
You can refer to custom LLM setup guide here: https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom/#example-using-a-custom-llm-model-advanced

This can help you connect your custom LLM to LlamaIndex.
Thanks! I will check it
Do you have an example of communicating with a model over the internet for such a case? It's missing from that guide.
You can set up the request for such cases:
Plain Text
from typing import Any

import requests
from llama_index.core import Settings
from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class OurLLM(CustomLLM):
    context_window: int = 3900
    num_output: int = 256
    model_name: str = "custom"
    dummy_response: str = "My response"

    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Just add your URL, payload, auth headers, and other needed stuff here
        response = requests.post(url, payload)
        return CompletionResponse(text=response.text)

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        response = ""
        for token in self.dummy_response:
            response += token
            yield CompletionResponse(text=response, delta=token)


# define our LLM globally
Settings.llm = OurLLM()
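The `complete` method above still needs a concrete request. A minimal sketch of what that HTTP call might look like for an endpoint reached over HTTPS with a bearer token and no valid certificate; the payload keys (`"prompt"`, `"max_tokens"`) are assumptions you'd adapt to your server's actual API:

```python
import requests


def build_request(url: str, token: str, prompt: str) -> dict:
    """Assemble the keyword arguments for requests.post().
    The payload shape here is an assumption -- match your server's schema."""
    return {
        "url": url,
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"prompt": prompt, "max_tokens": 256},
    }


def call_private_llm(url: str, token: str, prompt: str) -> str:
    # verify=False skips certificate validation, matching the "no SSL" setup
    # described above; only do this for endpoints you trust.
    resp = requests.post(**build_request(url, token, prompt), verify=False, timeout=60)
    resp.raise_for_status()
    return resp.text
```

You could call `call_private_llm(...)` inside `complete()` and wrap the returned text in a `CompletionResponse`.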
In the meantime I handled it somehow! I will share it later.
def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
    llm = OpenAI(
        base_url="https://llm.com",
        api_key="xxxxxxxxxxxxxxxxxxxxxxxxxx",
        http_client=httpx.Client(verify=False),
    )
    response = llm.chat.completions.create(
        model="",
        messages=[{"role": "user", "content": prompt}],
    )
    self.dummy_response = response.choices[0].message.content
    return CompletionResponse(text=self.dummy_response)
I solved it in a more or less similar way. Thanks for the help!
If the LLM supports the OpenAI request schema, then you don't need a custom LLM.
Simply define your LLM and use it.

Plain Text
from llama_index.core import Settings

Settings.llm = OpenAI(your details)  # This will define your LLM globally!

# Everywhere an LLM is needed, this one will be used.
response = index.as_query_engine().query("your_query")
I will check it!
It wants metadata in this way and I don't know how to add it here πŸ™‚
Can you elaborate more on this?
Code:

from llama_index.core import Settings

Settings.llm = OpenAI(
    base_url="https://llm.com",
    api_key="XXXXXXX",
    http_client=httpx.Client(verify=False),
)
Error:

File ~/.config/jupyterlab-desktop/jlab_server/lib/python3.12/site-packages/llama_index/core/response_synthesizers/factory.py:74, in get_response_synthesizer(llm, prompt_helper, service_context, text_qa_template, refine_template, summary_template, simple_template, response_mode, callback_manager, use_async, streaming, structured_answer_filtering, output_cls, program_factory, verbose)
     68     prompt_helper = service_context.prompt_helper
     69 else:
     70     prompt_helper = (
     71         prompt_helper
     72         or Settings._prompt_helper
     73         or PromptHelper.from_llm_metadata(
---> 74             llm.metadata,
     75         )
     76     )
     78 if response_mode == ResponseMode.REFINE:
     79     return Refine(
     80         llm=llm,
     81         callback_manager=callback_manager,
    (...)
     91         service_context=service_context,
     92     )

AttributeError: 'OpenAI' object has no attribute 'metadata'
I believe that something like the below is missing, but I don't know how to "attach" it to the code in this version.

@property
def metadata(self) -> LLMMetadata:
    """Get LLM metadata."""
    return LLMMetadata(
        context_window=self.context_window,
        num_output=self.num_output,
        model_name=self.model_name,
    )
How have you imported OpenAI?
from openai import OpenAI
You gotta do it via llama-index

Install it first:
pip install llama-index-llms-openai

then import it like
from llama_index.llms.openai import OpenAI
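Putting the thread together, a sketch of the import-corrected setup. Note two assumptions to verify against your installed version: in llama-index's OpenAI wrapper the base-URL parameter is `api_base` (not `base_url`), and whether `http_client` is accepted may vary by version:

```python
import httpx
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI  # pip install llama-index-llms-openai

Settings.llm = OpenAI(
    model="gpt-3.5-turbo",             # placeholder; use your model's name
    api_base="https://llm.com",        # your private endpoint
    api_key="XXXXXXX",
    http_client=httpx.Client(verify=False),  # only because the endpoint lacks a valid cert
)
```

This wrapper already implements the `metadata` property, which is why the `AttributeError` raised with the raw `openai` client goes away.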
Thanks. I will try it.