Hey, can we pass headers while initializing the embedding model for the service context?
Background: we want to integrate Portkey.
We can do so with from langchain.chat_models import ChatOpenAI, e.g.:
Plain Text
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor

self.llm = ChatOpenAI(
    model=self.model_name,
    temperature=self.temperature,
    max_tokens=self.max_tokens,
    frequency_penalty=self.frequency_penalty,
    top_p=self.top_p,
    headers={
        # <some_header>
    },
)

# LLM Predictor
self.llm_predictor = LLMPredictor(llm=self.llm)

This works. So how can we pass the headers when using from llama_index.embeddings.openai import OpenAIEmbedding?
cc: @Logan M @jerryjliu0 @ravitheja
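For reference, this is roughly the API the question is after. A minimal sketch, assuming a later llama_index release where OpenAIEmbedding accepts a default_headers argument (it did not at the time of this thread); the header name and value are Portkey-style placeholders, not confirmed by the thread:
Plain Text
from llama_index.embeddings.openai import OpenAIEmbedding

# Assumption: a later llama_index release where OpenAIEmbedding forwards
# default_headers to the underlying OpenAI client.
embed_model = OpenAIEmbedding(
    model="text-embedding-ada-002",
    default_headers={
        "x-portkey-api-key": "<PORTKEY_API_KEY>",  # placeholder header
    },
)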
Sorry guys, it seems Portkey doesn't support the embeddings endpoint.
Sorry for the trouble.
Hey Siddhant - Portkey does support the embeddings endpoint.

Curious what's a reliable way to pass headers though
Now I am confused. 🤔
They said: "Portkey support said they are opening a PR to get it fixed. Right now embeddings don't allow headers."
LlamaIndex does not allow headers in its OpenAIEmbedding class. Spoke to @ravitheja today, and I'll raise a PR on LlamaIndex.
Meanwhile, there's a simple solution if you want.
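The simple solution itself isn't spelled out in the thread. One plausible sketch, assuming langchain's OpenAIEmbeddings declares a headers field (version-dependent; verify it is actually forwarded on requests) and wrapping it with llama_index's LangchainEmbedding adapter:
Plain Text
from langchain.embeddings import OpenAIEmbeddings
from llama_index.embeddings import LangchainEmbedding

# Assumption: langchain's OpenAIEmbeddings accepts a headers field and
# forwards it on embedding requests (check your langchain version).
lc_embeddings = OpenAIEmbeddings(
    headers={
        "x-portkey-api-key": "<PORTKEY_API_KEY>",  # placeholder Portkey header
    },
)

# LangchainEmbedding adapts any langchain embedding model for llama_index,
# so it can be passed as embed_model to the service context.
embed_model = LangchainEmbedding(lc_embeddings)

The adapter route sidesteps the missing headers support in OpenAIEmbedding by reusing the same langchain plumbing that already works for ChatOpenAI above.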
@jerryjliu0 - can you check this out again? Would be awesome if this gets merged, since embeddings could then support the extra parameters.