Hey, can we pass headers while initializing the embedding model for the service context?
Background: we want to integrate Portkey.
We can already do so with `from langchain.chat_models import ChatOpenAI`, e.g.:
self.llm = ChatOpenAI(
    model=self.model_name,
    temperature=self.temperature,
    max_tokens=self.max_tokens,
    frequency_penalty=self.frequency_penalty,
    top_p=self.top_p,
    headers={
        <some_header>
    },
)
# LLM Predictor
self.llm_predictor = LLMPredictor(llm=self.llm)

This works.
So how can we pass the headers when using `from llama_index.embeddings.openai import OpenAIEmbedding`?
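For context, here is a sketch of what I'd expect to work, assuming a recent llama_index version where `OpenAIEmbedding` exposes a `default_headers` argument that is forwarded to the underlying OpenAI client (the parameter name and the Portkey header keys below are assumptions, not verified against every release):

from llama_index.embeddings.openai import OpenAIEmbedding

# Hypothetical Portkey headers; substitute real values from your Portkey setup.
portkey_headers = {
    "x-portkey-api-key": "<PORTKEY_API_KEY>",
    "x-portkey-provider": "openai",
}

embed_model = OpenAIEmbedding(
    model="text-embedding-ada-002",
    default_headers=portkey_headers,  # assumption: supported in newer versions
)

If `default_headers` isn't available in the installed version, is there another supported way to inject per-request headers into the embedding calls?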
cc: @Logan M @jerryjliu0 @ravitheja