ChatGPT key

I keep getting a "missing API key" error :(
Are you using a custom LLM?

There are two models used: the LLM (i.e. ChatGPT) and the embedding model (text-embedding-ada-002 by default)
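
(For context, a minimal sketch of wiring both models into a ServiceContext — this assumes an older llama_index API where LLMPredictor and ServiceContext.from_defaults take these arguments; exact names and import paths may differ by version:)

from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext

# The LLM used for response synthesis (hypothetical model choice)
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo"))

# embed_model is left at its default here, i.e. OpenAI's text-embedding-ada-002
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)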
I have a separate embedding API I'd like to hit
Hey, I ran into this, I think.
I had to do this: openai.api_key = os.getenv('OPENAI_API_KEY')
when using a custom LLM.
I only saw it when I used the custom LLM with an OpenAI embedding, though.
If it's all OpenAI, then I did not have to do that.
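
(A minimal sketch of that workaround, assuming the key is already exported as OPENAI_API_KEY in the environment:)

import os
import openai

# Hand the key to the openai client explicitly; with a custom LLM plus OpenAI
# embeddings, the embedding calls can otherwise fail with "missing api key".
openai.api_key = os.getenv("OPENAI_API_KEY")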
Do you mean you have your own embeddings on a server you want to ping? Or do you have local embeddings from Hugging Face?
I might be going off the rails here lol, but if you have your own hosted embeddings, you can use the LangChain SelfHostedEmbeddings class (or any embeddings class from LangChain) and then wrap that with the LangChainEmbedding class from llama_index:

https://python.langchain.com/en/latest/reference/modules/embeddings.html#langchain.embeddings.SelfHostedEmbeddings

https://github.com/jerryjliu/llama_index/blob/main/gpt_index/embeddings/langchain.py#L11

embed_model = LangChainEmbedding(SelfHostedEmbeddings(...))

service_context = ServiceContext.from_defaults(embed_model=embed_model)
I have my own embeddings on a server.
Sorry, stepped away.
Trying now with SelfHostedEmbeddings!
Seems like SelfHostedEmbeddings is meant for running the model on your own compute.
I'm testing with a custom API that generates embeddings,
but I might be able to modify the class.
Ah shoot! Yeah, looking through all the classes in the docs there, extending the base class might be the best option 🤔 or something similar, anyway.
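
(Rough sketch of that "extend the base class" idea: subclass LangChain's Embeddings base class so it calls your own embeddings API, then wrap it exactly as in the earlier snippet. The endpoint URL and response shape below are hypothetical, and the LangChainEmbedding import path/casing may vary by llama_index version:)

from typing import List

import requests
from langchain.embeddings.base import Embeddings

class RemoteAPIEmbeddings(Embeddings):
    # Calls a custom embeddings endpoint; assumes it accepts {"texts": [...]} and
    # returns {"embeddings": [[...], ...]} -- adjust to your API's actual contract.
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        resp = requests.post(self.endpoint, json={"texts": texts})
        resp.raise_for_status()
        return resp.json()["embeddings"]

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]

# Then, as in the snippet above:
# embed_model = LangChainEmbedding(RemoteAPIEmbeddings("https://your-host/embed"))
# service_context = ServiceContext.from_defaults(embed_model=embed_model)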