Yogesh Kulkarni
Joined September 25, 2024
But I guess the main reason is that, although I am using an LLM from HuggingFaceHub via ServiceContext, it is still trying to call OpenAI:

File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\llama_index\embeddings\openai.py", line 150, in get_embeddings
    data = openai.Embedding.create(input=list_of_text, model=engine, **kwargs).data
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\openai\api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\openai\api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
File "C:\Users\yoges\anaconda3\envs\langchain\Lib\site-packages\openai\api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
Now the error is being raised from langchain land, but the suggested version does not solve it.
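The traceback points at the embedding step, not the LLM: ServiceContext.from_defaults() leaves embed_model at its OpenAI default unless it is overridden, so indexing and querying still go through openai.Embedding.create even when the LLM itself comes from HuggingFaceHub. A minimal sketch of the override, assuming the pre-0.10 llama_index API that this traceback's paths suggest (import paths, and the repo/model ids shown, are illustrative and vary by version):

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import HuggingFaceHub
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import LangchainEmbedding

# LLM from HuggingFaceHub, as in the question (repo_id is a placeholder).
llm = HuggingFaceHub(repo_id="google/flan-t5-base")

# Wrap a local HuggingFace embedding model so no OpenAI call is made
# for embeddings (model name is an example choice).
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
)

service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

# The context must also be passed when the index is built; otherwise the
# index falls back to the default (OpenAI) embedding and the
# RateLimitError above reappears.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```

Note also that despite its name, openai.error.RateLimitError with "exceeded your current quota" means the OpenAI key has no remaining quota, so any code path that still reaches OpenAI will fail regardless of retry settings.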
8 comments