
Hi team, we are migrating to the latest version of llama-index, i.e. 0.8.36.
We have the following code to generate the embeddings, but it seems like it doesn't work anymore with the latest version:

CODE:
Plain Text
embed_model = self.service_context.embed_model
for node in keyword_nodes:
    embed_model.queue_text_for_embedding(
        node.node.node_id,
        node.node.get_text(),
    )
_, text_embeddings = embed_model.get_queued_text_embeddings()
for idx in range(len(keyword_nodes)):
    keyword_nodes[idx].node.embedding = text_embeddings[idx]

Error: AttributeError: 'OpenAIEmbedding' object has no attribute 'queue_text_for_embedding'

Any help here?
Use text_embeddings = embed_model.get_text_embedding_batch(texts, show_progress=False)

or if you need async, text_embeddings = await embed_model.aget_text_embedding_batch(texts, show_progress=False)
We removed the stateful queue
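Putting that together, a minimal sketch of the migration, assuming keyword_nodes is the same list of node-with-score objects as in the snippet above:

Plain Text
embed_model = self.service_context.embed_model

# Collect the node texts, then embed them in a single batch call.
texts = [node.node.get_text() for node in keyword_nodes]
text_embeddings = embed_model.get_text_embedding_batch(texts, show_progress=False)

# Assign each embedding back to its node.
for node, embedding in zip(keyword_nodes, text_embeddings):
    node.node.embedding = embedding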
Okay, got it! Thanks 🙏
@Logan M It resulted in an error:

Plain Text
embed_model = self.service_context.embed_model
nodes_text = []
for node in keyword_nodes:
    nodes_text.append(node.node.get_text())

text_embeddings = embed_model.aget_text_embedding_batch(
    nodes_text, show_progress=False
)
for idx, node in enumerate(keyword_nodes):
    keyword_nodes[idx].node.embedding = text_embeddings[idx]

TypeError: 'coroutine' object is not subscriptable
you need to await
if you want async
I would only use async if you are already running inside an async function
otherwise use the sync version
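A quick sketch of the two call styles, assuming embed_model and texts from the snippet above; asyncio.run is only needed when starting from synchronous code:

Plain Text
import asyncio

# Sync version: call directly from ordinary (non-async) code.
text_embeddings = embed_model.get_text_embedding_batch(texts, show_progress=False)

# Async version: must be awaited inside an async function.
async def embed_texts(texts):
    return await embed_model.aget_text_embedding_batch(texts, show_progress=False)

text_embeddings = asyncio.run(embed_texts(texts))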
I tried the above using async and await. It responded with:

text_embeddings = await embed_model.aget_text_embedding_batch(
    nodes_text, show_progress=False
)

error_data = resp["error"]
KeyError: 'error'

File "/home/jerry/miniconda3/envs/dobby/lib/python3.10/site-packages/openai/api_requestor.py", line 405, in handle_error_response
    raise error.APIError(
openai.error.APIError: Invalid response object from API: '{"status":"failure","message":"Portkey Error: API Key Not Found. Error Code:02"}' (HTTP response code was 401)
Let me try with the sync version
FYI, the above works with the older version and doesn't result in the key error for Portkey.
Seems like an issue with portkey, not sure what changed there
I didn't know portkey supported embeddings tbh lol
No problem, will try the sync version for now, since the migration is important for us. Thanks though
The sync version throws the same Portkey API key error
How are you setting up portkey?
The key is stored as an env variable; apart from that,
we set the Portkey configs in headers for the OpenAI models, imported via from llama_index.embeddings.openai import OpenAIEmbedding,

and set openai.api_base to the Portkey proxy URL
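For reference, a sketch of that setup in the pre-migration style; the proxy URL and header name below are placeholders, not the exact values used in the thread:

Plain Text
import os
import openai
from llama_index.embeddings.openai import OpenAIEmbedding

# Route OpenAI requests through the Portkey proxy (URL is a placeholder).
openai.api_base = "https://api.portkey.ai/v1/proxy"

# Old style: Portkey config passed via a headers argument.
embed_model = OpenAIEmbedding(
    headers={"x-portkey-api-key": os.environ["PORTKEY_API_KEY"]},
)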
FYI, did a package update (pip install -U portkey-ai) but it didn't work
So the key is only in an env variable, and you only modify the api_base? Can you set the key directly in the headers as well?
Or pass the key as a kwarg directly into the embeddings

OpenAIEmbedding(.., api_key=os.environ[..])
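For example, a minimal sketch; the env var name here is a placeholder:

Plain Text
import os
from llama_index.embeddings.openai import OpenAIEmbedding

# Pass the key explicitly instead of relying on the env/global lookup.
embed_model = OpenAIEmbedding(api_key=os.environ["OPENAI_API_KEY"])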
Sure, giving it a try
Thanks, but it didn't work. I guess they have introduced a new concept of virtual keys; that must be messing it up. I need to connect with them to resolve it.

Not sure about the above though; the API key is something different, it should always be required
Yes they have switched the keys concept as well
Hey Logan, just sharing an update: it started working. There was a misconfiguration with the latest OpenAIEmbedding module from llama-index, which now accepts a parameter called additional_kwargs instead of headers, due to which the Portkey headers were not being passed in the request.

Thanks for your help πŸ™‚
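For anyone hitting the same thing, a sketch of the working configuration as described above, again with placeholder header names and proxy URL:

Plain Text
import os
import openai
from llama_index.embeddings.openai import OpenAIEmbedding

openai.api_base = "https://api.portkey.ai/v1/proxy"  # placeholder proxy URL

# In 0.8.36 the Portkey headers go through additional_kwargs, which
# OpenAIEmbedding forwards to the underlying OpenAI request.
embed_model = OpenAIEmbedding(
    api_key=os.environ["OPENAI_API_KEY"],
    additional_kwargs={
        "headers": {"x-portkey-api-key": os.environ["PORTKEY_API_KEY"]},
    },
)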