
Optimum

@Logan M I'm trying to use the Optimum ONNX embedding for bge, as shown in the documentation examples, on my MacBook with an M1 Pro. I get this error when I test the model with get_text_embedding: InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids. I have no clue how to debug or fix this; I tried looking it up online and understood little to nothing.
Classic huggingface. Either they updated the library, or I just got lucky with the model I picked for the notebook.

It's raising an error because token_type_ids is being passed in, but it's unused

Probably need to remove it from the tokenizer output here

https://github.com/run-llama/llama_index/blob/92f82f83f5dc4ea9f236eff066e53df264a8c1f1/llama_index/embeddings/huggingface_optimum.py#L133
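The fix described above, as a hedged sketch: drop any tokenizer outputs that the ONNX session doesn't declare as inputs, so a stray token_type_ids key never reaches session.run(). The function name filter_onnx_inputs is illustrative, not part of llama_index.

```python
# Minimal sketch of the suggested fix, assuming the tokenizer returns a dict
# of arrays and the ONNX session exposes its expected input names.
def filter_onnx_inputs(encodings: dict, model_input_names) -> dict:
    """Keep only the keys the ONNX model actually declares as inputs."""
    allowed = set(model_input_names)
    return {name: value for name, value in encodings.items() if name in allowed}


# With a real onnxruntime session it would plug in roughly like this:
#   encodings = dict(tokenizer(texts, return_tensors="np", padding=True))
#   inputs = filter_onnx_inputs(encodings, [i.name for i in session.get_inputs()])
#   outputs = session.run(None, inputs)
```

This way the same embedding code works whether or not a given model (like bge) consumes token_type_ids.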
Ah, that makes sense lol
On an unrelated note @Logan M would I be able to use OpenRouter to access Mistral 8x7B via the OpenAI Like LLM?
Maybe!

That reminds me, I need to finish the PR, the guy stopped replying lol

https://github.com/run-llama/llama_index/pull/9464
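For reference, the approach the question describes would look roughly like this. This is a hedged sketch, not a confirmed recipe: it assumes OpenRouter's OpenAI-compatible endpoint at https://openrouter.ai/api/v1, llama_index's OpenAILike wrapper, and "mistralai/mixtral-8x7b-instruct" as OpenRouter's id for Mistral 8x7B; all three are assumptions worth double-checking.

```python
# Illustrative config for pointing an OpenAI-compatible client at OpenRouter.
# Endpoint URL and model id are assumptions, verify against OpenRouter's docs.
openrouter_config = {
    "api_base": "https://openrouter.ai/api/v1",
    "model": "mistralai/mixtral-8x7b-instruct",
    "is_chat_model": True,
}

# With llama_index installed, this would plug in roughly as:
#   from llama_index.llms import OpenAILike
#   llm = OpenAILike(api_key="<OPENROUTER_API_KEY>", **openrouter_config)
#   response = llm.complete("Hello")
```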
Looking forward to the finished PR!
Lmk if you think I can do anything to help
Merged open router!
Thank you so much!