A community member is trying to use Optimum ONNX Embedding for bge on their Macbook with an M1 Pro, but is encountering an error related to the token_type_ids input. Another community member suggests that the error is likely caused by token_type_ids being passed in despite being unused by the exported model, and recommends removing it from the tokenizer output. The discussion also includes an unrelated question about using OpenRouter to access Mistral 8x7B via the OpenAI-like LLM interface, and a mention of a pending pull request that needs to be finished.
@Logan M Trying to use Optimum ONNX Embedding for bge as shown in the documentation examples on my Macbook with M1 Pro. I get this error when I try to test the model with get_text_embedding: InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name: token_type_ids. No clue how to debug or fix this; I tried looking it up online but understood little to nothing about it.
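The workaround suggested in the summary, dropping token_type_ids from the tokenizer output before feeding the ONNX session, can be sketched as follows. This is a minimal illustration of the idea only; the function name `filter_onnx_inputs` and the example dictionaries are hypothetical, and the actual LlamaIndex OptimumEmbedding internals may differ.

```python
# Sketch of the suggested fix: only pass the inputs the exported ONNX
# model actually declares. BGE models exported without token_type_ids
# raise "Invalid Feed Input Name: token_type_ids" if it is fed anyway.

def filter_onnx_inputs(tokenizer_output, session_input_names):
    """Drop tokenizer outputs the ONNX session does not accept.

    tokenizer_output: dict of input name -> tensor/array from the tokenizer.
    session_input_names: set of input names declared by the ONNX model
    (in onnxruntime these come from session.get_inputs()).
    """
    return {
        name: value
        for name, value in tokenizer_output.items()
        if name in session_input_names
    }


# Hypothetical example: the tokenizer emits token_type_ids, but the
# exported model only declares input_ids and attention_mask.
feed = filter_onnx_inputs(
    {
        "input_ids": [[101, 2023]],
        "attention_mask": [[1, 1]],
        "token_type_ids": [[0, 0]],
    },
    {"input_ids", "attention_mask"},
)
```

With a real session, the same filtering would use the declared input names (`{i.name for i in session.get_inputs()}`) so only valid feeds reach `session.run`.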