Hi - having some trouble with local embeddings:
  • Following the basic getting started tutorial
  • Installed LlamaCPP via the tutorial - and it works
  • Installed SentenceTransformers and verified that works
  • Got the basic prompt completion working in the tutorial
When I try to load any kind of local embedding (HuggingFace, ONNX, LangChain), I get the following error (in thread):
e.g. embed_model = OptimumEmbedding(folder_name="./bge_onnx")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\yarha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\llama_index\embeddings\huggingface_optimum.py", line 38, in __init__
    from optimum.onnxruntime import ORTModelForFeatureExtraction
  File "C:\Users\yarha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\optimum\onnxruntime\__init__.py", line 18, in <module>
    from ..utils import is_diffusers_available
  File "C:\Users\yarha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\optimum\utils\__init__.py", line 44, in <module>
    from .input_generators import (
  File "C:\Users\yarha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\optimum\utils\input_generators.py", line 29, in <module>
    import torch
  File "C:\Users\yarha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\__init__.py", line 122, in <module>
    raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\yarha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\lib\cublas64_11.dll" or one of its dependencies.

Any idea what is happening here? cuBLAS is already being loaded and used successfully with LlamaCPP.
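A quick way to narrow this down, assuming the problem lives in the torch install itself rather than in optimum or llama_index, is to import torch on its own and look at what build it is. The checks below are a minimal diagnostic sketch, not something from the original post:

# Minimal diagnostic sketch (assumption: the failure is in torch's own CUDA DLLs).
# If this bare import raises the same WinError 127, optimum/llama_index are not the culprit.
import torch

print(torch.__version__)          # e.g. a CUDA (+cu118) build vs. a CPU-only wheel
print(torch.version.cuda)         # None would indicate a CPU-only install
print(torch.cuda.is_available())  # False if the CUDA runtime can't be initialized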
Oh no, Windows πŸ˜ͺ
I was never able to get LlamaCPP working on my Windows machine.
Update - got this working by loading the embed model first. I don't know if it's an issue with trying to load the same DLL twice, or a slightly different cuBLAS version, but LlamaCPP apparently handles that situation better.
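For reference, a minimal sketch of the ordering that worked, written against the pre-0.10 llama_index import layout visible in the traceback above; the model file names and data paths are placeholders, not details from the original post:

# Sketch of the working order, assuming the pre-0.10 llama_index API seen in the traceback.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import OptimumEmbedding
from llama_index.llms import LlamaCPP

# 1) Load the local embedding model FIRST, so torch loads its cuBLAS DLL before llama.cpp does.
#    If ./bge_onnx doesn't exist yet, the docs create it with something like:
#    OptimumEmbedding.create_and_save_optimum_model("BAAI/bge-small-en-v1.5", "./bge_onnx")
embed_model = OptimumEmbedding(folder_name="./bge_onnx")

# 2) Only then construct the LlamaCPP LLM (placeholder model path).
llm = LlamaCPP(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

# 3) Wire both into a service context and build the index as in the tutorial.
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)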